Time series / date functionality#
pandas contains extensive capabilities and features for working with time series data for all domains.
Using the NumPy datetime64
and timedelta64
dtypes, pandas has consolidated a large number of
features from other Python libraries like scikits.timeseries
as well as created
a tremendous amount of new functionality for manipulating time series data.
For example, pandas supports:
Parsing time series information from various sources and formats
In [1]: import datetime
In [2]: dti = pd.to_datetime(
...: ["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
...: )
...:
In [3]: dti
Out[3]: DatetimeIndex(['2018-01-01', '2018-01-01', '2018-01-01'], dtype='datetime64[ns]', freq=None)
Generate sequences of fixed-frequency dates and time spans
In [4]: dti = pd.date_range("2018-01-01", periods=3, freq="H")
In [5]: dti
Out[5]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 01:00:00',
'2018-01-01 02:00:00'],
dtype='datetime64[ns]', freq='H')
Manipulating and converting date times with timezone information
In [6]: dti = dti.tz_localize("UTC")
In [7]: dti
Out[7]:
DatetimeIndex(['2018-01-01 00:00:00+00:00', '2018-01-01 01:00:00+00:00',
'2018-01-01 02:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='H')
In [8]: dti.tz_convert("US/Pacific")
Out[8]:
DatetimeIndex(['2017-12-31 16:00:00-08:00', '2017-12-31 17:00:00-08:00',
'2017-12-31 18:00:00-08:00'],
dtype='datetime64[ns, US/Pacific]', freq='H')
Resampling or converting a time series to a particular frequency
In [9]: idx = pd.date_range("2018-01-01", periods=5, freq="H")
In [10]: ts = pd.Series(range(len(idx)), index=idx)
In [11]: ts
Out[11]:
2018-01-01 00:00:00 0
2018-01-01 01:00:00 1
2018-01-01 02:00:00 2
2018-01-01 03:00:00 3
2018-01-01 04:00:00 4
Freq: H, dtype: int64
In [12]: ts.resample("2H").mean()
Out[12]:
2018-01-01 00:00:00 0.5
2018-01-01 02:00:00 2.5
2018-01-01 04:00:00 4.0
Freq: 2H, dtype: float64
Performing date and time arithmetic with absolute or relative time increments
In [13]: friday = pd.Timestamp("2018-01-05")
In [14]: friday.day_name()
Out[14]: 'Friday'
# Add 1 day
In [15]: saturday = friday + pd.Timedelta("1 day")
In [16]: saturday.day_name()
Out[16]: 'Saturday'
# Add 1 business day (Friday --> Monday)
In [17]: monday = friday + pd.offsets.BDay()
In [18]: monday.day_name()
Out[18]: 'Monday'
pandas provides a relatively compact and self-contained set of tools for performing the above tasks and more.
Overview#
pandas captures 4 general time related concepts:
- Date times: A specific date and time with timezone support. Similar to datetime.datetime from the standard library.
- Time deltas: An absolute time duration. Similar to datetime.timedelta from the standard library.
- Time spans: A span of time defined by a point in time and its associated frequency.
- Date offsets: A relative time duration that respects calendar arithmetic. Similar to dateutil.relativedelta.relativedelta from the dateutil package.
| Concept | Scalar Class | Array Class | pandas Data Type | Primary Creation Method |
|---|---|---|---|---|
| Date times | Timestamp | DatetimeIndex | datetime64[ns] or datetime64[ns, tz] | to_datetime or date_range |
| Time deltas | Timedelta | TimedeltaIndex | timedelta64[ns] | to_timedelta or timedelta_range |
| Time spans | Period | PeriodIndex | period[freq] | Period or period_range |
| Date offsets | DateOffset | None | None | DateOffset |
For time series data, it’s conventional to represent the time component in the index of a Series
or DataFrame
so manipulations can be performed with respect to the time element.
In [19]: pd.Series(range(3), index=pd.date_range("2000", freq="D", periods=3))
Out[19]:
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64
However, Series and DataFrame can also directly support the time component as data itself.
In [20]: pd.Series(pd.date_range("2000", freq="D", periods=3))
Out[20]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
dtype: datetime64[ns]
Series and DataFrame have extended data type support and functionality for datetime, timedelta and Period data when passed into those constructors. DateOffset data however will be stored as object data.
In [21]: pd.Series(pd.period_range("1/1/2011", freq="M", periods=3))
Out[21]:
0 2011-01
1 2011-02
2 2011-03
dtype: period[M]
In [22]: pd.Series([pd.DateOffset(1), pd.DateOffset(2)])
Out[22]:
0 <DateOffset>
1 <2 * DateOffsets>
dtype: object
In [23]: pd.Series(pd.date_range("1/1/2011", freq="M", periods=3))
Out[23]:
0 2011-01-31
1 2011-02-28
2 2011-03-31
dtype: datetime64[ns]
Lastly, pandas represents null date times, time deltas, and time spans as NaT, which is useful for representing missing or null date-like values and behaves similarly to how np.nan does for float data.
In [24]: pd.Timestamp(pd.NaT)
Out[24]: NaT
In [25]: pd.Timedelta(pd.NaT)
Out[25]: NaT
In [26]: pd.Period(pd.NaT)
Out[26]: NaT
# Equality acts as np.nan would
In [27]: pd.NaT == pd.NaT
Out[27]: False
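Because equality comparisons with NaT always return False, use pd.isna to detect missing datetime values; a minimal sketch:

# pd.isna is the reliable way to detect NaT (equality always returns False)
pd.isna(pd.NaT)                        # True
pd.isna(pd.Timestamp("2018-01-01"))    # False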
Timestamps vs. time spans#
Timestamped data is the most basic type of time series data that associates values with points in time. For pandas objects it means using Timestamp objects to represent these points in time.
In [28]: import datetime
In [29]: pd.Timestamp(datetime.datetime(2012, 5, 1))
Out[29]: Timestamp('2012-05-01 00:00:00')
In [30]: pd.Timestamp("2012-05-01")
Out[30]: Timestamp('2012-05-01 00:00:00')
In [31]: pd.Timestamp(2012, 5, 1)
Out[31]: Timestamp('2012-05-01 00:00:00')
However, in many cases it is more natural to associate things like change variables with a time span instead. The span represented by Period can be specified explicitly, or inferred from the datetime string format.
For example:
In [32]: pd.Period("2011-01")
Out[32]: Period('2011-01', 'M')
In [33]: pd.Period("2012-05", freq="D")
Out[33]: Period('2012-05-01', 'D')
Timestamp and Period can serve as an index. Lists of Timestamp and Period are automatically coerced to DatetimeIndex and PeriodIndex respectively.
In [34]: dates = [
....: pd.Timestamp("2012-05-01"),
....: pd.Timestamp("2012-05-02"),
....: pd.Timestamp("2012-05-03"),
....: ]
....:
In [35]: ts = pd.Series(np.random.randn(3), dates)
In [36]: type(ts.index)
Out[36]: pandas.core.indexes.datetimes.DatetimeIndex
In [37]: ts.index
Out[37]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In [38]: ts
Out[38]:
2012-05-01 0.469112
2012-05-02 -0.282863
2012-05-03 -1.509059
dtype: float64
In [39]: periods = [pd.Period("2012-01"), pd.Period("2012-02"), pd.Period("2012-03")]
In [40]: ts = pd.Series(np.random.randn(3), periods)
In [41]: type(ts.index)
Out[41]: pandas.core.indexes.period.PeriodIndex
In [42]: ts.index
Out[42]: PeriodIndex(['2012-01', '2012-02', '2012-03'], dtype='period[M]')
In [43]: ts
Out[43]:
2012-01 -1.135632
2012-02 1.212112
2012-03 -0.173215
Freq: M, dtype: float64
pandas allows you to capture both representations and
convert between them. Under the hood, pandas represents timestamps using
instances of Timestamp
and sequences of timestamps using instances of
DatetimeIndex
. For regular time spans, pandas uses Period
objects for
scalar values and PeriodIndex
for sequences of spans. Better support for
irregular intervals with arbitrary start and end points is forthcoming in
future releases.
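As a brief sketch of converting between the two representations with Timestamp.to_period and Period.to_timestamp:

# Round-trip between a point in time and the span containing it
ts = pd.Timestamp("2012-05-01")
p = ts.to_period("M")     # Period('2012-05', 'M')
p.to_timestamp()          # Timestamp('2012-05-01 00:00:00'), the span's start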
Converting to timestamps#
To convert a Series or list-like object of date-like objects e.g. strings, epochs, or a mixture, you can use the to_datetime function. When passed a Series, this returns a Series (with the same index), while a list-like is converted to a DatetimeIndex:
In [44]: pd.to_datetime(pd.Series(["Jul 31, 2009", "Jan 10, 2010", None]))
Out[44]:
0 2009-07-31
1 2010-01-10
2 NaT
dtype: datetime64[ns]
In [45]: pd.to_datetime(["2005/11/23", "2010/12/31"])
Out[45]: DatetimeIndex(['2005-11-23', '2010-12-31'], dtype='datetime64[ns]', freq=None)
If you use dates which start with the day first (i.e. European style),
you can pass the dayfirst
flag:
In [46]: pd.to_datetime(["04-01-2012 10:00"], dayfirst=True)
Out[46]: DatetimeIndex(['2012-01-04 10:00:00'], dtype='datetime64[ns]', freq=None)
In [47]: pd.to_datetime(["04-14-2012 10:00"], dayfirst=True)
Out[47]: DatetimeIndex(['2012-04-14 10:00:00'], dtype='datetime64[ns]', freq=None)
Warning
You see in the above example that dayfirst
isn’t strict. If a date
can’t be parsed with the day being first it will be parsed as if
dayfirst
were False
and a warning will also be raised.
If you pass a single string to to_datetime, it returns a single Timestamp. Timestamp can also accept string input, but it doesn't accept string parsing options like dayfirst or format, so use to_datetime if these are required.
In [48]: pd.to_datetime("2010/11/12")
Out[48]: Timestamp('2010-11-12 00:00:00')
In [49]: pd.Timestamp("2010/11/12")
Out[49]: Timestamp('2010-11-12 00:00:00')
You can also use the DatetimeIndex
constructor directly:
In [50]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"])
Out[50]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq=None)
The string ‘infer’ can be passed in order to set the frequency of the index as the inferred frequency upon creation:
In [51]: pd.DatetimeIndex(["2018-01-01", "2018-01-03", "2018-01-05"], freq="infer")
Out[51]: DatetimeIndex(['2018-01-01', '2018-01-03', '2018-01-05'], dtype='datetime64[ns]', freq='2D')
Providing a format argument#
In addition to the required datetime string, a format
argument can be passed to ensure specific parsing.
This could also potentially speed up the conversion considerably.
In [52]: pd.to_datetime("2010/11/12", format="%Y/%m/%d")
Out[52]: Timestamp('2010-11-12 00:00:00')
In [53]: pd.to_datetime("12-11-2010 00:00", format="%d-%m-%Y %H:%M")
Out[53]: Timestamp('2010-11-12 00:00:00')
For more information on the choices available when specifying the format
option, see the Python datetime documentation.
Assembling datetime from multiple DataFrame columns#
You can also pass a DataFrame of integer or string columns to assemble into a Series of Timestamps.
In [54]: df = pd.DataFrame(
....: {"year": [2015, 2016], "month": [2, 3], "day": [4, 5], "hour": [2, 3]}
....: )
....:
In [55]: pd.to_datetime(df)
Out[55]:
0 2015-02-04 02:00:00
1 2016-03-05 03:00:00
dtype: datetime64[ns]
You can pass only the columns that you need to assemble.
In [56]: pd.to_datetime(df[["year", "month", "day"]])
Out[56]:
0 2015-02-04
1 2016-03-05
dtype: datetime64[ns]
pd.to_datetime looks for standard designations of the datetime component in the column names, including:

- required: year, month, day
- optional: hour, minute, second, millisecond, microsecond, nanosecond
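For instance, optional components can be supplied alongside the required ones; a small sketch with made-up values:

# Optional components (here minute and second) default to 0 when omitted
df2 = pd.DataFrame(
    {"year": [2015], "month": [2], "day": [4], "minute": [30], "second": [15]}
)
pd.to_datetime(df2)   # 0   2015-02-04 00:30:15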
Invalid data#
The default behavior, errors='raise', is to raise when unparsable:
In [57]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[57], line 1
----> 1 pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
File ~/work/pandas/pandas/pandas/core/tools/datetimes.py:1144, in to_datetime(arg, errors, dayfirst, yearfirst, utc, format, exact, unit, infer_datetime_format, origin, cache)
1142 result = _convert_and_box_cache(argc, cache_array)
1143 else:
-> 1144 result = convert_listlike(argc, format)
1145 else:
1146 result = convert_listlike(np.array([arg]), format)[0]
File ~/work/pandas/pandas/pandas/core/tools/datetimes.py:488, in _convert_listlike_datetimes(arg, format, name, utc, unit, errors, dayfirst, yearfirst, exact)
486 # `format` could be inferred, or user didn't ask for mixed-format parsing.
487 if format is not None and format != "mixed":
--> 488 return _array_strptime_with_fallback(arg, name, utc, format, exact, errors)
490 result, tz_parsed = objects_to_datetime64ns(
491 arg,
492 dayfirst=dayfirst,
(...)
496 allow_object=True,
497 )
499 if tz_parsed is not None:
500 # We can take a shortcut since the datetime64 numpy array
501 # is in UTC
File ~/work/pandas/pandas/pandas/core/tools/datetimes.py:519, in _array_strptime_with_fallback(arg, name, utc, fmt, exact, errors)
508 def _array_strptime_with_fallback(
509 arg,
510 name,
(...)
514 errors: str,
515 ) -> Index:
516 """
517 Call array_strptime, with fallback behavior depending on 'errors'.
518 """
--> 519 result, timezones = array_strptime(arg, fmt, exact=exact, errors=errors, utc=utc)
520 if any(tz is not None for tz in timezones):
521 return _return_parsed_timezone_results(result, timezones, utc, name)
File strptime.pyx:534, in pandas._libs.tslibs.strptime.array_strptime()
File strptime.pyx:355, in pandas._libs.tslibs.strptime.array_strptime()
ValueError: time data "asd" doesn't match format "%Y/%m/%d", at position 1. You might want to try:
- passing `format` if your strings have a consistent format;
- passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;
- passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.
Pass errors='ignore'
to return the original input when unparsable:
In [58]: pd.to_datetime(["2009/07/31", "asd"], errors="ignore")
Out[58]: Index(['2009/07/31', 'asd'], dtype='object')
Pass errors='coerce'
to convert unparsable data to NaT
(not a time):
In [59]: pd.to_datetime(["2009/07/31", "asd"], errors="coerce")
Out[59]: DatetimeIndex(['2009-07-31', 'NaT'], dtype='datetime64[ns]', freq=None)
Epoch timestamps#
pandas supports converting integer or float epoch times to Timestamp
and
DatetimeIndex
. The default unit is nanoseconds, since that is how Timestamp
objects are stored internally. However, epochs are often stored in another unit
which can be specified. These are computed from the starting point specified by the
origin
parameter.
In [60]: pd.to_datetime(
....: [1349720105, 1349806505, 1349892905, 1349979305, 1350065705], unit="s"
....: )
....:
Out[60]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05',
'2012-10-12 18:15:05'],
dtype='datetime64[ns]', freq=None)
In [61]: pd.to_datetime(
....: [1349720105100, 1349720105200, 1349720105300, 1349720105400, 1349720105500],
....: unit="ms",
....: )
....:
Out[61]:
DatetimeIndex(['2012-10-08 18:15:05.100000', '2012-10-08 18:15:05.200000',
'2012-10-08 18:15:05.300000', '2012-10-08 18:15:05.400000',
'2012-10-08 18:15:05.500000'],
dtype='datetime64[ns]', freq=None)
Note
The unit parameter does not use the same strings as the format parameter discussed above. The available units are listed in the documentation for pandas.to_datetime().
Constructing a Timestamp
or DatetimeIndex
with an epoch timestamp
with the tz
argument specified will raise a ValueError. If you have
epochs in wall time in another timezone, you can read the epochs
as timezone-naive timestamps and then localize to the appropriate timezone:
In [62]: pd.Timestamp(1262347200000000000).tz_localize("US/Pacific")
Out[62]: Timestamp('2010-01-01 12:00:00-0800', tz='US/Pacific')
In [63]: pd.DatetimeIndex([1262347200000000000]).tz_localize("US/Pacific")
Out[63]: DatetimeIndex(['2010-01-01 12:00:00-08:00'], dtype='datetime64[ns, US/Pacific]', freq=None)
Note
Epoch times will be rounded to the nearest nanosecond.
Warning
Conversion of float epoch times can lead to inaccurate and unexpected results.
Python floats have about 15 digits precision in
decimal. Rounding during conversion from float to high precision Timestamp
is
unavoidable. The only way to achieve exact precision is to use fixed-width
types (e.g. an int64).
In [64]: pd.to_datetime([1490195805.433, 1490195805.433502912], unit="s")
Out[64]: DatetimeIndex(['2017-03-22 15:16:45.433000088', '2017-03-22 15:16:45.433502913'], dtype='datetime64[ns]', freq=None)
In [65]: pd.to_datetime(1490195805433502912, unit="ns")
Out[65]: Timestamp('2017-03-22 15:16:45.433502912')
From timestamps to epoch#
To invert the operation from above, namely, to convert from a Timestamp
to a ‘unix’ epoch:
In [66]: stamps = pd.date_range("2012-10-08 18:15:05", periods=4, freq="D")
In [67]: stamps
Out[67]:
DatetimeIndex(['2012-10-08 18:15:05', '2012-10-09 18:15:05',
'2012-10-10 18:15:05', '2012-10-11 18:15:05'],
dtype='datetime64[ns]', freq='D')
We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by the “unit” (1 second).
In [68]: (stamps - pd.Timestamp("1970-01-01")) // pd.Timedelta("1s")
Out[68]: Index([1349720105, 1349806505, 1349892905, 1349979305], dtype='int64')
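If the stamps are timezone-aware, the same idea applies, but the epoch must be timezone-aware as well; a sketch assuming UTC stamps:

# For tz-aware timestamps, subtract a tz-aware epoch
stamps_utc = stamps.tz_localize("UTC")
(stamps_utc - pd.Timestamp("1970-01-01", tz="UTC")) // pd.Timedelta("1s")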
Using the origin parameter#
Using the origin
parameter, one can specify an alternative starting point for creation
of a DatetimeIndex
. For example, to use 1960-01-01 as the starting date:
In [69]: pd.to_datetime([1, 2, 3], unit="D", origin=pd.Timestamp("1960-01-01"))
Out[69]: DatetimeIndex(['1960-01-02', '1960-01-03', '1960-01-04'], dtype='datetime64[ns]', freq=None)
The default is set at origin='unix', which defaults to 1970-01-01 00:00:00. This is commonly called 'unix epoch' or POSIX time.
In [70]: pd.to_datetime([1, 2, 3], unit="D")
Out[70]: DatetimeIndex(['1970-01-02', '1970-01-03', '1970-01-04'], dtype='datetime64[ns]', freq=None)
Generating ranges of timestamps#
To generate an index with timestamps, you can use either the DatetimeIndex
or
Index
constructor and pass in a list of datetime objects:
In [71]: dates = [
....: datetime.datetime(2012, 5, 1),
....: datetime.datetime(2012, 5, 2),
....: datetime.datetime(2012, 5, 3),
....: ]
....:
# Note the frequency information
In [72]: index = pd.DatetimeIndex(dates)
In [73]: index
Out[73]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
# Automatically converted to DatetimeIndex
In [74]: index = pd.Index(dates)
In [75]: index
Out[75]: DatetimeIndex(['2012-05-01', '2012-05-02', '2012-05-03'], dtype='datetime64[ns]', freq=None)
In practice this becomes very cumbersome because we often need a very long
index with a large number of timestamps. If we need timestamps on a regular
frequency, we can use the date_range()
and bdate_range()
functions
to create a DatetimeIndex
. The default frequency for date_range
is a
calendar day while the default for bdate_range
is a business day:
In [76]: start = datetime.datetime(2011, 1, 1)
In [77]: end = datetime.datetime(2012, 1, 1)
In [78]: index = pd.date_range(start, end)
In [79]: index
Out[79]:
DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03', '2011-01-04',
'2011-01-05', '2011-01-06', '2011-01-07', '2011-01-08',
'2011-01-09', '2011-01-10',
...
'2011-12-23', '2011-12-24', '2011-12-25', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30',
'2011-12-31', '2012-01-01'],
dtype='datetime64[ns]', length=366, freq='D')
In [80]: index = pd.bdate_range(start, end)
In [81]: index
Out[81]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14',
...
'2011-12-19', '2011-12-20', '2011-12-21', '2011-12-22',
'2011-12-23', '2011-12-26', '2011-12-27', '2011-12-28',
'2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', length=260, freq='B')
Convenience functions like date_range
and bdate_range
can utilize a
variety of frequency aliases:
In [82]: pd.date_range(start, periods=1000, freq="M")
Out[82]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-30',
'2011-05-31', '2011-06-30', '2011-07-31', '2011-08-31',
'2011-09-30', '2011-10-31',
...
'2093-07-31', '2093-08-31', '2093-09-30', '2093-10-31',
'2093-11-30', '2093-12-31', '2094-01-31', '2094-02-28',
'2094-03-31', '2094-04-30'],
dtype='datetime64[ns]', length=1000, freq='M')
In [83]: pd.bdate_range(start, periods=250, freq="BQS")
Out[83]:
DatetimeIndex(['2011-01-03', '2011-04-01', '2011-07-01', '2011-10-03',
'2012-01-02', '2012-04-02', '2012-07-02', '2012-10-01',
'2013-01-01', '2013-04-01',
...
'2071-01-01', '2071-04-01', '2071-07-01', '2071-10-01',
'2072-01-01', '2072-04-01', '2072-07-01', '2072-10-03',
'2073-01-02', '2073-04-03'],
dtype='datetime64[ns]', length=250, freq='BQS-JAN')
date_range
and bdate_range
make it easy to generate a range of dates
using various combinations of parameters like start
, end
, periods
,
and freq
. The start and end dates are strictly inclusive, so dates outside
of those specified will not be generated:
In [84]: pd.date_range(start, end, freq="BM")
Out[84]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [85]: pd.date_range(start, end, freq="W")
Out[85]:
DatetimeIndex(['2011-01-02', '2011-01-09', '2011-01-16', '2011-01-23',
'2011-01-30', '2011-02-06', '2011-02-13', '2011-02-20',
'2011-02-27', '2011-03-06', '2011-03-13', '2011-03-20',
'2011-03-27', '2011-04-03', '2011-04-10', '2011-04-17',
'2011-04-24', '2011-05-01', '2011-05-08', '2011-05-15',
'2011-05-22', '2011-05-29', '2011-06-05', '2011-06-12',
'2011-06-19', '2011-06-26', '2011-07-03', '2011-07-10',
'2011-07-17', '2011-07-24', '2011-07-31', '2011-08-07',
'2011-08-14', '2011-08-21', '2011-08-28', '2011-09-04',
'2011-09-11', '2011-09-18', '2011-09-25', '2011-10-02',
'2011-10-09', '2011-10-16', '2011-10-23', '2011-10-30',
'2011-11-06', '2011-11-13', '2011-11-20', '2011-11-27',
'2011-12-04', '2011-12-11', '2011-12-18', '2011-12-25',
'2012-01-01'],
dtype='datetime64[ns]', freq='W-SUN')
In [86]: pd.bdate_range(end=end, periods=20)
Out[86]:
DatetimeIndex(['2011-12-05', '2011-12-06', '2011-12-07', '2011-12-08',
'2011-12-09', '2011-12-12', '2011-12-13', '2011-12-14',
'2011-12-15', '2011-12-16', '2011-12-19', '2011-12-20',
'2011-12-21', '2011-12-22', '2011-12-23', '2011-12-26',
'2011-12-27', '2011-12-28', '2011-12-29', '2011-12-30'],
dtype='datetime64[ns]', freq='B')
In [87]: pd.bdate_range(start=start, periods=20)
Out[87]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07', '2011-01-10', '2011-01-11', '2011-01-12',
'2011-01-13', '2011-01-14', '2011-01-17', '2011-01-18',
'2011-01-19', '2011-01-20', '2011-01-21', '2011-01-24',
'2011-01-25', '2011-01-26', '2011-01-27', '2011-01-28'],
dtype='datetime64[ns]', freq='B')
Specifying start
, end
, and periods
will generate a range of evenly spaced
dates from start
to end
inclusively, with periods
number of elements in the
resulting DatetimeIndex
:
In [88]: pd.date_range("2018-01-01", "2018-01-05", periods=5)
Out[88]:
DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04',
'2018-01-05'],
dtype='datetime64[ns]', freq=None)
In [89]: pd.date_range("2018-01-01", "2018-01-05", periods=10)
Out[89]:
DatetimeIndex(['2018-01-01 00:00:00', '2018-01-01 10:40:00',
'2018-01-01 21:20:00', '2018-01-02 08:00:00',
'2018-01-02 18:40:00', '2018-01-03 05:20:00',
'2018-01-03 16:00:00', '2018-01-04 02:40:00',
'2018-01-04 13:20:00', '2018-01-05 00:00:00'],
dtype='datetime64[ns]', freq=None)
Custom frequency ranges#
bdate_range
can also generate a range of custom frequency dates by using
the weekmask
and holidays
parameters. These parameters will only be
used if a custom frequency string is passed.
In [90]: weekmask = "Mon Wed Fri"
In [91]: holidays = [datetime.datetime(2011, 1, 5), datetime.datetime(2011, 3, 14)]
In [92]: pd.bdate_range(start, end, freq="C", weekmask=weekmask, holidays=holidays)
Out[92]:
DatetimeIndex(['2011-01-03', '2011-01-07', '2011-01-10', '2011-01-12',
'2011-01-14', '2011-01-17', '2011-01-19', '2011-01-21',
'2011-01-24', '2011-01-26',
...
'2011-12-09', '2011-12-12', '2011-12-14', '2011-12-16',
'2011-12-19', '2011-12-21', '2011-12-23', '2011-12-26',
'2011-12-28', '2011-12-30'],
dtype='datetime64[ns]', length=154, freq='C')
In [93]: pd.bdate_range(start, end, freq="CBMS", weekmask=weekmask)
Out[93]:
DatetimeIndex(['2011-01-03', '2011-02-02', '2011-03-02', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-02', '2011-10-03', '2011-11-02', '2011-12-02'],
dtype='datetime64[ns]', freq='CBMS')
Timestamp limitations#
The limits of timestamp representation depend on the chosen resolution. For nanosecond resolution, the time span that can be represented using a 64-bit integer is limited to approximately 584 years:
In [94]: pd.Timestamp.min
Out[94]: Timestamp('1677-09-21 00:12:43.145224193')
In [95]: pd.Timestamp.max
Out[95]: Timestamp('2262-04-11 23:47:16.854775807')
When choosing second resolution, the available range grows to approximately +/- 2.9e11 years. Different resolutions can be converted to each other through as_unit.
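For example, a minimal sketch of converting a Timestamp between resolutions (as_unit and the unit attribute assume pandas 2.0 or later):

# Convert a nanosecond-resolution Timestamp to second resolution
ts = pd.Timestamp("2011-01-01 12:34:56")
ts.unit                  # 'ns' by default
ts.as_unit("s").unit     # 's'; the representable range grows accordingly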
Indexing#
One of the main uses for DatetimeIndex
is as an index for pandas objects.
The DatetimeIndex
class contains many time series related optimizations:
- A large range of dates for various offsets are pre-computed and cached under the hood in order to make generating subsequent date ranges very fast (just have to grab a slice).
- Fast shifting using the shift method on pandas objects.
- Unioning of overlapping DatetimeIndex objects with the same frequency is very fast (important for fast data alignment).
- Quick access to date fields via properties such as year, month, etc.
- Regularization functions like snap and very fast asof logic.
DatetimeIndex
objects have all the basic functionality of regular Index
objects, and a smorgasbord of advanced time series specific methods for easy
frequency processing.
Note
While pandas does not force you to have a sorted date index, some of these methods may have unexpected or incorrect behavior if the dates are unsorted.
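A minimal sketch of guarding against this by sorting the index first:

# Partial string slicing is only reliable on a sorted DatetimeIndex
s = pd.Series([1, 2], index=pd.to_datetime(["2011-02-01", "2011-01-01"]))
s = s.sort_index()   # sort before label or partial-string selection
s["2011-01"]         # now selects the January entry as expected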
DatetimeIndex
can be used like a regular index and offers all of its
intelligent functionality like selection, slicing, etc.
In [96]: rng = pd.date_range(start, end, freq="BM")
In [97]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [98]: ts.index
Out[98]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31', '2011-06-30', '2011-07-29', '2011-08-31',
'2011-09-30', '2011-10-31', '2011-11-30', '2011-12-30'],
dtype='datetime64[ns]', freq='BM')
In [99]: ts[:5].index
Out[99]:
DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31', '2011-04-29',
'2011-05-31'],
dtype='datetime64[ns]', freq='BM')
In [100]: ts[::2].index
Out[100]:
DatetimeIndex(['2011-01-31', '2011-03-31', '2011-05-31', '2011-07-29',
'2011-09-30', '2011-11-30'],
dtype='datetime64[ns]', freq='2BM')
Partial string indexing#
Dates and strings that parse to timestamps can be passed as indexing parameters:
In [101]: ts["1/31/2011"]
Out[101]: 0.11920871129693428
In [102]: ts[datetime.datetime(2011, 12, 25):]
Out[102]:
2011-12-30 0.56702
Freq: BM, dtype: float64
In [103]: ts["10/31/2011":"12/31/2011"]
Out[103]:
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
To provide convenience for accessing longer time series, you can also pass in the year or year and month as strings:
In [104]: ts["2011"]
Out[104]:
2011-01-31 0.119209
2011-02-28 -1.044236
2011-03-31 -0.861849
2011-04-29 -2.104569
2011-05-31 -0.494929
2011-06-30 1.071804
2011-07-29 0.721555
2011-08-31 -0.706771
2011-09-30 -1.039575
2011-10-31 0.271860
2011-11-30 -0.424972
2011-12-30 0.567020
Freq: BM, dtype: float64
In [105]: ts["2011-6"]
Out[105]:
2011-06-30 1.071804
Freq: BM, dtype: float64
This type of slicing will work on a DataFrame
with a DatetimeIndex
as well. Since the
partial string selection is a form of label slicing, the endpoints will be included. This
would include matching times on an included date:
Warning
Indexing DataFrame
rows with a single string with getitem (e.g. frame[dtstring]
)
is deprecated starting with pandas 1.2.0 (given the ambiguity whether it is indexing
the rows or selecting a column) and will be removed in a future version. The equivalent
with .loc
(e.g. frame.loc[dtstring]
) is still supported.
In [106]: dft = pd.DataFrame(
.....: np.random.randn(100000, 1),
.....: columns=["A"],
.....: index=pd.date_range("20130101", periods=100000, freq="T"),
.....: )
.....:
In [107]: dft
Out[107]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
In [108]: dft.loc["2013"]
Out[108]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-03-11 10:35:00 -0.747967
2013-03-11 10:36:00 -0.034523
2013-03-11 10:37:00 -0.201754
2013-03-11 10:38:00 -1.509067
2013-03-11 10:39:00 -1.693043
[100000 rows x 1 columns]
This starts on the very first time in the month, and includes the last date and time for the month:
In [109]: dft["2013-1":"2013-2"]
Out[109]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies a stop time that includes all of the times on the last day:
In [110]: dft["2013-1":"2013-2-28"]
Out[110]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-28 23:55:00 0.850929
2013-02-28 23:56:00 0.976712
2013-02-28 23:57:00 -2.693884
2013-02-28 23:58:00 -1.575535
2013-02-28 23:59:00 -1.573517
[84960 rows x 1 columns]
This specifies an exact stop time (and is not the same as the above):
In [111]: dft["2013-1":"2013-2-28 00:00:00"]
Out[111]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
We are stopping on the included end-point as it is part of the index:
In [112]: dft["2013-1-15":"2013-1-15 12:30:00"]
Out[112]:
A
2013-01-15 00:00:00 -0.984810
2013-01-15 00:01:00 0.941451
2013-01-15 00:02:00 1.559365
2013-01-15 00:03:00 1.034374
2013-01-15 00:04:00 -1.480656
... ...
2013-01-15 12:26:00 0.371454
2013-01-15 12:27:00 -0.930806
2013-01-15 12:28:00 -0.069177
2013-01-15 12:29:00 0.066510
2013-01-15 12:30:00 -0.003945
[751 rows x 1 columns]
DatetimeIndex partial string indexing also works on a DataFrame with a MultiIndex:
In [113]: dft2 = pd.DataFrame(
.....: np.random.randn(20, 1),
.....: columns=["A"],
.....: index=pd.MultiIndex.from_product(
.....: [pd.date_range("20130101", periods=10, freq="12H"), ["a", "b"]]
.....: ),
.....: )
.....:
In [114]: dft2
Out[114]:
A
2013-01-01 00:00:00 a -0.298694
b 0.823553
2013-01-01 12:00:00 a 0.943285
b -1.479399
2013-01-02 00:00:00 a -1.643342
... ...
2013-01-04 12:00:00 b 0.069036
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
[20 rows x 1 columns]
In [115]: dft2.loc["2013-01-05"]
Out[115]:
A
2013-01-05 00:00:00 a 0.122297
b 1.422060
2013-01-05 12:00:00 a 0.370079
b 1.016331
In [116]: idx = pd.IndexSlice
In [117]: dft2 = dft2.swaplevel(0, 1).sort_index()
In [118]: dft2.loc[idx[:, "2013-01-05"], :]
Out[118]:
A
a 2013-01-05 00:00:00 0.122297
2013-01-05 12:00:00 0.370079
b 2013-01-05 00:00:00 1.422060
2013-01-05 12:00:00 1.016331
Slicing with string indexing also honors UTC offset.
In [119]: df = pd.DataFrame([0], index=pd.DatetimeIndex(["2019-01-01"], tz="US/Pacific"))
In [120]: df
Out[120]:
0
2019-01-01 00:00:00-08:00 0
In [121]: df["2019-01-01 12:00:00+04:00":"2019-01-01 13:00:00+04:00"]
Out[121]:
0
2019-01-01 00:00:00-08:00 0
Slice vs. exact match#
The same string used as an indexing parameter can be treated either as a slice or as an exact match depending on the resolution of the index. If the string is less accurate than the index, it will be treated as a slice, otherwise as an exact match.
Consider a Series
object with a minute resolution index:
In [122]: series_minute = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:00", "2012-01-01 00:00:00", "2012-01-01 00:02:00"]
.....: ),
.....: )
.....:
In [123]: series_minute.index.resolution
Out[123]: 'minute'
A timestamp string less accurate than a minute gives a Series
object.
In [124]: series_minute["2011-12-31 23"]
Out[124]:
2011-12-31 23:59:00 1
dtype: int64
A timestamp string with minute resolution (or more accurate) gives a scalar instead, i.e. it is not cast to a slice.
In [125]: series_minute["2011-12-31 23:59"]
Out[125]: 1
In [126]: series_minute["2011-12-31 23:59:00"]
Out[126]: 1
If the index resolution is second, then the minute-accurate timestamp gives a Series.
In [127]: series_second = pd.Series(
.....: [1, 2, 3],
.....: pd.DatetimeIndex(
.....: ["2011-12-31 23:59:59", "2012-01-01 00:00:00", "2012-01-01 00:00:01"]
.....: ),
.....: )
.....:
In [128]: series_second.index.resolution
Out[128]: 'second'
In [129]: series_second["2011-12-31 23:59"]
Out[129]:
2011-12-31 23:59:59 1
dtype: int64
If the timestamp string is treated as a slice, it can be used to index a DataFrame with .loc[] as well.
In [130]: dft_minute = pd.DataFrame(
.....: {"a": [1, 2, 3], "b": [4, 5, 6]}, index=series_minute.index
.....: )
.....:
In [131]: dft_minute.loc["2011-12-31 23"]
Out[131]:
a b
2011-12-31 23:59:00 1 4
Warning
However, if the string is treated as an exact match, the selection in DataFrame's [] will be column-wise and not row-wise, see Indexing Basics. For example dft_minute['2011-12-31 23:59'] will raise KeyError as '2011-12-31 23:59' has the same resolution as the index and there is no column with such a name.
To always have unambiguous selection, whether the row is treated as a slice or a single selection, use .loc.
In [132]: dft_minute.loc["2011-12-31 23:59"]
Out[132]:
a 1
b 4
Name: 2011-12-31 23:59:00, dtype: int64
Note also that DatetimeIndex
resolution cannot be less precise than day.
In [133]: series_monthly = pd.Series(
.....: [1, 2, 3], pd.DatetimeIndex(["2011-12", "2012-01", "2012-02"])
.....: )
.....:
In [134]: series_monthly.index.resolution
Out[134]: 'day'
In [135]: series_monthly["2011-12"] # returns Series
Out[135]:
2011-12-01 1
dtype: int64
Exact indexing#
As discussed in the previous section, indexing a DatetimeIndex with a partial string depends on the “accuracy” of the period, in other words how specific the interval is in relation to the resolution of the index. In contrast, indexing with Timestamp or datetime objects is exact, because the objects have exact meaning. These also follow the semantics of including both endpoints.
These Timestamp
and datetime
objects have exact hours, minutes,
and seconds
, even though they were not explicitly specified (they are 0
).
In [136]: dft[datetime.datetime(2013, 1, 1): datetime.datetime(2013, 2, 28)]
Out[136]:
A
2013-01-01 00:00:00 0.276232
2013-01-01 00:01:00 -1.087401
2013-01-01 00:02:00 -0.673690
2013-01-01 00:03:00 0.113648
2013-01-01 00:04:00 -1.478427
... ...
2013-02-27 23:56:00 1.197749
2013-02-27 23:57:00 0.720521
2013-02-27 23:58:00 -0.072718
2013-02-27 23:59:00 -0.681192
2013-02-28 00:00:00 -0.557501
[83521 rows x 1 columns]
With all components specified explicitly, nothing is defaulted:
In [137]: dft[
.....: datetime.datetime(2013, 1, 1, 10, 12, 0): datetime.datetime(
.....: 2013, 2, 28, 10, 12, 0
.....: )
.....: ]
.....:
Out[137]:
A
2013-01-01 10:12:00 0.565375
2013-01-01 10:13:00 0.068184
2013-01-01 10:14:00 0.788871
2013-01-01 10:15:00 -0.280343
2013-01-01 10:16:00 0.931536
... ...
2013-02-28 10:08:00 0.148098
2013-02-28 10:09:00 -0.388138
2013-02-28 10:10:00 0.139348
2013-02-28 10:11:00 0.085288
2013-02-28 10:12:00 0.950146
[83521 rows x 1 columns]
Truncating & fancy indexing#
A truncate()
convenience function is provided that is similar
to slicing. Note that truncate
assumes a 0 value for any unspecified date
component in a DatetimeIndex
in contrast to slicing which returns any
partially matching dates:
In [138]: rng2 = pd.date_range("2011-01-01", "2012-01-01", freq="W")
In [139]: ts2 = pd.Series(np.random.randn(len(rng2)), index=rng2)
In [140]: ts2.truncate(before="2011-11", after="2011-12")
Out[140]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
Freq: W-SUN, dtype: float64
In [141]: ts2["2011-11":"2011-12"]
Out[141]:
2011-11-06 0.437823
2011-11-13 -0.293083
2011-11-20 -0.059881
2011-11-27 1.252450
2011-12-04 0.046611
2011-12-11 0.059478
2011-12-18 -0.286539
2011-12-25 0.841669
Freq: W-SUN, dtype: float64
Even complicated fancy indexing that breaks the DatetimeIndex
frequency
regularity will result in a DatetimeIndex
, although frequency is lost:
In [142]: ts2.iloc[[0, 2, 6]].index
Out[142]: DatetimeIndex(['2011-01-02', '2011-01-16', '2011-02-13'], dtype='datetime64[ns]', freq=None)
Time/date components#
There are several time/date properties that one can access from Timestamp or a collection of timestamps like a DatetimeIndex.
| Property | Description |
|---|---|
| year | The year of the datetime |
| month | The month of the datetime |
| day | The day of the datetime |
| hour | The hour of the datetime |
| minute | The minutes of the datetime |
| second | The seconds of the datetime |
| microsecond | The microseconds of the datetime |
| nanosecond | The nanoseconds of the datetime |
| date | Returns datetime.date (does not contain timezone information) |
| time | Returns datetime.time (does not contain timezone information) |
| timetz | Returns datetime.time as local time with timezone information |
| dayofyear | The ordinal day of year |
| day_of_year | The ordinal day of year |
| weekofyear | The week ordinal of the year |
| week | The week ordinal of the year |
| dayofweek | The number of the day of the week with Monday=0, Sunday=6 |
| day_of_week | The number of the day of the week with Monday=0, Sunday=6 |
| weekday | The number of the day of the week with Monday=0, Sunday=6 |
| quarter | Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc. |
| days_in_month | The number of days in the month of the datetime |
| is_month_start | Logical indicating if first day of month (defined by frequency) |
| is_month_end | Logical indicating if last day of month (defined by frequency) |
| is_quarter_start | Logical indicating if first day of quarter (defined by frequency) |
| is_quarter_end | Logical indicating if last day of quarter (defined by frequency) |
| is_year_start | Logical indicating if first day of year (defined by frequency) |
| is_year_end | Logical indicating if last day of year (defined by frequency) |
| is_leap_year | Logical indicating if the date belongs to a leap year |
Furthermore, if you have a Series
with datetimelike values, then you can
access these properties via the .dt
accessor, as detailed in the section
on .dt accessors.
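For instance, a brief sketch of the .dt accessor:

# Access datetime properties element-wise on a Series via .dt
s = pd.Series(pd.date_range("2000-01-01", periods=3, freq="D"))
s.dt.year         # 2000 for each element
s.dt.day_name()   # 'Saturday', 'Sunday', 'Monday'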
You may obtain the year, week and day components of the ISO year from the ISO 8601 standard:
In [143]: idx = pd.date_range(start="2019-12-29", freq="D", periods=4)
In [144]: idx.isocalendar()
Out[144]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
In [145]: idx.to_series().dt.isocalendar()
Out[145]:
year week day
2019-12-29 2019 52 7
2019-12-30 2020 1 1
2019-12-31 2020 1 2
2020-01-01 2020 1 3
DateOffset objects#
In the preceding examples, frequency strings (e.g. 'D') were used to specify a frequency that defined:

- how the date times in DatetimeIndex were spaced when using date_range()
- the frequency of a Period or PeriodIndex
These frequency strings map to a DateOffset
object and its subclasses. A DateOffset
is similar to a Timedelta
that represents a duration of time but follows specific calendar duration rules.
For example, a Timedelta
day will always increment datetimes
by 24 hours, while a DateOffset
day
will increment datetimes
to the same time the next day whether a day represents 23, 24 or 25 hours due to daylight
savings time. However, all DateOffset
subclasses that are an hour or smaller
(Hour
, Minute
, Second
, Milli
, Micro
, Nano
) behave like
Timedelta
and respect absolute time.
The basic DateOffset acts similarly to dateutil.relativedelta (relativedelta documentation) in that it shifts a date time by the corresponding calendar duration specified. The arithmetic operator (+) can be used to perform the shift.
# This particular day contains a daylight saving time transition
In [146]: ts = pd.Timestamp("2016-10-30 00:00:00", tz="Europe/Helsinki")
# Respects absolute time
In [147]: ts + pd.Timedelta(days=1)
Out[147]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')
# Respects calendar time
In [148]: ts + pd.DateOffset(days=1)
Out[148]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')
In [149]: friday = pd.Timestamp("2018-01-05")
In [150]: friday.day_name()
Out[150]: 'Friday'
# Add 2 business days (Friday --> Tuesday)
In [151]: two_business_days = 2 * pd.offsets.BDay()
In [152]: friday + two_business_days
Out[152]: Timestamp('2018-01-09 00:00:00')
In [153]: (friday + two_business_days).day_name()
Out[153]: 'Tuesday'
Most DateOffsets have associated frequency strings, or offset aliases, that can be passed into freq keyword arguments. The available date offsets and associated frequency strings can be found below:
| Date Offset | Frequency String | Description |
|---|---|---|
| DateOffset | None | Generic offset class, defaults to absolute 24 hours |
| BDay or BusinessDay | 'B' | business day (weekday) |
| CDay or CustomBusinessDay | 'C' | custom business day |
| Week | 'W' | one week, optionally anchored on a day of the week |
| WeekOfMonth | 'WOM' | the x-th day of the y-th week of each month |
| LastWeekOfMonth | 'LWOM' | the x-th day of the last week of each month |
| MonthEnd | 'M' | calendar month end |
| MonthBegin | 'MS' | calendar month begin |
| BMonthEnd or BusinessMonthEnd | 'BM' | business month end |
| BMonthBegin or BusinessMonthBegin | 'BMS' | business month begin |
| CBMonthEnd or CustomBusinessMonthEnd | 'CBM' | custom business month end |
| CBMonthBegin or CustomBusinessMonthBegin | 'CBMS' | custom business month begin |
| SemiMonthEnd | 'SM' | 15th (or other day_of_month) and calendar month end |
| SemiMonthBegin | 'SMS' | 15th (or other day_of_month) and calendar month begin |
| QuarterEnd | 'Q' | calendar quarter end |
| QuarterBegin | 'QS' | calendar quarter begin |
| BQuarterEnd | 'BQ' | business quarter end |
| BQuarterBegin | 'BQS' | business quarter begin |
| FY5253Quarter | 'REQ' | retail (aka 52-53 week) quarter |
| YearEnd | 'A' | calendar year end |
| YearBegin | 'AS' | calendar year begin |
| BYearEnd | 'BA' | business year end |
| BYearBegin | 'BAS' | business year begin |
| FY5253 | 'RE' | retail (aka 52-53 week) year |
| Easter | None | Easter holiday |
| BusinessHour | 'BH' | business hour |
| CustomBusinessHour | 'CBH' | custom business hour |
| Day | 'D' | one absolute day |
| Hour | 'H' | one hour |
| Minute | 'T' or 'min' | one minute |
| Second | 'S' | one second |
| Milli | 'L' or 'ms' | one millisecond |
| Micro | 'U' or 'us' | one microsecond |
| Nano | 'N' | one nanosecond |
DateOffsets
additionally have rollforward()
and rollback()
methods for moving a date forward or backward respectively to a valid offset
date relative to the offset. For example, business offsets will roll dates
that land on the weekends (Saturday and Sunday) forward to Monday since
business offsets operate on the weekdays.
In [154]: ts = pd.Timestamp("2018-01-06 00:00:00")
In [155]: ts.day_name()
Out[155]: 'Saturday'
# BusinessHour's valid offset dates are Monday through Friday
In [156]: offset = pd.offsets.BusinessHour(start="09:00")
# Bring the date to the closest offset date (Monday)
In [157]: offset.rollforward(ts)
Out[157]: Timestamp('2018-01-08 09:00:00')
# Date is brought to the closest offset date first and then the hour is added
In [158]: ts + offset
Out[158]: Timestamp('2018-01-08 10:00:00')
These operations preserve time (hour, minute, etc) information by default.
To reset time to midnight, use normalize()
before or after applying
the operation (depending on whether you want the time information included
in the operation).
In [159]: ts = pd.Timestamp("2014-01-01 09:00")
In [160]: day = pd.offsets.Day()
In [161]: day + ts
Out[161]: Timestamp('2014-01-02 09:00:00')
In [162]: (day + ts).normalize()
Out[162]: Timestamp('2014-01-02 00:00:00')
In [163]: ts = pd.Timestamp("2014-01-01 22:00")
In [164]: hour = pd.offsets.Hour()
In [165]: hour + ts
Out[165]: Timestamp('2014-01-01 23:00:00')
In [166]: (hour + ts).normalize()
Out[166]: Timestamp('2014-01-01 00:00:00')
In [167]: (hour + pd.Timestamp("2014-01-01 23:30")).normalize()
Out[167]: Timestamp('2014-01-02 00:00:00')
Parametric offsets#
Some of the offsets can be “parameterized” when created to result in different
behaviors. For example, the Week
offset for generating weekly data accepts a
weekday
parameter which results in the generated dates always lying on a
particular day of the week:
In [168]: d = datetime.datetime(2008, 8, 18, 9, 0)
In [169]: d
Out[169]: datetime.datetime(2008, 8, 18, 9, 0)
In [170]: d + pd.offsets.Week()
Out[170]: Timestamp('2008-08-25 09:00:00')
In [171]: d + pd.offsets.Week(weekday=4)
Out[171]: Timestamp('2008-08-22 09:00:00')
In [172]: (d + pd.offsets.Week(weekday=4)).weekday()
Out[172]: 4
In [173]: d - pd.offsets.Week()
Out[173]: Timestamp('2008-08-11 09:00:00')
The normalize
option will be effective for addition and subtraction.
In [174]: d + pd.offsets.Week(normalize=True)
Out[174]: Timestamp('2008-08-25 00:00:00')
In [175]: d - pd.offsets.Week(normalize=True)
Out[175]: Timestamp('2008-08-11 00:00:00')
Another example is parameterizing YearEnd
with the specific ending month:
In [176]: d + pd.offsets.YearEnd()
Out[176]: Timestamp('2008-12-31 09:00:00')
In [177]: d + pd.offsets.YearEnd(month=6)
Out[177]: Timestamp('2009-06-30 09:00:00')
Using offsets with Series / DatetimeIndex#
Offsets can be used with either a Series
or DatetimeIndex
to
apply the offset to each element.
In [178]: rng = pd.date_range("2012-01-01", "2012-01-03")
In [179]: s = pd.Series(rng)
In [180]: rng
Out[180]: DatetimeIndex(['2012-01-01', '2012-01-02', '2012-01-03'], dtype='datetime64[ns]', freq='D')
In [181]: rng + pd.DateOffset(months=2)
Out[181]: DatetimeIndex(['2012-03-01', '2012-03-02', '2012-03-03'], dtype='datetime64[ns]', freq=None)
In [182]: s + pd.DateOffset(months=2)
Out[182]:
0 2012-03-01
1 2012-03-02
2 2012-03-03
dtype: datetime64[ns]
In [183]: s - pd.DateOffset(months=2)
Out[183]:
0 2011-11-01
1 2011-11-02
2 2011-11-03
dtype: datetime64[ns]
If the offset class maps directly to a Timedelta
(Day
, Hour
,
Minute
, Second
, Micro
, Milli
, Nano
) it can be
used exactly like a Timedelta
- see the
Timedelta section for more examples.
In [184]: s - pd.offsets.Day(2)
Out[184]:
0 2011-12-30
1 2011-12-31
2 2012-01-01
dtype: datetime64[ns]
In [185]: td = s - pd.Series(pd.date_range("2011-12-29", "2011-12-31"))
In [186]: td
Out[186]:
0 3 days
1 3 days
2 3 days
dtype: timedelta64[ns]
In [187]: td + pd.offsets.Minute(15)
Out[187]:
0 3 days 00:15:00
1 3 days 00:15:00
2 3 days 00:15:00
dtype: timedelta64[ns]
Note that some offsets (such as BQuarterEnd) do not have a vectorized implementation. They can still be used but may be significantly slower to compute and will show a PerformanceWarning.
In [188]: rng + pd.offsets.BQuarterEnd()
Out[188]: DatetimeIndex(['2012-03-30', '2012-03-30', '2012-03-30'], dtype='datetime64[ns]', freq=None)
Custom business days#
The CDay
or CustomBusinessDay
class provides a parametric
BusinessDay
class which can be used to create customized business day
calendars which account for local holidays and local weekend conventions.
As an interesting example, let’s look at Egypt where a Friday-Saturday weekend is observed.
In [189]: weekmask_egypt = "Sun Mon Tue Wed Thu"
# They also observe International Workers' Day so let's
# add that for a couple of years
In [190]: holidays = [
.....: "2012-05-01",
.....: datetime.datetime(2013, 5, 1),
.....: np.datetime64("2014-05-01"),
.....: ]
.....:
In [191]: bday_egypt = pd.offsets.CustomBusinessDay(
.....: holidays=holidays,
.....: weekmask=weekmask_egypt,
.....: )
.....:
In [192]: dt = datetime.datetime(2013, 4, 30)
In [193]: dt + 2 * bday_egypt
Out[193]: Timestamp('2013-05-05 00:00:00')
Let’s map to the weekday names:
In [194]: dts = pd.date_range(dt, periods=5, freq=bday_egypt)
In [195]: pd.Series(dts.weekday, dts).map(pd.Series("Mon Tue Wed Thu Fri Sat Sun".split()))
Out[195]:
2013-04-30 Tue
2013-05-02 Thu
2013-05-05 Sun
2013-05-06 Mon
2013-05-07 Tue
Freq: C, dtype: object
Holiday calendars can be used to provide the list of holidays. See the holiday calendar section for more information.
In [196]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [197]: bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [198]: dt = datetime.datetime(2014, 1, 17)
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [199]: dt + bday_us
Out[199]: Timestamp('2014-01-21 00:00:00')
Monthly offsets that respect a certain holiday calendar can be defined in the usual way.
In [200]: bmth_us = pd.offsets.CustomBusinessMonthBegin(calendar=USFederalHolidayCalendar())
# Skip new years
In [201]: dt = datetime.datetime(2013, 12, 17)
In [202]: dt + bmth_us
Out[202]: Timestamp('2014-01-02 00:00:00')
# Define date index with custom offset
In [203]: pd.date_range(start="20100101", end="20120101", freq=bmth_us)
Out[203]:
DatetimeIndex(['2010-01-04', '2010-02-01', '2010-03-01', '2010-04-01',
'2010-05-03', '2010-06-01', '2010-07-01', '2010-08-02',
'2010-09-01', '2010-10-01', '2010-11-01', '2010-12-01',
'2011-01-03', '2011-02-01', '2011-03-01', '2011-04-01',
'2011-05-02', '2011-06-01', '2011-07-01', '2011-08-01',
'2011-09-01', '2011-10-03', '2011-11-01', '2011-12-01'],
dtype='datetime64[ns]', freq='CBMS')
Note
The frequency string 'C' is used to indicate that a CustomBusinessDay DateOffset is used. It is important to note that since CustomBusinessDay is a parameterised type, instances of CustomBusinessDay may differ, and this is not detectable from the 'C' frequency string. The user therefore needs to ensure that the 'C' frequency string is used consistently within the user's application.
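A small sketch of the ambiguity: two differently parameterized CustomBusinessDay offsets both report the same 'C' alias.

# The weekmasks below are arbitrary examples; both ranges report freqstr 'C'
cbd_a = pd.offsets.CustomBusinessDay(weekmask="Mon Tue Wed Thu Fri")
cbd_b = pd.offsets.CustomBusinessDay(weekmask="Sun Mon Tue Wed Thu")
pd.date_range("2011-01-01", periods=3, freq=cbd_a).freqstr   # 'C'
pd.date_range("2011-01-01", periods=3, freq=cbd_b).freqstr   # 'C'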
Business hour#
The BusinessHour class provides a business hour representation on BusinessDay, allowing you to use specific start and end times.

By default, BusinessHour uses 9:00 - 17:00 as business hours. Adding BusinessHour will increment Timestamp by hourly frequency. If the target Timestamp is out of business hours, it is first moved to the next business hour and then incremented. If the result exceeds the business hours end, the remaining hours are added to the next business day.
In [204]: bh = pd.offsets.BusinessHour()
In [205]: bh
Out[205]: <BusinessHour: BH=09:00-17:00>
# 2014-08-01 is Friday
In [206]: pd.Timestamp("2014-08-01 10:00").weekday()
Out[206]: 4
In [207]: pd.Timestamp("2014-08-01 10:00") + bh
Out[207]: Timestamp('2014-08-01 11:00:00')
# Below example is the same as: pd.Timestamp('2014-08-01 09:00') + bh
In [208]: pd.Timestamp("2014-08-01 08:00") + bh
Out[208]: Timestamp('2014-08-01 10:00:00')
# If the result is on the end time, move to the next business day
In [209]: pd.Timestamp("2014-08-01 16:00") + bh
Out[209]: Timestamp('2014-08-04 09:00:00')
# Remaining hours are added to the next business day
In [210]: pd.Timestamp("2014-08-01 16:30") + bh
Out[210]: Timestamp('2014-08-04 09:30:00')
# Adding 2 business hours
In [211]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(2)
Out[211]: Timestamp('2014-08-01 12:00:00')
# Subtracting 3 business hours
In [212]: pd.Timestamp("2014-08-01 10:00") + pd.offsets.BusinessHour(-3)
Out[212]: Timestamp('2014-07-31 15:00:00')
You can also specify start and end times by keywords. The argument must be a str with an hour:minute representation or a datetime.time instance. Specifying seconds, microseconds, or nanoseconds as part of the business hours results in a ValueError.
In [213]: bh = pd.offsets.BusinessHour(start="11:00", end=datetime.time(20, 0))
In [214]: bh
Out[214]: <BusinessHour: BH=11:00-20:00>
In [215]: pd.Timestamp("2014-08-01 13:00") + bh
Out[215]: Timestamp('2014-08-01 14:00:00')
In [216]: pd.Timestamp("2014-08-01 09:00") + bh
Out[216]: Timestamp('2014-08-01 12:00:00')
In [217]: pd.Timestamp("2014-08-01 18:00") + bh
Out[217]: Timestamp('2014-08-01 19:00:00')
Passing a start time later than end represents a midnight business hour. In this case, the business hours exceed midnight and overlap to the next day. Valid business hours are distinguished by whether they started from a valid BusinessDay.
In [218]: bh = pd.offsets.BusinessHour(start="17:00", end="09:00")
In [219]: bh
Out[219]: <BusinessHour: BH=17:00-09:00>
In [220]: pd.Timestamp("2014-08-01 17:00") + bh
Out[220]: Timestamp('2014-08-01 18:00:00')
In [221]: pd.Timestamp("2014-08-01 23:00") + bh
Out[221]: Timestamp('2014-08-02 00:00:00')
# Although 2014-08-02 is Saturday,
# it is valid because it starts from 08-01 (Friday).
In [222]: pd.Timestamp("2014-08-02 04:00") + bh
Out[222]: Timestamp('2014-08-02 05:00:00')
# Although 2014-08-04 is Monday,
# it is out of business hours because it starts from 08-03 (Sunday).
In [223]: pd.Timestamp("2014-08-04 04:00") + bh
Out[223]: Timestamp('2014-08-04 18:00:00')
Applying BusinessHour.rollforward and rollback to a timestamp out of business hours results in the next business hour start or the previous day's end, respectively. Unlike other offsets, BusinessHour.rollforward may, by definition, output a different result from apply. This is because one day's business hour end is equal to the next day's business hour start. For example, under the default business hours (9:00 - 17:00), there is no gap (0 minutes) between 2014-08-01 17:00 and 2014-08-04 09:00.
# This adjusts a Timestamp to business hour edge
In [224]: pd.offsets.BusinessHour().rollback(pd.Timestamp("2014-08-02 15:00"))
Out[224]: Timestamp('2014-08-01 17:00:00')
In [225]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02 15:00"))
Out[225]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessHour() + pd.Timestamp('2014-08-01 17:00').
# And it is the same as BusinessHour() + pd.Timestamp('2014-08-04 09:00')
In [226]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02 15:00")
Out[226]: Timestamp('2014-08-04 10:00:00')
# BusinessDay results (for reference)
In [227]: pd.offsets.BusinessHour().rollforward(pd.Timestamp("2014-08-02"))
Out[227]: Timestamp('2014-08-04 09:00:00')
# It is the same as BusinessDay() + pd.Timestamp('2014-08-01')
# The result is the same as rollforward because BusinessDay never overlaps.
In [228]: pd.offsets.BusinessHour() + pd.Timestamp("2014-08-02")
Out[228]: Timestamp('2014-08-04 10:00:00')
BusinessHour regards Saturday and Sunday as holidays. To use arbitrary holidays, you can use the CustomBusinessHour offset, as explained in the following subsection.
Custom business hour#
The CustomBusinessHour is a mixture of BusinessHour and CustomBusinessDay which allows you to specify arbitrary holidays. CustomBusinessHour works the same as BusinessHour except that it skips specified custom holidays.
In [229]: from pandas.tseries.holiday import USFederalHolidayCalendar
In [230]: bhour_us = pd.offsets.CustomBusinessHour(calendar=USFederalHolidayCalendar())
# Friday before MLK Day
In [231]: dt = datetime.datetime(2014, 1, 17, 15)
In [232]: dt + bhour_us
Out[232]: Timestamp('2014-01-17 16:00:00')
# Tuesday after MLK Day (Monday is skipped because it's a holiday)
In [233]: dt + bhour_us * 2
Out[233]: Timestamp('2014-01-21 09:00:00')
You can use keyword arguments supported by both BusinessHour and CustomBusinessDay.
In [234]: bhour_mon = pd.offsets.CustomBusinessHour(start="10:00", weekmask="Tue Wed Thu Fri")
# Monday is skipped because it's a holiday, business hour starts from 10:00
In [235]: dt + bhour_mon * 2
Out[235]: Timestamp('2014-01-21 10:00:00')
Offset aliases#
A number of string aliases are given to useful common time series frequencies. We will refer to these aliases as offset aliases.
| Alias | Description |
|---|---|
| B | business day frequency |
| C | custom business day frequency |
| D | calendar day frequency |
| W | weekly frequency |
| M | month end frequency |
| SM | semi-month end frequency (15th and end of month) |
| BM | business month end frequency |
| CBM | custom business month end frequency |
| MS | month start frequency |
| SMS | semi-month start frequency (1st and 15th) |
| BMS | business month start frequency |
| CBMS | custom business month start frequency |
| Q | quarter end frequency |
| BQ | business quarter end frequency |
| QS | quarter start frequency |
| BQS | business quarter start frequency |
| A, Y | year end frequency |
| BA, BY | business year end frequency |
| AS, YS | year start frequency |
| BAS, BYS | business year start frequency |
| BH | business hour frequency |
| H | hourly frequency |
| T, min | minutely frequency |
| S | secondly frequency |
| L, ms | milliseconds |
| U, us | microseconds |
| N | nanoseconds |
Note
When using the offset aliases above, it should be noted that functions such as date_range() and bdate_range() will only return timestamps that are in the interval defined by start_date and end_date. If the start_date does not correspond to the frequency, the returned timestamps will start at the next valid timestamp; the same applies for end_date, where the returned timestamps will stop at the previous valid timestamp.
For example, for the offset MS
, if the start_date
is not the first
of the month, the returned timestamps will start with the first day of the
next month. If end_date
is not the first day of a month, the last
returned timestamp will be the first day of the corresponding month.
In [236]: dates_lst_1 = pd.date_range("2020-01-06", "2020-04-03", freq="MS")
In [237]: dates_lst_1
Out[237]: DatetimeIndex(['2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
In [238]: dates_lst_2 = pd.date_range("2020-01-01", "2020-04-01", freq="MS")
In [239]: dates_lst_2
Out[239]: DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', freq='MS')
We can see in the above example that date_range() and bdate_range() will only return the valid timestamps between start_date and end_date. If these are not valid timestamps for the given frequency, the range rolls forward to the next valid timestamp for start_date (and, respectively, back to the previous valid timestamp for end_date).
Period aliases#
A number of string aliases are given to useful common time series frequencies. We will refer to these aliases as period aliases.
Alias | Description
---|---
B | business day frequency
D | calendar day frequency
W | weekly frequency
M | monthly frequency
Q | quarterly frequency
A, Y | yearly frequency
H | hourly frequency
T, min | minutely frequency
S | secondly frequency
L, ms | milliseconds
U, us | microseconds
N | nanoseconds
Combining aliases#
As we have seen previously, the alias and the offset instance are fungible in most functions:
In [240]: pd.date_range(start, periods=5, freq="B")
Out[240]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
In [241]: pd.date_range(start, periods=5, freq=pd.offsets.BDay())
Out[241]:
DatetimeIndex(['2011-01-03', '2011-01-04', '2011-01-05', '2011-01-06',
'2011-01-07'],
dtype='datetime64[ns]', freq='B')
You can combine day and intraday offsets:
In [242]: pd.date_range(start, periods=10, freq="2h20min")
Out[242]:
DatetimeIndex(['2011-01-01 00:00:00', '2011-01-01 02:20:00',
'2011-01-01 04:40:00', '2011-01-01 07:00:00',
'2011-01-01 09:20:00', '2011-01-01 11:40:00',
'2011-01-01 14:00:00', '2011-01-01 16:20:00',
'2011-01-01 18:40:00', '2011-01-01 21:00:00'],
dtype='datetime64[ns]', freq='140T')
In [243]: pd.date_range(start, periods=10, freq="1D10U")
Out[243]:
DatetimeIndex([ '2011-01-01 00:00:00', '2011-01-02 00:00:00.000010',
'2011-01-03 00:00:00.000020', '2011-01-04 00:00:00.000030',
'2011-01-05 00:00:00.000040', '2011-01-06 00:00:00.000050',
'2011-01-07 00:00:00.000060', '2011-01-08 00:00:00.000070',
'2011-01-09 00:00:00.000080', '2011-01-10 00:00:00.000090'],
dtype='datetime64[ns]', freq='86400000010U')
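Under the hood, a combined alias is parsed into a single offset. A quick way to inspect this is to_offset from pandas.tseries.frequencies (also used later in this document); a small sketch:

from pandas.tseries.frequencies import to_offset

to_offset("2h20min")  # <140 * Minutes>
to_offset("1D10U")    # <86400000010 * Microseconds>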
Anchored offsets#
For some frequencies you can specify an anchoring suffix:
Alias | Description
---|---
W-SUN | weekly frequency (Sundays). Same as ‘W’
W-MON | weekly frequency (Mondays)
W-TUE | weekly frequency (Tuesdays)
W-WED | weekly frequency (Wednesdays)
W-THU | weekly frequency (Thursdays)
W-FRI | weekly frequency (Fridays)
W-SAT | weekly frequency (Saturdays)
(B)Q(S)-DEC | quarterly frequency, year ends in December. Same as ‘Q’
(B)Q(S)-JAN | quarterly frequency, year ends in January
(B)Q(S)-FEB | quarterly frequency, year ends in February
(B)Q(S)-MAR | quarterly frequency, year ends in March
(B)Q(S)-APR | quarterly frequency, year ends in April
(B)Q(S)-MAY | quarterly frequency, year ends in May
(B)Q(S)-JUN | quarterly frequency, year ends in June
(B)Q(S)-JUL | quarterly frequency, year ends in July
(B)Q(S)-AUG | quarterly frequency, year ends in August
(B)Q(S)-SEP | quarterly frequency, year ends in September
(B)Q(S)-OCT | quarterly frequency, year ends in October
(B)Q(S)-NOV | quarterly frequency, year ends in November
(B)A(S)-DEC | annual frequency, anchored end of December. Same as ‘A’
(B)A(S)-JAN | annual frequency, anchored end of January
(B)A(S)-FEB | annual frequency, anchored end of February
(B)A(S)-MAR | annual frequency, anchored end of March
(B)A(S)-APR | annual frequency, anchored end of April
(B)A(S)-MAY | annual frequency, anchored end of May
(B)A(S)-JUN | annual frequency, anchored end of June
(B)A(S)-JUL | annual frequency, anchored end of July
(B)A(S)-AUG | annual frequency, anchored end of August
(B)A(S)-SEP | annual frequency, anchored end of September
(B)A(S)-OCT | annual frequency, anchored end of October
(B)A(S)-NOV | annual frequency, anchored end of November
These can be used as arguments to date_range
, bdate_range
, constructors
for DatetimeIndex
, as well as various other timeseries-related functions
in pandas.
Anchored offset semantics#
For those offsets that are anchored to the start or end of a specific frequency (MonthEnd, MonthBegin, WeekEnd, etc), the following rules apply to rolling forward and backwards.
When n is not 0, if the given date is not on an anchor point, it is snapped to the next (previous) anchor point, and then moved |n|-1 additional steps forwards or backwards.
In [244]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=1)
Out[244]: Timestamp('2014-02-01 00:00:00')
In [245]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=1)
Out[245]: Timestamp('2014-01-31 00:00:00')
In [246]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=1)
Out[246]: Timestamp('2014-01-01 00:00:00')
In [247]: pd.Timestamp("2014-01-02") - pd.offsets.MonthEnd(n=1)
Out[247]: Timestamp('2013-12-31 00:00:00')
In [248]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=4)
Out[248]: Timestamp('2014-05-01 00:00:00')
In [249]: pd.Timestamp("2014-01-02") - pd.offsets.MonthBegin(n=4)
Out[249]: Timestamp('2013-10-01 00:00:00')
If the given date is on an anchor point, it is moved |n|
points forwards
or backwards.
In [250]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=1)
Out[250]: Timestamp('2014-02-01 00:00:00')
In [251]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=1)
Out[251]: Timestamp('2014-02-28 00:00:00')
In [252]: pd.Timestamp("2014-01-01") - pd.offsets.MonthBegin(n=1)
Out[252]: Timestamp('2013-12-01 00:00:00')
In [253]: pd.Timestamp("2014-01-31") - pd.offsets.MonthEnd(n=1)
Out[253]: Timestamp('2013-12-31 00:00:00')
In [254]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=4)
Out[254]: Timestamp('2014-05-01 00:00:00')
In [255]: pd.Timestamp("2014-01-31") - pd.offsets.MonthBegin(n=4)
Out[255]: Timestamp('2013-10-01 00:00:00')
For the case when n=0
, the date is not moved if on an anchor point, otherwise
it is rolled forward to the next anchor point.
In [256]: pd.Timestamp("2014-01-02") + pd.offsets.MonthBegin(n=0)
Out[256]: Timestamp('2014-02-01 00:00:00')
In [257]: pd.Timestamp("2014-01-02") + pd.offsets.MonthEnd(n=0)
Out[257]: Timestamp('2014-01-31 00:00:00')
In [258]: pd.Timestamp("2014-01-01") + pd.offsets.MonthBegin(n=0)
Out[258]: Timestamp('2014-01-01 00:00:00')
In [259]: pd.Timestamp("2014-01-31") + pd.offsets.MonthEnd(n=0)
Out[259]: Timestamp('2014-01-31 00:00:00')
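The n=0 addition is effectively what rollforward() implements, while rollback() snaps to the previous anchor point; a quick sketch:

import pandas as pd

# n=0 addition rolls forward to the next anchor; rollback snaps backwards
pd.offsets.MonthEnd().rollforward(pd.Timestamp("2014-01-02"))  # Timestamp('2014-01-31 00:00:00')
pd.offsets.MonthEnd().rollback(pd.Timestamp("2014-01-02"))     # Timestamp('2013-12-31 00:00:00')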
Holidays / holiday calendars#
Holidays and calendars provide a simple way to define holiday rules to be used with CustomBusinessDay or in other analyses that require a predefined set of holidays. The AbstractHolidayCalendar
class provides all the necessary
methods to return a list of holidays and only rules
need to be defined
in a specific holiday calendar class. Furthermore, the start_date
and end_date
class attributes determine over what date range holidays are generated. These
should be overwritten on the AbstractHolidayCalendar
class to have the range
apply to all calendar subclasses. USFederalHolidayCalendar
is the
only calendar that exists and primarily serves as an example for developing
other calendars.
For holidays that occur on fixed dates (e.g., July 4th), an observance rule determines when that holiday is observed if it falls on a weekend or some other non-observed day. The defined observance rules are:
Rule | Description
---|---
nearest_workday | move Saturday to Friday and Sunday to Monday
sunday_to_monday | move Sunday to following Monday
next_monday_or_tuesday | move Saturday to Monday and Sunday/Monday to Tuesday
previous_friday | move Saturday and Sunday to previous Friday
next_monday | move Saturday and Sunday to following Monday
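A single Holiday rule with an observance can also be defined and queried on its own before being added to a calendar. A small sketch (the holiday chosen here is for illustration):

from pandas.tseries.holiday import Holiday, nearest_workday

# New Year's Day, observed on the nearest weekday when it falls on a weekend
new_years = Holiday("New Year's Day", month=1, day=1, observance=nearest_workday)

new_years.dates("2012-01-01", "2012-12-31")
# expected: DatetimeIndex(['2012-01-02'], ...) since 2012-01-01 fell on a Sunday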
An example of how holidays and holiday calendars are defined:
In [260]: from pandas.tseries.holiday import (
.....: Holiday,
.....: USMemorialDay,
.....: AbstractHolidayCalendar,
.....: nearest_workday,
.....: MO,
.....: )
.....:
In [261]: class ExampleCalendar(AbstractHolidayCalendar):
.....: rules = [
.....: USMemorialDay,
.....: Holiday("July 4th", month=7, day=4, observance=nearest_workday),
.....: Holiday(
.....: "Columbus Day",
.....: month=10,
.....: day=1,
.....: offset=pd.DateOffset(weekday=MO(2)),
.....: ),
.....: ]
.....:
In [262]: cal = ExampleCalendar()
In [263]: cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31))
Out[263]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Hint
weekday=MO(2) is the same as 2 * Week(weekday=2)
Using this calendar, creating an index or doing offset arithmetic skips weekends
and holidays (i.e., Memorial Day/July 4th). For example, the below defines
a custom business day offset using the ExampleCalendar
. Like any other offset,
it can be used to create a DatetimeIndex
or added to datetime
or Timestamp
objects.
In [264]: pd.date_range(
.....: start="7/1/2012", end="7/10/2012", freq=pd.offsets.CDay(calendar=cal)
.....: ).to_pydatetime()
.....:
Out[264]:
array([datetime.datetime(2012, 7, 2, 0, 0),
datetime.datetime(2012, 7, 3, 0, 0),
datetime.datetime(2012, 7, 5, 0, 0),
datetime.datetime(2012, 7, 6, 0, 0),
datetime.datetime(2012, 7, 9, 0, 0),
datetime.datetime(2012, 7, 10, 0, 0)], dtype=object)
In [265]: offset = pd.offsets.CustomBusinessDay(calendar=cal)
In [266]: datetime.datetime(2012, 5, 25) + offset
Out[266]: Timestamp('2012-05-29 00:00:00')
In [267]: datetime.datetime(2012, 7, 3) + offset
Out[267]: Timestamp('2012-07-05 00:00:00')
In [268]: datetime.datetime(2012, 7, 3) + 2 * offset
Out[268]: Timestamp('2012-07-06 00:00:00')
In [269]: datetime.datetime(2012, 7, 6) + offset
Out[269]: Timestamp('2012-07-09 00:00:00')
Ranges are defined by the start_date
and end_date
class attributes
of AbstractHolidayCalendar
. The defaults are shown below.
In [270]: AbstractHolidayCalendar.start_date
Out[270]: Timestamp('1970-01-01 00:00:00')
In [271]: AbstractHolidayCalendar.end_date
Out[271]: Timestamp('2200-12-31 00:00:00')
These dates can be overwritten by setting the attributes as datetime/Timestamp/string.
In [272]: AbstractHolidayCalendar.start_date = datetime.datetime(2012, 1, 1)
In [273]: AbstractHolidayCalendar.end_date = datetime.datetime(2012, 12, 31)
In [274]: cal.holidays()
Out[274]: DatetimeIndex(['2012-05-28', '2012-07-04', '2012-10-08'], dtype='datetime64[ns]', freq=None)
Every calendar class is accessible by name using the get_calendar
function
which returns a holiday class instance. Any imported calendar class will
automatically be available by this function. Also, HolidayCalendarFactory
provides an easy interface to create calendars that are combinations of calendars
or calendars with additional rules.
In [275]: from pandas.tseries.holiday import get_calendar, HolidayCalendarFactory, USLaborDay
In [276]: cal = get_calendar("ExampleCalendar")
In [277]: cal.rules
Out[277]:
[Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f68a4553ac0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
In [278]: new_cal = HolidayCalendarFactory("NewExampleCalendar", cal, USLaborDay)
In [279]: new_cal.rules
Out[279]:
[Holiday: Labor Day (month=9, day=1, offset=<DateOffset: weekday=MO(+1)>),
Holiday: Memorial Day (month=5, day=31, offset=<DateOffset: weekday=MO(-1)>),
Holiday: July 4th (month=7, day=4, observance=<function nearest_workday at 0x7f68a4553ac0>),
Holiday: Columbus Day (month=10, day=1, offset=<DateOffset: weekday=MO(+2)>)]
Resampling#
pandas provides simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications.
resample()
is a time-based groupby, followed by a reduction method
on each of its groups. See some cookbook examples for
some advanced strategies.
The resample()
method can be used directly from DataFrameGroupBy
objects,
see the groupby docs.
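For instance, a per-group resample might look like the following sketch (the frame and its "store" column are hypothetical):

import numpy as np
import pandas as pd

# hypothetical data: two stores observed every 12 hours
idx = pd.date_range("2013-01-01", periods=6, freq="12H")
df = pd.DataFrame({"store": ["A", "B"] * 3, "sales": np.arange(6)}, index=idx)

# daily totals per store; "store" becomes a level of the resulting MultiIndex
df.groupby("store").resample("D").sum()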
Basics#
In [291]: rng = pd.date_range("1/1/2012", periods=100, freq="S")
In [292]: ts = pd.Series(np.random.randint(0, 500, len(rng)), index=rng)
In [293]: ts.resample("5Min").sum()
Out[293]:
2012-01-01 25103
Freq: 5T, dtype: int64
The resample
function is very flexible and allows you to specify many
different parameters to control the frequency conversion and resampling
operation.
Any built-in method available via GroupBy is available as
a method of the returned object, including sum
, mean
, std
, sem
,
max
, min
, median
, first
, last
, ohlc
:
In [294]: ts.resample("5Min").mean()
Out[294]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [295]: ts.resample("5Min").ohlc()
Out[295]:
open high low close
2012-01-01 308 460 9 205
In [296]: ts.resample("5Min").max()
Out[296]:
2012-01-01 460
Freq: 5T, dtype: int64
For downsampling, closed
can be set to ‘left’ or ‘right’ to specify which
end of the interval is closed:
In [297]: ts.resample("5Min", closed="right").mean()
Out[297]:
2011-12-31 23:55:00 308.000000
2012-01-01 00:00:00 250.454545
Freq: 5T, dtype: float64
In [298]: ts.resample("5Min", closed="left").mean()
Out[298]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Parameters like label
are used to manipulate the resulting labels.
label
specifies whether the result is labeled with the beginning or
the end of the interval.
In [299]: ts.resample("5Min").mean() # by default label='left'
Out[299]:
2012-01-01 251.03
Freq: 5T, dtype: float64
In [300]: ts.resample("5Min", label="left").mean()
Out[300]:
2012-01-01 251.03
Freq: 5T, dtype: float64
Warning
The default values for label and closed are ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’, which all have a default of ‘right’.
This might unintentionally lead to looking ahead, where the value for a later
time is pulled back to a previous time as in the following example with
the BusinessDay
frequency:
In [301]: s = pd.date_range("2000-01-01", "2000-01-05").to_series()
In [302]: s.iloc[2] = pd.NaT
In [303]: s.dt.day_name()
Out[303]:
2000-01-01 Saturday
2000-01-02 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: D, dtype: object
# default: label='left', closed='left'
In [304]: s.resample("B").last().dt.day_name()
Out[304]:
1999-12-31 Sunday
2000-01-03 NaN
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
Notice how the value for Sunday got pulled back to the previous Friday. To get the behavior where the value for Sunday is pushed to Monday, use instead
In [305]: s.resample("B", label="right", closed="right").last().dt.day_name()
Out[305]:
2000-01-03 Sunday
2000-01-04 Tuesday
2000-01-05 Wednesday
Freq: B, dtype: object
The axis
parameter can be set to 0 or 1 and allows you to resample the
specified axis for a DataFrame
.
kind
can be set to ‘timestamp’ or ‘period’ to convert the resulting index
to/from timestamp and time span representations. By default resample
retains the input representation.
convention
can be set to ‘start’ or ‘end’ when resampling period data
(detail below). It specifies how low frequency periods are converted to higher
frequency periods.
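For example, labeling monthly bins with Periods rather than Timestamps might look like this sketch (reusing ts from above):

# kind="period" labels the result with a PeriodIndex instead of a DatetimeIndex
ts.resample("M", kind="period").mean()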
Upsampling#
For upsampling, you can specify a way to upsample and the limit
parameter to interpolate over the gaps that are created:
# from secondly to every 250 milliseconds
In [306]: ts[:2].resample("250L").asfreq()
Out[306]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 NaN
2012-01-01 00:00:00.500 NaN
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
In [307]: ts[:2].resample("250L").ffill()
Out[307]:
2012-01-01 00:00:00.000 308
2012-01-01 00:00:00.250 308
2012-01-01 00:00:00.500 308
2012-01-01 00:00:00.750 308
2012-01-01 00:00:01.000 204
Freq: 250L, dtype: int64
In [308]: ts[:2].resample("250L").ffill(limit=2)
Out[308]:
2012-01-01 00:00:00.000 308.0
2012-01-01 00:00:00.250 308.0
2012-01-01 00:00:00.500 308.0
2012-01-01 00:00:00.750 NaN
2012-01-01 00:00:01.000 204.0
Freq: 250L, dtype: float64
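Other filling methods on the Resampler can be used the same way; a short sketch (again reusing ts from above):

ts[:2].resample("250L").bfill()        # fill gaps backward from the next valid point
ts[:2].resample("250L").interpolate()  # linear interpolation across the gaps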
Sparse resampling#
Sparse time series are those where you have far fewer points relative to the amount of time you are looking to resample. Naively upsampling a sparse series can potentially generate lots of intermediate values. When you don’t want to use a method to fill these values (i.e., fill_method is None), the intermediate values will be filled with NaN.
Since resample
is a time-based groupby, the following is a method to efficiently
resample only the groups that are not all NaN
.
In [309]: rng = pd.date_range("2014-1-1", periods=100, freq="D") + pd.Timedelta("1s")
In [310]: ts = pd.Series(range(100), index=rng)
If we want to resample to the full range of the series:
In [311]: ts.resample("3T").sum()
Out[311]:
2014-01-01 00:00:00 0
2014-01-01 00:03:00 0
2014-01-01 00:06:00 0
2014-01-01 00:09:00 0
2014-01-01 00:12:00 0
..
2014-04-09 23:48:00 0
2014-04-09 23:51:00 0
2014-04-09 23:54:00 0
2014-04-09 23:57:00 0
2014-04-10 00:00:00 99
Freq: 3T, Length: 47521, dtype: int64
We can instead only resample those groups where we have points as follows:
In [312]: from functools import partial
In [313]: from pandas.tseries.frequencies import to_offset
In [314]: def round(t, freq):
.....: # floor each timestamp to the nearest earlier multiple of freq
.....: freq = to_offset(freq)
.....: return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value)
.....:
In [315]: ts.groupby(partial(round, freq="3T")).sum()
Out[315]:
2014-01-01 0
2014-01-02 1
2014-01-03 2
2014-01-04 3
2014-01-05 4
..
2014-04-06 95
2014-04-07 96
2014-04-08 97
2014-04-09 98
2014-04-10 99
Length: 100, dtype: int64
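An equivalent grouping key can be built with Timestamp.floor, which avoids the hand-rolled round helper for fixed frequencies; a sketch:

# floor each timestamp to its 3-minute bin and group on that
ts.groupby(lambda t: t.floor("3T")).sum()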
Aggregation#
The resample()
method returns a pandas.api.typing.Resampler
instance. Similar to
the aggregating API, groupby API,
and the window API, a Resampler
can be selectively resampled.
When resampling a DataFrame, the default is to act on all columns with the same function.
In [316]: df = pd.DataFrame(
.....: np.random.randn(1000, 3),
.....: index=pd.date_range("1/1/2012", freq="S", periods=1000),
.....: columns=["A", "B", "C"],
.....: )
.....:
In [317]: r = df.resample("3T")
In [318]: r.mean()
Out[318]:
A B C
2012-01-01 00:00:00 -0.033823 -0.121514 -0.081447
2012-01-01 00:03:00 0.056909 0.146731 -0.024320
2012-01-01 00:06:00 -0.058837 0.047046 -0.052021
2012-01-01 00:09:00 0.063123 -0.026158 -0.066533
2012-01-01 00:12:00 0.186340 -0.003144 0.074752
2012-01-01 00:15:00 -0.085954 -0.016287 -0.050046
We can select a specific column or columns using standard getitem.
In [319]: r["A"].mean()
Out[319]:
2012-01-01 00:00:00 -0.033823
2012-01-01 00:03:00 0.056909
2012-01-01 00:06:00 -0.058837
2012-01-01 00:09:00 0.063123
2012-01-01 00:12:00 0.186340
2012-01-01 00:15:00 -0.085954
Freq: 3T, Name: A, dtype: float64
In [320]: r[["A", "B"]].mean()
Out[320]:
A B
2012-01-01 00:00:00 -0.033823 -0.121514
2012-01-01 00:03:00 0.056909 0.146731
2012-01-01 00:06:00 -0.058837 0.047046
2012-01-01 00:09:00 0.063123 -0.026158
2012-01-01 00:12:00 0.186340 -0.003144
2012-01-01 00:15:00 -0.085954 -0.016287
You can pass a list or dict of functions to do aggregation with, outputting a DataFrame
:
In [321]: r["A"].agg(["sum", "mean", "std"])
Out[321]:
sum mean std
2012-01-01 00:00:00 -6.088060 -0.033823 1.043263
2012-01-01 00:03:00 10.243678 0.056909 1.058534
2012-01-01 00:06:00 -10.590584 -0.058837 0.949264
2012-01-01 00:09:00 11.362228 0.063123 1.028096
2012-01-01 00:12:00 33.541257 0.186340 0.884586
2012-01-01 00:15:00 -8.595393 -0.085954 1.035476
On a resampled DataFrame
, you can pass a list of functions to apply to each
column, which produces an aggregated result with a hierarchical index:
In [322]: r.agg(["sum", "mean"])
Out[322]:
A ... C
sum mean ... sum mean
2012-01-01 00:00:00 -6.088060 -0.033823 ... -14.660515 -0.081447
2012-01-01 00:03:00 10.243678 0.056909 ... -4.377642 -0.024320
2012-01-01 00:06:00 -10.590584 -0.058837 ... -9.363825 -0.052021
2012-01-01 00:09:00 11.362228 0.063123 ... -11.975895 -0.066533
2012-01-01 00:12:00 33.541257 0.186340 ... 13.455299 0.074752
2012-01-01 00:15:00 -8.595393 -0.085954 ... -5.004580 -0.050046
[6 rows x 6 columns]
By passing a dict to aggregate
you can apply a different aggregation to the
columns of a DataFrame
:
In [323]: r.agg({"A": "sum", "B": lambda x: np.std(x, ddof=1)})
Out[323]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
The function names can also be strings. In order for a string to be valid it must be implemented on the resampled object:
In [324]: r.agg({"A": "sum", "B": "std"})
Out[324]:
A B
2012-01-01 00:00:00 -6.088060 1.001294
2012-01-01 00:03:00 10.243678 1.074597
2012-01-01 00:06:00 -10.590584 0.987309
2012-01-01 00:09:00 11.362228 0.944953
2012-01-01 00:12:00 33.541257 1.095025
2012-01-01 00:15:00 -8.595393 1.035312
Furthermore, you can also specify multiple aggregation functions for each column separately.
In [325]: r.agg({"A": ["sum", "std"], "B": ["mean", "std"]})
Out[325]:
A B
sum std mean std
2012-01-01 00:00:00 -6.088060 1.043263 -0.121514 1.001294
2012-01-01 00:03:00 10.243678 1.058534 0.146731 1.074597
2012-01-01 00:06:00 -10.590584 0.949264 0.047046 0.987309
2012-01-01 00:09:00 11.362228 1.028096 -0.026158 0.944953
2012-01-01 00:12:00 33.541257 0.884586 -0.003144 1.095025
2012-01-01 00:15:00 -8.595393 1.035476 -0.016287 1.035312
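Custom callables can be mixed with the string names in the same list; a small sketch:

# a built-in statistic alongside a custom peak-to-peak function
r["A"].agg(["mean", lambda x: x.max() - x.min()])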
If a DataFrame does not have a datetimelike index, but instead you want to resample based on a datetimelike column in the frame, it can be passed to the on keyword.
In [326]: df = pd.DataFrame(
.....: {"date": pd.date_range("2015-01-01", freq="W", periods=5), "a": np.arange(5)},
.....: index=pd.MultiIndex.from_arrays(
.....: [[1, 2, 3, 4, 5], pd.date_range("2015-01-01", freq="W", periods=5)],
.....: names=["v", "d"],
.....: ),
.....: )
.....:
In [327]: df
Out[327]:
date a
v d
1 2015-01-04 2015-01-04 0
2 2015-01-11 2015-01-11 1
3 2015-01-18 2015-01-18 2
4 2015-01-25 2015-01-25 3
5 2015-02-01 2015-02-01 4
In [328]: df.resample("M", on="date")[["a"]].sum()
Out[328]:
a
date
2015-01-31 6
2015-02-28 4
Similarly, if you instead want to resample by a datetimelike
level of MultiIndex
, its name or location can be passed to the
level
keyword.
In [329]: df.resample("M", level="d")[["a"]].sum()
Out[329]:
a
d
2015-01-31 6
2015-02-28 4
Iterating through groups#
With the Resampler
object in hand, iterating through the grouped data is very
natural and functions similarly to itertools.groupby()
:
In [330]: small = pd.Series(
.....: range(6),
.....: index=pd.to_datetime(
.....: [
.....: "2017-01-01T00:00:00",
.....: "2017-01-01T00:30:00",
.....: "2017-01-01T00:31:00",
.....: "2017-01-01T01:00:00",
.....: "2017-01-01T03:00:00",
.....: "2017-01-01T03:05:00",
.....: ]
.....: ),
.....: )
.....:
In [331]: resampled = small.resample("H")
In [332]: for name, group in resampled:
.....: print("Group: ", name)
.....: print("-" * 27)
.....: print(group, end="\n\n")
.....:
Group: 2017-01-01 00:00:00
---------------------------
2017-01-01 00:00:00 0
2017-01-01 00:30:00 1
2017-01-01 00:31:00 2
dtype: int64
Group: 2017-01-01 01:00:00
---------------------------
2017-01-01 01:00:00 3
dtype: int64
Group: 2017-01-01 02:00:00
---------------------------
Series([], dtype: int64)
Group: 2017-01-01 03:00:00
---------------------------
2017-01-01 03:00:00 4
2017-01-01 03:05:00 5
dtype: int64
See Iterating through groups or Resampler.__iter__
for more.
Use origin
or offset
to adjust the start of the bins#
The bins of the grouping are adjusted based on the beginning of the day of the time series starting point. This works well with frequencies that are multiples of a day (like 30D) or that divide a day evenly (like 90s or 1min). This can create inconsistencies with some frequencies that do not meet this criterion. To change this behavior you can specify a fixed Timestamp with the argument origin.
For example:
In [333]: start, end = "2000-10-01 23:30:00", "2000-10-02 00:30:00"
In [334]: middle = "2000-10-02 00:00:00"
In [335]: rng = pd.date_range(start, end, freq="7min")
In [336]: ts = pd.Series(np.arange(len(rng)) * 3, index=rng)
In [337]: ts
Out[337]:
2000-10-01 23:30:00 0
2000-10-01 23:37:00 3
2000-10-01 23:44:00 6
2000-10-01 23:51:00 9
2000-10-01 23:58:00 12
2000-10-02 00:05:00 15
2000-10-02 00:12:00 18
2000-10-02 00:19:00 21
2000-10-02 00:26:00 24
Freq: 7T, dtype: int64
Here we can see that, when using origin with its default value ('start_day'), the results after '2000-10-02 00:00:00' differ depending on the start of the time series:
In [338]: ts.resample("17min", origin="start_day").sum()
Out[338]:
2000-10-01 23:14:00 0
2000-10-01 23:31:00 9
2000-10-01 23:48:00 21
2000-10-02 00:05:00 54
2000-10-02 00:22:00 24
Freq: 17T, dtype: int64
In [339]: ts[middle:end].resample("17min", origin="start_day").sum()
Out[339]:
2000-10-02 00:00:00 33
2000-10-02 00:17:00 45
Freq: 17T, dtype: int64
Here we can see that, when setting origin to 'epoch', the results after '2000-10-02 00:00:00' are identical regardless of the start of the time series:
In [340]: ts.resample("17min", origin="epoch").sum()
Out[340]:
2000-10-01 23:18:00 0
2000-10-01 23:35:00 18
2000-10-01 23:52:00 27
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
In [341]: ts[middle:end].resample("17min", origin="epoch").sum()
Out[341]:
2000-10-01 23:52:00 15
2000-10-02 00:09:00 39
2000-10-02 00:26:00 24
Freq: 17T, dtype: int64
If needed you can use a custom timestamp for origin
:
In [342]: ts.resample("17min", origin="2001-01-01").sum()
Out[342]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
2000-10-02 00:38:00 0
..
2000-12-31 22:52:00 0
2000-12-31 23:09:00 0
2000-12-31 23:26:00 0
2000-12-31 23:43:00 0
2001-01-01 00:00:00 0
Freq: 17T, Length: 7711, dtype: int64
In [343]: ts[middle:end].resample("17min", origin=pd.Timestamp("2001-01-01")).sum()
Out[343]:
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
2000-10-02 00:38:00 0
2000-10-02 00:55:00 0
2000-10-02 01:12:00 0
..
2000-12-31 22:52:00 0
2000-12-31 23:09:00 0
2000-12-31 23:26:00 0
2000-12-31 23:43:00 0
2001-01-01 00:00:00 0
Freq: 17T, Length: 7709, dtype: int64
If needed you can just adjust the bins with an offset Timedelta that is added to the default origin. These two examples are equivalent for this time series:
In [344]: ts.resample("17min", origin="start").sum()
Out[344]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
In [345]: ts.resample("17min", offset="23h30min").sum()
Out[345]:
2000-10-01 23:30:00 9
2000-10-01 23:47:00 21
2000-10-02 00:04:00 54
2000-10-02 00:21:00 24
Freq: 17T, dtype: int64
Note the use of 'start' for origin in the last example. In that case, origin will be set to the first value of the time series.
Backward resample#
New in version 1.3.0.
Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given freq
. The backward resample sets closed
to 'right'
by default since the last value should be considered as the edge point for the last bin.
We can set origin
to 'end'
. The value for a specific Timestamp
index stands for the resample result from the current Timestamp
minus freq
to the current Timestamp
with a right close.
In [346]: ts.resample('17min', origin='end').sum()
Out[346]:
2000-10-01 23:35:00 0
2000-10-01 23:52:00 18
2000-10-02 00:09:00 27
2000-10-02 00:26:00 63
Freq: 17T, dtype: int64
In addition, mirroring the 'start_day' option, 'end_day' is supported. This will set the origin to the ceiling midnight of the largest Timestamp.
In [347]: ts.resample('17min', origin='end_day').sum()
Out[347]:
2000-10-01 23:38:00 3
2000-10-01 23:55:00 15
2000-10-02 00:12:00 45
2000-10-02 00:29:00 45
Freq: 17T, dtype: int64
The above result uses 2000-10-02 00:29:00 as the last bin’s right edge, as the following computation shows.
In [348]: ceil_mid = rng.max().ceil('D')
In [349]: freq = pd.offsets.Minute(17)
In [350]: bin_res = ceil_mid - freq * ((ceil_mid - rng.max()) // freq)
In [351]: bin_res
Out[351]: Timestamp('2000-10-02 00:29:00')
Time span representation#
Regular intervals of time are represented by Period
objects in pandas while
sequences of Period
objects are collected in a PeriodIndex
, which can
be created with the convenience function period_range
.
Period#
A Period represents a span of time (e.g., a day, a month, a quarter, etc). You can specify the span via the freq keyword using a frequency alias, as below. Because freq represents a span of the Period, it cannot be negative, like “-3D”.
In [352]: pd.Period("2012", freq="A-DEC")
Out[352]: Period('2012', 'A-DEC')
In [353]: pd.Period("2012-1-1", freq="D")
Out[353]: Period('2012-01-01', 'D')
In [354]: pd.Period("2012-1-1 19:00", freq="H")
Out[354]: Period('2012-01-01 19:00', 'H')
In [355]: pd.Period("2012-1-1 19:00", freq="5H")
Out[355]: Period('2012-01-01 19:00', '5H')
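A Period also exposes its boundaries as timestamps via start_time and end_time; a quick sketch:

p = pd.Period("2012-01", freq="M")

p.start_time  # Timestamp('2012-01-01 00:00:00')
p.end_time    # Timestamp('2012-01-31 23:59:59.999999999')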
Adding and subtracting integers from periods shifts the period by its own frequency. Arithmetic is not allowed between Period objects with different freq (span).
In [356]: p = pd.Period("2012", freq="A-DEC")
In [357]: p + 1
Out[357]: Period('2013', 'A-DEC')
In [358]: p - 3
Out[358]: Period('2009', 'A-DEC')
In [359]: p = pd.Period("2012-01", freq="2M")
In [360]: p + 2
Out[360]: Period('2012-05', '2M')
In [361]: p - 1
Out[361]: Period('2011-11', '2M')
In [362]: p == pd.Period("2012-01", freq="3M")
Out[362]: False
If the Period freq is daily or higher (D, H, T, S, L, U, N), offsets and timedelta-like values can be added if the result can have the same freq. Otherwise, ValueError will be raised.
In [363]: p = pd.Period("2014-07-01 09:00", freq="H")
In [364]: p + pd.offsets.Hour(2)
Out[364]: Period('2014-07-01 11:00', 'H')
In [365]: p + datetime.timedelta(minutes=120)
Out[365]: Period('2014-07-01 11:00', 'H')
In [366]: p + np.timedelta64(7200, "s")
Out[366]: Period('2014-07-01 11:00', 'H')
In [367]: p + pd.offsets.Minute(5)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File period.pyx:1821, in pandas._libs.tslibs.period._Period._add_timedeltalike_scalar()
File timedeltas.pyx:282, in pandas._libs.tslibs.timedeltas.delta_to_nanoseconds()
File np_datetime.pyx:608, in pandas._libs.tslibs.np_datetime.convert_reso()
ValueError: Cannot losslessly convert units
The above exception was the direct cause of the following exception:
IncompatibleFrequency Traceback (most recent call last)
Cell In[367], line 1
----> 1 p + pd.offsets.Minute(5)
File period.pyx:1848, in pandas._libs.tslibs.period._Period.__add__()
File period.pyx:1823, in pandas._libs.tslibs.period._Period._add_timedeltalike_scalar()
IncompatibleFrequency: Input cannot be converted to Period(freq=H)
If a Period has another frequency, only offsets of the same frequency can be added. Otherwise, ValueError will be raised.
In [368]: p = pd.Period("2014-07", freq="M")
In [369]: p + pd.offsets.MonthEnd(3)
Out[369]: Period('2014-10', 'M')
In [370]: p + pd.offsets.MonthBegin(3)
---------------------------------------------------------------------------
IncompatibleFrequency Traceback (most recent call last)
Cell In[370], line 1
----> 1 p + pd.offsets.MonthBegin(3)
File period.pyx:1850, in pandas._libs.tslibs.period._Period.__add__()
File period.pyx:1834, in pandas._libs.tslibs.period._Period._add_offset()
File period.pyx:1720, in pandas._libs.tslibs.period.PeriodMixin._require_matching_freq()
IncompatibleFrequency: Input has different freq=3MS from Period(freq=M)
Taking the difference of Period
instances with the same frequency will
return the number of frequency units between them:
In [371]: pd.Period("2012", freq="A-DEC") - pd.Period("2002", freq="A-DEC")
Out[371]: <10 * YearEnds: month=12>
PeriodIndex and period_range#
Regular sequences of Period
objects can be collected in a PeriodIndex
,
which can be constructed using the period_range
convenience function:
In [372]: prng = pd.period_range("1/1/2011", "1/1/2012", freq="M")
In [373]: prng
Out[373]:
PeriodIndex(['2011-01', '2011-02', '2011-03', '2011-04', '2011-05', '2011-06',
'2011-07', '2011-08', '2011-09', '2011-10', '2011-11', '2011-12',
'2012-01'],
dtype='period[M]')
The PeriodIndex
constructor can also be used directly:
In [374]: pd.PeriodIndex(["2011-1", "2011-2", "2011-3"], freq="M")
Out[374]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
Passing a multiplied frequency outputs a sequence of Period objects with the multiplied span.
In [375]: pd.period_range(start="2014-01", freq="3M", periods=4)
Out[375]: PeriodIndex(['2014-01', '2014-04', '2014-07', '2014-10'], dtype='period[3M]')
If start
or end
are Period
objects, they will be used as anchor
endpoints for a PeriodIndex
with frequency matching that of the
PeriodIndex
constructor.
In [376]: pd.period_range(
.....: start=pd.Period("2017Q1", freq="Q"), end=pd.Period("2017Q2", freq="Q"), freq="M"
.....: )
.....:
Out[376]: PeriodIndex(['2017-03', '2017-04', '2017-05', '2017-06'], dtype='period[M]')
Just like DatetimeIndex
, a PeriodIndex
can also be used to index pandas
objects:
In [377]: ps = pd.Series(np.random.randn(len(prng)), prng)
In [378]: ps
Out[378]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
PeriodIndex
supports addition and subtraction with the same rule as Period
.
In [379]: idx = pd.period_range("2014-07-01 09:00", periods=5, freq="H")
In [380]: idx
Out[380]:
PeriodIndex(['2014-07-01 09:00', '2014-07-01 10:00', '2014-07-01 11:00',
'2014-07-01 12:00', '2014-07-01 13:00'],
dtype='period[H]')
In [381]: idx + pd.offsets.Hour(2)
Out[381]:
PeriodIndex(['2014-07-01 11:00', '2014-07-01 12:00', '2014-07-01 13:00',
'2014-07-01 14:00', '2014-07-01 15:00'],
dtype='period[H]')
In [382]: idx = pd.period_range("2014-07", periods=5, freq="M")
In [383]: idx
Out[383]: PeriodIndex(['2014-07', '2014-08', '2014-09', '2014-10', '2014-11'], dtype='period[M]')
In [384]: idx + pd.offsets.MonthEnd(3)
Out[384]: PeriodIndex(['2014-10', '2014-11', '2014-12', '2015-01', '2015-02'], dtype='period[M]')
PeriodIndex
has its own dtype named period
, refer to Period Dtypes.
Period dtypes#
PeriodIndex
has a custom period
dtype. This is a pandas extension
dtype similar to the timezone aware dtype (datetime64[ns, tz]
).
The period
dtype holds the freq
attribute and is represented with
period[freq]
like period[D]
or period[M]
, using frequency strings.
In [385]: pi = pd.period_range("2016-01-01", periods=3, freq="M")
In [386]: pi
Out[386]: PeriodIndex(['2016-01', '2016-02', '2016-03'], dtype='period[M]')
In [387]: pi.dtype
Out[387]: period[M]
The period
dtype can be used in .astype(...)
. It allows one to change the
freq
of a PeriodIndex
like .asfreq()
and convert a
DatetimeIndex
to PeriodIndex
like to_period()
:
# change monthly freq to daily freq
In [388]: pi.astype("period[D]")
Out[388]: PeriodIndex(['2016-01-31', '2016-02-29', '2016-03-31'], dtype='period[D]')
# convert to DatetimeIndex
In [389]: pi.astype("datetime64[ns]")
Out[389]: DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'], dtype='datetime64[ns]', freq='MS')
# convert to PeriodIndex
In [390]: dti = pd.date_range("2011-01-01", freq="M", periods=3)
In [391]: dti
Out[391]: DatetimeIndex(['2011-01-31', '2011-02-28', '2011-03-31'], dtype='datetime64[ns]', freq='M')
In [392]: dti.astype("period[M]")
Out[392]: PeriodIndex(['2011-01', '2011-02', '2011-03'], dtype='period[M]')
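The same conversions are available through the dedicated methods; a sketch using the dti and pi objects from above:

dti.to_period("M")        # DatetimeIndex -> PeriodIndex(['2011-01', '2011-02', '2011-03'])
pi.to_timestamp(how="s")  # PeriodIndex -> DatetimeIndex(['2016-01-01', '2016-02-01', '2016-03-01'])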
PeriodIndex partial string indexing#
PeriodIndex now supports partial string slicing with non-monotonic indexes.
You can pass in dates and strings to Series
and DataFrame
with PeriodIndex
, in the same manner as DatetimeIndex
. For details, refer to DatetimeIndex Partial String Indexing.
In [393]: ps["2011-01"]
Out[393]: -2.9169013294054507
In [394]: ps[datetime.datetime(2011, 12, 25):]
Out[394]:
2011-12 2.261385
2012-01 -0.329583
Freq: M, dtype: float64
In [395]: ps["10/31/2011":"12/31/2011"]
Out[395]:
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
Passing a string representing a lower frequency than PeriodIndex
returns partial sliced data.
In [396]: ps["2011"]
Out[396]:
2011-01 -2.916901
2011-02 0.514474
2011-03 1.346470
2011-04 0.816397
2011-05 2.258648
2011-06 0.494789
2011-07 0.301239
2011-08 0.464776
2011-09 -1.393581
2011-10 0.056780
2011-11 0.197035
2011-12 2.261385
Freq: M, dtype: float64
In [397]: dfp = pd.DataFrame(
.....: np.random.randn(600, 1),
.....: columns=["A"],
.....: index=pd.period_range("2013-01-01 9:00", periods=600, freq="T"),
.....: )
.....:
In [398]: dfp
Out[398]:
A
2013-01-01 09:00 -0.538468
2013-01-01 09:01 -1.365819
2013-01-01 09:02 -0.969051
2013-01-01 09:03 -0.331152
2013-01-01 09:04 -0.245334
... ...
2013-01-01 18:55 0.522460
2013-01-01 18:56 0.118710
2013-01-01 18:57 0.167517
2013-01-01 18:58 0.922883
2013-01-01 18:59 1.721104
[600 rows x 1 columns]
In [399]: dfp.loc["2013-01-01 10H"]
Out[399]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 10:55 -0.865621
2013-01-01 10:56 -1.167818
2013-01-01 10:57 -2.081748
2013-01-01 10:58 -0.527146
2013-01-01 10:59 0.802298
[60 rows x 1 columns]
As with DatetimeIndex
, the endpoints will be included in the result. The example below slices data starting from 10:00 to 11:59.
In [400]: dfp["2013-01-01 10H":"2013-01-01 11H"]
Out[400]:
A
2013-01-01 10:00 -0.308975
2013-01-01 10:01 0.542520
2013-01-01 10:02 1.061068
2013-01-01 10:03 0.754005
2013-01-01 10:04 0.352933
... ...
2013-01-01 11:55 -0.590204
2013-01-01 11:56 1.539990
2013-01-01 11:57 -1.224826
2013-01-01 11:58 0.578798
2013-01-01 11:59 -0.685496
[120 rows x 1 columns]
Frequency conversion and resampling with PeriodIndex#
The frequency of Period
and PeriodIndex
can be converted via the asfreq
method. Let’s start with the fiscal year 2011, ending in December:
In [401]: p = pd.Period("2011", freq="A-DEC")
In [402]: p
Out[402]: Period('2011', 'A-DEC')
We can convert it to a monthly frequency. Using the how
parameter, we can
specify whether to return the starting or ending month:
In [403]: p.asfreq("M", how="start")
Out[403]: Period('2011-01', 'M')
In [404]: p.asfreq("M", how="end")
Out[404]: Period('2011-12', 'M')
The shorthands ‘s’ and ‘e’ are provided for convenience:
In [405]: p.asfreq("M", "s")
Out[405]: Period('2011-01', 'M')
In [406]: p.asfreq("M", "e")
Out[406]: Period('2011-12', 'M')
Converting to a “super-period” (e.g., annual frequency is a super-period of quarterly frequency) automatically returns the super-period that includes the input period:
In [407]: p = pd.Period("2011-12", freq="M")
In [408]: p.asfreq("A-NOV")
Out[408]: Period('2012', 'A-NOV')
Note that since we converted to an annual frequency that ends the year in November, the monthly period of December 2011 is actually in the 2012 A-NOV period.
Period conversions with anchored frequencies are particularly useful for
working with various quarterly data common to economics, business, and other
fields. Many organizations define quarters relative to the month in which their
fiscal year starts and ends. Thus, the first quarter of 2011 could start in 2010 or a few months into 2011. Via anchored frequencies, pandas works for all quarterly
frequencies Q-JAN
through Q-DEC
.
Q-DEC defines regular calendar quarters:
In [409]: p = pd.Period("2012Q1", freq="Q-DEC")
In [410]: p.asfreq("D", "s")
Out[410]: Period('2012-01-01', 'D')
In [411]: p.asfreq("D", "e")
Out[411]: Period('2012-03-31', 'D')
Q-MAR
defines fiscal year end in March:
In [412]: p = pd.Period("2011Q4", freq="Q-MAR")
In [413]: p.asfreq("D", "s")
Out[413]: Period('2011-01-01', 'D')
In [414]: p.asfreq("D", "e")
Out[414]: Period('2011-03-31', 'D')
Converting between representations#
Timestamped data can be converted to PeriodIndex-ed data using to_period
and vice-versa using to_timestamp
:
In [415]: rng = pd.date_range("1/1/2012", periods=5, freq="M")
In [416]: ts = pd.Series(np.random.randn(len(rng)), index=rng)
In [417]: ts
Out[417]:
2012-01-31 1.931253
2012-02-29 -0.184594
2012-03-31 0.249656
2012-04-30 -0.978151
2012-05-31 -0.873389
Freq: M, dtype: float64
In [418]: ps = ts.to_period()
In [419]: ps
Out[419]:
2012-01 1.931253
2012-02 -0.184594
2012-03 0.249656
2012-04 -0.978151
2012-05 -0.873389
Freq: M, dtype: float64
In [420]: ps.to_timestamp()
Out[420]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Remember that ‘s’ and ‘e’ can be used to return the timestamps at the start or end of the period:
In [421]: ps.to_timestamp("D", how="s")
Out[421]:
2012-01-01 1.931253
2012-02-01 -0.184594
2012-03-01 0.249656
2012-04-01 -0.978151
2012-05-01 -0.873389
Freq: MS, dtype: float64
Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:
In [422]: prng = pd.period_range("1990Q1", "2000Q4", freq="Q-NOV")
In [423]: ts = pd.Series(np.random.randn(len(prng)), prng)
In [424]: ts.index = (prng.asfreq("M", "e") + 1).asfreq("H", "s") + 9
In [425]: ts.head()
Out[425]:
1990-03-01 09:00 -0.109291
1990-06-01 09:00 -0.637235
1990-09-01 09:00 -1.735925
1990-12-01 09:00 2.096946
1991-03-01 09:00 -1.039926
Freq: H, dtype: float64
Representing out-of-bounds spans#
If you have data that is outside of the Timestamp
bounds, see Timestamp limitations,
then you can use a PeriodIndex
and/or Series
of Periods
to do computations.
In [426]: span = pd.period_range("1215-01-01", "1381-01-01", freq="D")
In [427]: span
Out[427]:
PeriodIndex(['1215-01-01', '1215-01-02', '1215-01-03', '1215-01-04',
'1215-01-05', '1215-01-06', '1215-01-07', '1215-01-08',
'1215-01-09', '1215-01-10',
...
'1380-12-23', '1380-12-24', '1380-12-25', '1380-12-26',
'1380-12-27', '1380-12-28', '1380-12-29', '1380-12-30',
'1380-12-31', '1381-01-01'],
dtype='period[D]', length=60632)
To convert from an int64-based YYYYMMDD representation:
In [428]: s = pd.Series([20121231, 20141130, 99991231])
In [429]: s
Out[429]:
0 20121231
1 20141130
2 99991231
dtype: int64
In [430]: def conv(x):
.....: return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D")
.....:
In [431]: s.apply(conv)
Out[431]:
0 2012-12-31
1 2014-11-30
2 9999-12-31
dtype: period[D]
In [432]: s.apply(conv)[2]
Out[432]: Period('9999-12-31', 'D')
These can easily be converted to a PeriodIndex
:
In [433]: span = pd.PeriodIndex(s.apply(conv))
In [434]: span
Out[434]: PeriodIndex(['2012-12-31', '2014-11-30', '9999-12-31'], dtype='period[D]')
Time zone handling#
pandas provides rich support for working with timestamps in different time
zones using the pytz
and dateutil
libraries or datetime.timezone
objects from the standard library.
Working with time zones#
By default, pandas objects are time zone unaware:
In [435]: rng = pd.date_range("3/6/2012 00:00", periods=15, freq="D")
In [436]: rng.tz is None
Out[436]: True
To localize these dates to a time zone (assign a particular time zone to a naive date),
you can use the tz_localize
method or the tz
keyword argument in
date_range()
, Timestamp
, or DatetimeIndex
.
You can either pass pytz
or dateutil
time zone objects or Olson time zone database strings.
Olson time zone strings will return pytz
time zone objects by default.
To return dateutil
time zone objects, append dateutil/
before the string.
In pytz you can find a list of common (and less common) time zones using from pytz import common_timezones, all_timezones. dateutil uses the OS time zones so there isn’t a fixed list available. For common zones, the names are the same as in pytz.
In [437]: import dateutil
# pytz
In [438]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz="Europe/London")
In [439]: rng_pytz.tz
Out[439]: <DstTzInfo 'Europe/London' LMT-1 day, 23:59:00 STD>
# dateutil
In [440]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [441]: rng_dateutil = rng_dateutil.tz_localize("dateutil/Europe/London")
In [442]: rng_dateutil.tz
Out[442]: tzfile('/usr/share/zoneinfo/Europe/London')
# dateutil - utc special case
In [443]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=dateutil.tz.tzutc(),
.....: )
.....:
In [444]: rng_utc.tz
Out[444]: tzutc()
# datetime.timezone
In [445]: rng_utc = pd.date_range(
.....: "3/6/2012 00:00",
.....: periods=3,
.....: freq="D",
.....: tz=datetime.timezone.utc,
.....: )
.....:
In [446]: rng_utc.tz
Out[446]: datetime.timezone.utc
Note that the UTC
time zone is a special case in dateutil
and should be constructed explicitly
as an instance of dateutil.tz.tzutc
. You can also construct other time zone objects explicitly first.
In [447]: import pytz
# pytz
In [448]: tz_pytz = pytz.timezone("Europe/London")
In [449]: rng_pytz = pd.date_range("3/6/2012 00:00", periods=3, freq="D")
In [450]: rng_pytz = rng_pytz.tz_localize(tz_pytz)
In [451]: rng_pytz.tz == tz_pytz
Out[451]: True
# dateutil
In [452]: tz_dateutil = dateutil.tz.gettz("Europe/London")
In [453]: rng_dateutil = pd.date_range("3/6/2012 00:00", periods=3, freq="D", tz=tz_dateutil)
In [454]: rng_dateutil.tz == tz_dateutil
Out[454]: True
To convert a time zone aware pandas object from one time zone to another,
you can use the tz_convert
method.
In [455]: rng_pytz.tz_convert("US/Eastern")
Out[455]:
DatetimeIndex(['2012-03-05 19:00:00-05:00', '2012-03-06 19:00:00-05:00',
'2012-03-07 19:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Note
When using pytz
time zones, DatetimeIndex
will construct a different
time zone object than a Timestamp
for the same time zone input. A DatetimeIndex
can hold a collection of Timestamp
objects that may have different UTC offsets and cannot be
succinctly represented by one pytz
time zone instance while one Timestamp
represents one point in time with a specific UTC offset.
In [456]: dti = pd.date_range("2019-01-01", periods=3, freq="D", tz="US/Pacific")
In [457]: dti.tz
Out[457]: <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>
In [458]: ts = pd.Timestamp("2019-01-01", tz="US/Pacific")
In [459]: ts.tz
Out[459]: <DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>
Warning
Be wary of conversions between libraries. For some time zones, pytz
and dateutil
have different
definitions of the zone. This is more of a problem for unusual time zones than for
‘standard’ zones like US/Eastern
.
Warning
Be aware that a time zone definition across versions of time zone libraries may not be considered equal. This may cause problems when working with stored data that is localized using one version and operated on with a different version. See here for how to handle such a situation.
Warning
For pytz
time zones, it is incorrect to pass a time zone object directly into
the datetime.datetime
constructor
(e.g., datetime.datetime(2011, 1, 1, tzinfo=pytz.timezone('US/Eastern'))).
Instead, the datetime needs to be localized using the localize
method
on the pytz
time zone object.
Warning
Be aware that for times in the future, correct conversion between time zones (and UTC) cannot be guaranteed by any time zone library because a timezone’s offset from UTC may be changed by the respective government.
Warning
If you are using dates beyond 2038-01-18, due to current deficiencies in the underlying libraries caused by the year 2038 problem, daylight saving time (DST) adjustments to timezone aware dates will not be applied. If and when the underlying libraries are fixed, the DST transitions will be applied.
For example, for two dates that are in British Summer Time (and so would normally be GMT+1), both the following asserts evaluate as true:
In [460]: d_2037 = "2037-03-31T010101"
In [461]: d_2038 = "2038-03-31T010101"
In [462]: DST = "Europe/London"
In [463]: assert pd.Timestamp(d_2037, tz=DST) != pd.Timestamp(d_2037, tz="GMT")
In [464]: assert pd.Timestamp(d_2038, tz=DST) == pd.Timestamp(d_2038, tz="GMT")
Under the hood, all timestamps are stored in UTC. Values from a time zone aware
DatetimeIndex
or Timestamp
will have their fields (day, hour, minute, etc.)
localized to the time zone. However, timestamps with the same UTC value are
still considered to be equal even if they are in different time zones:
In [465]: rng_eastern = rng_utc.tz_convert("US/Eastern")
In [466]: rng_berlin = rng_utc.tz_convert("Europe/Berlin")
In [467]: rng_eastern[2]
Out[467]: Timestamp('2012-03-07 19:00:00-0500', tz='US/Eastern')
In [468]: rng_berlin[2]
Out[468]: Timestamp('2012-03-08 01:00:00+0100', tz='Europe/Berlin')
In [469]: rng_eastern[2] == rng_berlin[2]
Out[469]: True
Operations between Series
in different time zones will yield UTC
Series
, aligning the data on the UTC timestamps:
In [470]: ts_utc = pd.Series(range(3), pd.date_range("20130101", periods=3, tz="UTC"))
In [471]: eastern = ts_utc.tz_convert("US/Eastern")
In [472]: berlin = ts_utc.tz_convert("Europe/Berlin")
In [473]: result = eastern + berlin
In [474]: result
Out[474]:
2013-01-01 00:00:00+00:00 0
2013-01-02 00:00:00+00:00 2
2013-01-03 00:00:00+00:00 4
Freq: D, dtype: int64
In [475]: result.index
Out[475]:
DatetimeIndex(['2013-01-01 00:00:00+00:00', '2013-01-02 00:00:00+00:00',
'2013-01-03 00:00:00+00:00'],
dtype='datetime64[ns, UTC]', freq='D')
To remove time zone information, use tz_localize(None)
or tz_convert(None)
.
tz_localize(None)
will remove the time zone yielding the local time representation.
tz_convert(None)
will remove the time zone after converting to UTC time.
In [476]: didx = pd.date_range(start="2014-08-01 09:00", freq="H", periods=3, tz="US/Eastern")
In [477]: didx
Out[477]:
DatetimeIndex(['2014-08-01 09:00:00-04:00', '2014-08-01 10:00:00-04:00',
'2014-08-01 11:00:00-04:00'],
dtype='datetime64[ns, US/Eastern]', freq='H')
In [478]: didx.tz_localize(None)
Out[478]:
DatetimeIndex(['2014-08-01 09:00:00', '2014-08-01 10:00:00',
'2014-08-01 11:00:00'],
dtype='datetime64[ns]', freq=None)
In [479]: didx.tz_convert(None)
Out[479]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq='H')
# tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
In [480]: didx.tz_convert("UTC").tz_localize(None)
Out[480]:
DatetimeIndex(['2014-08-01 13:00:00', '2014-08-01 14:00:00',
'2014-08-01 15:00:00'],
dtype='datetime64[ns]', freq=None)
Fold#
For ambiguous times, pandas supports explicitly specifying the keyword-only fold argument.
Due to daylight saving time, one wall clock time can occur twice when shifting
from summer to winter time; fold describes whether the datetime-like corresponds
to the first (0) or the second time (1) the wall clock hits the ambiguous time.
Fold is supported only when constructing from a naive datetime.datetime (see the datetime documentation for details), from a Timestamp, or from components (see below). Only dateutil timezones are supported (see the dateutil documentation for dateutil methods that deal with ambiguous datetimes), as pytz timezones do not support fold (see the pytz documentation for details on how pytz deals with ambiguous datetimes). To localize an ambiguous datetime with pytz, please use Timestamp.tz_localize(). In general, we recommend relying on Timestamp.tz_localize() when localizing ambiguous datetimes if you need direct control over how they are handled.
In [481]: pd.Timestamp(
.....: datetime.datetime(2019, 10, 27, 1, 30, 0, 0),
.....: tz="dateutil/Europe/London",
.....: fold=0,
.....: )
.....:
Out[481]: Timestamp('2019-10-27 01:30:00+0100', tz='dateutil//usr/share/zoneinfo/Europe/London')
In [482]: pd.Timestamp(
.....: year=2019,
.....: month=10,
.....: day=27,
.....: hour=1,
.....: minute=30,
.....: tz="dateutil/Europe/London",
.....: fold=1,
.....: )
.....:
Out[482]: Timestamp('2019-10-27 01:30:00+0000', tz='dateutil//usr/share/zoneinfo/Europe/London')
Ambiguous times when localizing#
tz_localize
may not be able to determine the UTC offset of a timestamp
because daylight savings time (DST) in a local time zone causes some times to occur
twice within one day (“clocks fall back”). The following options are available:
'raise': Raises a pytz.AmbiguousTimeError (the default behavior)
'infer': Attempt to determine the correct offset based on the monotonicity of the timestamps
'NaT': Replaces ambiguous times with NaT
bool: True represents a DST time, False represents non-DST time. An array-like of bool values is supported for a sequence of times.
In [483]: rng_hourly = pd.DatetimeIndex(
.....: ["11/06/2011 00:00", "11/06/2011 01:00", "11/06/2011 01:00", "11/06/2011 02:00"]
.....: )
.....:
This will fail as there are ambiguous times ('11/06/2011 01:00').
In [484]: rng_hourly.tz_localize('US/Eastern')
---------------------------------------------------------------------------
AmbiguousTimeError Traceback (most recent call last)
Cell In[484], line 1
----> 1 rng_hourly.tz_localize('US/Eastern')
File ~/work/pandas/pandas/pandas/core/indexes/datetimes.py:291, in DatetimeIndex.tz_localize(self, tz, ambiguous, nonexistent)
284 @doc(DatetimeArray.tz_localize)
285 def tz_localize(
286 self,
(...)
289 nonexistent: TimeNonexistent = "raise",
290 ) -> Self:
--> 291 arr = self._data.tz_localize(tz, ambiguous, nonexistent)
292 return type(self)._simple_new(arr, name=self.name)
File ~/work/pandas/pandas/pandas/core/arrays/_mixins.py:80, in ravel_compat.<locals>.method(self, *args, **kwargs)
77 @wraps(meth)
78 def method(self, *args, **kwargs):
79 if self.ndim == 1:
---> 80 return meth(self, *args, **kwargs)
82 flags = self._ndarray.flags
83 flat = self.ravel("K")
File ~/work/pandas/pandas/pandas/core/arrays/datetimes.py:1066, in DatetimeArray.tz_localize(self, tz, ambiguous, nonexistent)
1063 tz = timezones.maybe_get_tz(tz)
1064 # Convert to UTC
-> 1066 new_dates = tzconversion.tz_localize_to_utc(
1067 self.asi8,
1068 tz,
1069 ambiguous=ambiguous,
1070 nonexistent=nonexistent,
1071 creso=self._creso,
1072 )
1073 new_dates_dt64 = new_dates.view(f"M8[{self.unit}]")
1074 dtype = tz_to_dtype(tz, unit=self.unit)
File tzconversion.pyx:368, in pandas._libs.tslibs.tzconversion.tz_localize_to_utc()
AmbiguousTimeError: Cannot infer dst time from 2011-11-06 01:00:00, try using the 'ambiguous' argument
Handle these ambiguous times by specifying the following.
In [485]: rng_hourly.tz_localize("US/Eastern", ambiguous="infer")
Out[485]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [486]: rng_hourly.tz_localize("US/Eastern", ambiguous="NaT")
Out[486]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', 'NaT', 'NaT',
'2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
In [487]: rng_hourly.tz_localize("US/Eastern", ambiguous=[True, True, False, False])
Out[487]:
DatetimeIndex(['2011-11-06 00:00:00-04:00', '2011-11-06 01:00:00-04:00',
'2011-11-06 01:00:00-05:00', '2011-11-06 02:00:00-05:00'],
dtype='datetime64[ns, US/Eastern]', freq=None)
Nonexistent times when localizing#
A DST transition may also shift the local time ahead by 1 hour creating nonexistent
local times (“clocks spring forward”). The behavior of localizing a timeseries with nonexistent times
can be controlled by the nonexistent
argument. The following options are available:
'raise': Raises a pytz.NonExistentTimeError (the default behavior)
'NaT': Replaces nonexistent times with NaT
'shift_forward': Shifts nonexistent times forward to the closest real time
'shift_backward': Shifts nonexistent times backward to the closest real time
timedelta object: Shifts nonexistent times by the timedelta duration
In [488]: dti = pd.date_range(start="2015-03-29 02:30:00", periods=3, freq="H")
# 2:30 is a nonexistent time
Localization of nonexistent times will raise an error by default.
In [489]: dti.tz_localize('Europe/Warsaw')
---------------------------------------------------------------------------
NonExistentTimeError Traceback (most recent call last)
Cell In[489], line 1
----> 1 dti.tz_localize('Europe/Warsaw')
File ~/work/pandas/pandas/pandas/core/indexes/datetimes.py:291, in DatetimeIndex.tz_localize(self, tz, ambiguous, nonexistent)
284 @doc(DatetimeArray.tz_localize)
285 def tz_localize(
286 self,
(...)
289 nonexistent: TimeNonexistent = "raise",
290 ) -> Self:
--> 291 arr = self._data.tz_localize(tz, ambiguous, nonexistent)
292 return type(self)._simple_new(arr, name=self.name)
File ~/work/pandas/pandas/pandas/core/arrays/_mixins.py:80, in ravel_compat.<locals>.method(self, *args, **kwargs)
77 @wraps(meth)
78 def method(self, *args, **kwargs):
79 if self.ndim == 1:
---> 80 return meth(self, *args, **kwargs)
82 flags = self._ndarray.flags
83 flat = self.ravel("K")
File ~/work/pandas/pandas/pandas/core/arrays/datetimes.py:1066, in DatetimeArray.tz_localize(self, tz, ambiguous, nonexistent)
1063 tz = timezones.maybe_get_tz(tz)
1064 # Convert to UTC
-> 1066 new_dates = tzconversion.tz_localize_to_utc(
1067 self.asi8,
1068 tz,
1069 ambiguous=ambiguous,
1070 nonexistent=nonexistent,
1071 creso=self._creso,
1072 )
1073 new_dates_dt64 = new_dates.view(f"M8[{self.unit}]")
1074 dtype = tz_to_dtype(tz, unit=self.unit)
File tzconversion.pyx:423, in pandas._libs.tslibs.tzconversion.tz_localize_to_utc()
NonExistentTimeError: 2015-03-29 02:30:00
Transform nonexistent times to NaT or shift the times.
In [490]: dti
Out[490]:
DatetimeIndex(['2015-03-29 02:30:00', '2015-03-29 03:30:00',
'2015-03-29 04:30:00'],
dtype='datetime64[ns]', freq='H')
In [491]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_forward")
Out[491]:
DatetimeIndex(['2015-03-29 03:00:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [492]: dti.tz_localize("Europe/Warsaw", nonexistent="shift_backward")
Out[492]:
DatetimeIndex(['2015-03-29 01:59:59.999999999+01:00',
'2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [493]: dti.tz_localize("Europe/Warsaw", nonexistent=pd.Timedelta(1, unit="H"))
Out[493]:
DatetimeIndex(['2015-03-29 03:30:00+02:00', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
In [494]: dti.tz_localize("Europe/Warsaw", nonexistent="NaT")
Out[494]:
DatetimeIndex(['NaT', '2015-03-29 03:30:00+02:00',
'2015-03-29 04:30:00+02:00'],
dtype='datetime64[ns, Europe/Warsaw]', freq=None)
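In practice, the ambiguous and nonexistent arguments are often combined when localizing arbitrary wall-clock data. A minimal sketch, assuming you are willing to lose both kinds of problem timestamps; the helper name safe_localize is hypothetical, not a pandas API.

import pandas as pd

def safe_localize(index, tz):
    # Turn both ambiguous (fall-back) and nonexistent (spring-forward)
    # wall-clock times into NaT instead of raising.
    return index.tz_localize(tz, ambiguous="NaT", nonexistent="NaT")

dti = pd.date_range(start="2015-03-29 02:30:00", periods=3, freq="H")
safe_localize(dti, "Europe/Warsaw")  # the nonexistent 02:30 becomes NaT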
Time zone Series operations#
A Series with time zone naive values is represented with a dtype of datetime64[ns].
In [495]: s_naive = pd.Series(pd.date_range("20130101", periods=3))
In [496]: s_naive
Out[496]:
0 2013-01-01
1 2013-01-02
2 2013-01-03
dtype: datetime64[ns]
A Series with time zone aware values is represented with a dtype of datetime64[ns, tz] where tz is the time zone.
In [497]: s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
In [498]: s_aware
Out[498]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
The time zone information of both of these Series can be manipulated via the .dt accessor; see the dt accessor section. For example, to localize a naive Series and convert it to a time zone aware one:
In [499]: s_naive.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
Out[499]:
0 2012-12-31 19:00:00-05:00
1 2013-01-01 19:00:00-05:00
2 2013-01-02 19:00:00-05:00
dtype: datetime64[ns, US/Eastern]
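When it is not obvious whether a Series is naive or aware, its dt.tz attribute can be inspected before choosing between tz_localize and tz_convert. A small sketch:

import pandas as pd

s_naive = pd.Series(pd.date_range("20130101", periods=3))
s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))

print(s_naive.dt.tz)  # None: naive, so tz_localize first
print(s_aware.dt.tz)  # US/Eastern tzinfo: aware, so tz_convert directly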
Time zone information can also be manipulated using the astype method. This method can convert between different timezone-aware dtypes.
# convert to a new time zone
In [500]: s_aware.astype("datetime64[ns, CET]")
Out[500]:
0 2013-01-01 06:00:00+01:00
1 2013-01-02 06:00:00+01:00
2 2013-01-03 06:00:00+01:00
dtype: datetime64[ns, CET]
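This astype conversion should agree with converting through the dt accessor; a quick sketch checking that equivalence:

import pandas as pd

s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))

# astype between tz-aware dtypes and dt.tz_convert both re-express the
# same instants in the target zone, so the results should be equal.
assert s_aware.astype("datetime64[ns, CET]").equals(s_aware.dt.tz_convert("CET"))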
Note
Using Series.to_numpy() on a Series returns a NumPy array of the data. NumPy does not currently support time zones (even though it is printing in the local time zone!), therefore an object array of Timestamps is returned for time zone aware data:
In [501]: s_naive.to_numpy()
Out[501]:
array(['2013-01-01T00:00:00.000000000', '2013-01-02T00:00:00.000000000',
'2013-01-03T00:00:00.000000000'], dtype='datetime64[ns]')
In [502]: s_aware.to_numpy()
Out[502]:
array([Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-02 00:00:00-0500', tz='US/Eastern'),
Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')],
dtype=object)
Converting to an object array of Timestamps preserves the time zone information. For example, when converting back to a Series:
In [503]: pd.Series(s_aware.to_numpy())
Out[503]:
0 2013-01-01 00:00:00-05:00
1 2013-01-02 00:00:00-05:00
2 2013-01-03 00:00:00-05:00
dtype: datetime64[ns, US/Eastern]
However, if you want an actual NumPy datetime64[ns] array (with the values converted to UTC) instead of an array of objects, you can specify the dtype argument:
In [504]: s_aware.to_numpy(dtype="datetime64[ns]")
Out[504]:
array(['2013-01-01T05:00:00.000000000', '2013-01-02T05:00:00.000000000',
'2013-01-03T05:00:00.000000000'], dtype='datetime64[ns]')
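Because the values in that array are UTC, one way (a sketch, not the only route) to rebuild a time zone aware Series from it is to localize to UTC and convert back:

import pandas as pd

s_aware = pd.Series(pd.date_range("20130101", periods=3, tz="US/Eastern"))
utc_values = s_aware.to_numpy(dtype="datetime64[ns]")

# The raw values are UTC instants, so localize there first,
# then convert to the desired display time zone.
restored = pd.Series(utc_values).dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
assert restored.equals(s_aware)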