What’s new in 0.24.0 (January 25, 2019)#
Warning
The 0.24.x series of releases will be the last to support Python 2. Future feature releases will support Python 3 only. See Dropping Python 2.7 for more details.
This is a major release from 0.23.4 and includes a number of API changes, new features, enhancements, and performance improvements along with a large number of bug fixes.
Highlights include:
- Optional integer NA support
- New APIs for accessing the array backing a Series or Index
- A new top-level method for creating arrays
- Storing Interval and Period data in a Series or DataFrame
- Joining with two multi-indexes
Check the API Changes and deprecations before updating.
These are the changes in pandas 0.24.0. See Release notes for a full changelog including other versions of pandas.
Enhancements#
Optional integer NA support#
pandas has gained the ability to hold integer dtypes with missing values. This long-requested feature is enabled through the use of extension types.
Note
IntegerArray is currently experimental. Its API or implementation may change without warning.
We can construct a Series with the specified dtype. The dtype string Int64 is a pandas ExtensionDtype. Specifying a list or array using the traditional missing value marker of np.nan will infer to integer dtype. The display of the Series will also use the NaN to indicate missing values in string outputs. (GH20700, GH20747, GH22441, GH21789, GH22346)
In [1]: s = pd.Series([1, 2, np.nan], dtype='Int64')
In [2]: s
Out[2]:
0 1
1 2
2 <NA>
Length: 3, dtype: Int64
Operations on these dtypes will propagate NaN, as in other pandas operations.
# arithmetic
In [3]: s + 1
Out[3]:
0 2
1 3
2 <NA>
Length: 3, dtype: Int64
# comparison
In [4]: s == 1
Out[4]:
0 True
1 False
2 <NA>
Length: 3, dtype: boolean
# indexing
In [5]: s.iloc[1:3]
Out[5]:
1 2
2 <NA>
Length: 2, dtype: Int64
# operate with other dtypes
In [6]: s + s.iloc[1:3].astype('Int8')
Out[6]:
0 <NA>
1 4
2 <NA>
Length: 3, dtype: Int64
# coerce when needed
In [7]: s + 0.01
Out[7]:
0 1.01
1 2.01
2 <NA>
Length: 3, dtype: Float64
These dtypes can operate as part of a DataFrame.
In [8]: df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})
In [9]: df
Out[9]:
A B C
0 1 1 a
1 2 1 a
2 <NA> 3 b
[3 rows x 3 columns]
In [10]: df.dtypes
Out[10]:
A Int64
B int64
C object
Length: 3, dtype: object
These dtypes can be merged, reshaped, and cast.
In [11]: pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
Out[11]:
A Int64
B int64
C object
Length: 3, dtype: object
In [12]: df['A'].astype(float)
Out[12]:
0 1.0
1 2.0
2 NaN
Name: A, Length: 3, dtype: float64
Reduction and groupby operations such as sum work.
In [13]: df.sum()
Out[13]:
A 3
B 5
C aab
Length: 3, dtype: object
In [14]: df.groupby('B').A.sum()
Out[14]:
B
1 3
3 0
Name: A, Length: 2, dtype: Int64
Warning
The Integer NA support currently uses the capitalized dtype version, e.g. Int8 as compared to the traditional int8. This may be changed at a future date.
See Nullable integer data type for more.
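The behavior described above (NA propagation through arithmetic, skipping of NA in reductions) can be sketched briefly, assuming a pandas version with the Int64 extension dtype:

```python
import numpy as np
import pandas as pd

# Construct a nullable-integer Series; np.nan becomes the NA marker
s = pd.Series([1, 2, np.nan], dtype="Int64")

# Missing values propagate through arithmetic...
assert (s + 1).isna().tolist() == [False, False, True]

# ...but are skipped by default in reductions
assert s.sum() == 3
```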
Accessing the values in a Series or Index#
Series.array and Index.array have been added for extracting the array backing a Series or Index. (GH19954, GH23623)
In [15]: idx = pd.period_range('2000', periods=4)
In [16]: idx.array
Out[16]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]
In [17]: pd.Series(idx).array
Out[17]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]
Historically, this would have been done with series.values, but with .values it was unclear whether the returned value would be the actual array, some transformation of it, or one of pandas' custom arrays (like Categorical). For example, with PeriodIndex, .values generates a new ndarray of period objects each time.
In [18]: idx.values
Out[18]:
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
In [19]: id(idx.values)
Out[19]: 140563567026064
In [20]: id(idx.values)
Out[20]: 140562147735760
If you need an actual NumPy array, use Series.to_numpy() or Index.to_numpy().
In [21]: idx.to_numpy()
Out[21]:
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
In [22]: pd.Series(idx).to_numpy()
Out[22]:
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
For Series and Indexes backed by normal NumPy arrays, Series.array will return a new arrays.PandasArray, which is a thin (no-copy) wrapper around a numpy.ndarray. PandasArray isn't especially useful on its own, but it does provide the same interface as any extension array defined in pandas or by a third-party library.
In [23]: ser = pd.Series([1, 2, 3])
In [24]: ser.array
Out[24]:
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64
In [25]: ser.to_numpy()
Out[25]: array([1, 2, 3])
We haven't removed or deprecated Series.values or DataFrame.values, but we highly recommend using .array or .to_numpy() instead.
See Dtypes and Attributes and Underlying Data for more.
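A minimal sketch of the distinction: .to_numpy() always hands back a real NumPy ndarray, while .array returns the (possibly wrapped) backing extension array:

```python
import numpy as np
import pandas as pd

ser = pd.Series([1, 2, 3])

# .to_numpy() always returns a numpy.ndarray
arr = ser.to_numpy()
assert isinstance(arr, np.ndarray)
assert arr.tolist() == [1, 2, 3]

# .array returns the backing array object, which exposes the
# extension-array interface but the same length and elements
backing = ser.array
assert len(backing) == 3
assert list(backing) == [1, 2, 3]
```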
pandas.array: a new top-level method for creating arrays#
A new top-level method array() has been added for creating 1-dimensional arrays (GH22860). This can be used to create any extension array, including extension arrays registered by 3rd party libraries. See the dtypes docs for more on extension arrays.
In [26]: pd.array([1, 2, np.nan], dtype='Int64')
Out[26]:
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
In [27]: pd.array(['a', 'b', 'c'], dtype='category')
Out[27]:
['a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
Passing data for which there isn't a dedicated extension type (e.g. float, integer, etc.) will return a new arrays.PandasArray, which is just a thin (no-copy) wrapper around a numpy.ndarray that satisfies the pandas extension array interface.
In [28]: pd.array([1, 2, 3])
Out[28]:
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64
On its own, a PandasArray isn't a very useful object. But if you need to write low-level code that works generically for any ExtensionArray, PandasArray satisfies that need.
Notice that by default, if no dtype is specified, the dtype of the returned array is inferred from the data. In particular, note that the first example of [1, 2, np.nan] would have returned a floating-point array, since NaN is a float.
In [29]: pd.array([1, 2, np.nan])
Out[29]:
<PandasArray>
[1.0, 2.0, nan]
Length: 3, dtype: float64
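To make the role of dtype inference explicit, a short sketch comparing an explicit dtype with the default, assuming a recent pandas:

```python
import numpy as np
import pandas as pd

# Explicit dtype: nullable integers, np.nan becomes NA
explicit = pd.array([1, 2, np.nan], dtype="Int64")
assert pd.isna(explicit[2])

# Wrapping in a Series gives access to the usual NA-skipping reductions
assert pd.Series(explicit).sum() == 3
```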
Storing Interval and Period data in Series and DataFrame#
Interval and Period data may now be stored in a Series or DataFrame, in addition to an IntervalIndex and PeriodIndex as before (GH19453, GH22862).
In [30]: ser = pd.Series(pd.interval_range(0, 5))
In [31]: ser
Out[31]:
0 (0, 1]
1 (1, 2]
2 (2, 3]
3 (3, 4]
4 (4, 5]
Length: 5, dtype: interval
In [32]: ser.dtype
Out[32]: interval[int64, right]
For periods:
In [33]: pser = pd.Series(pd.period_range("2000", freq="D", periods=5))
In [34]: pser
Out[34]:
0 2000-01-01
1 2000-01-02
2 2000-01-03
3 2000-01-04
4 2000-01-05
Length: 5, dtype: period[D]
In [35]: pser.dtype
Out[35]: period[D]
Previously, these would be cast to a NumPy array with object dtype. In general, this should result in better performance when storing an array of intervals or periods in a Series or column of a DataFrame.
Use Series.array to extract the underlying array of intervals or periods from the Series:
In [36]: ser.array
Out[36]:
<IntervalArray>
[(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]]
Length: 5, dtype: interval[int64, right]
In [37]: pser.array
Out[37]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04', '2000-01-05']
Length: 5, dtype: period[D]
These return an instance of arrays.IntervalArray or arrays.PeriodArray, the new extension arrays that back interval and period data.
Warning
For backwards compatibility, Series.values continues to return a NumPy array of objects for Interval and Period data. We recommend using Series.array when you need the array of data stored in the Series, and Series.to_numpy() when you know you need a NumPy array.
See Dtypes and Attributes and Underlying Data for more.
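A small sketch of interval storage in a Series: the dtype is preserved, and the elements of the backing IntervalArray behave like Interval scalars (e.g. they support containment checks):

```python
import pandas as pd

# A Series of intervals keeps its interval dtype...
ser = pd.Series(pd.interval_range(0, 3))
assert str(ser.dtype).startswith("interval")

# ...and elements of the backing array are real Interval objects,
# supporting scalar containment
assert 0.5 in ser.array[0]
assert 2.5 not in ser.array[0]
```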
Joining with two multi-indexes#
DataFrame.merge() and DataFrame.join() can now be used to join multi-indexed DataFrame instances on the overlapping index levels (GH6360).
See the Merge, join, and concatenate documentation section.
In [38]: index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
....: ('K1', 'X2')],
....: names=['key', 'X'])
....:
In [39]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
....: 'B': ['B0', 'B1', 'B2']}, index=index_left)
....:
In [40]: index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
....: ('K2', 'Y2'), ('K2', 'Y3')],
....: names=['key', 'Y'])
....:
In [41]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
....: 'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)
....:
In [42]: left.join(right)
Out[42]:
A B C D
key X Y
K0 X0 Y0 A0 B0 C0 D0
X1 Y0 A1 B1 C0 D0
K1 X2 Y1 A2 B2 C1 D1
[3 rows x 4 columns]
In earlier versions, this could be done as follows.
In [43]: pd.merge(left.reset_index(), right.reset_index(),
....: on=['key'], how='inner').set_index(['key', 'X', 'Y'])
....:
Out[43]:
A B C D
key X Y
K0 X0 Y0 A0 B0 C0 D0
X1 Y0 A1 B1 C0 D0
K1 X2 Y1 A2 B2 C1 D1
[3 rows x 4 columns]
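The join above can be verified programmatically; a small sketch with the same shape of data:

```python
import pandas as pd

index_left = pd.MultiIndex.from_tuples(
    [('K0', 'X0'), ('K0', 'X1'), ('K1', 'X2')], names=['key', 'X'])
left = pd.DataFrame({'A': ['A0', 'A1', 'A2']}, index=index_left)

index_right = pd.MultiIndex.from_tuples(
    [('K0', 'Y0'), ('K1', 'Y1')], names=['key', 'Y'])
right = pd.DataFrame({'C': ['C0', 'C1']}, index=index_right)

# join matches rows on the overlapping 'key' level and
# combines the remaining levels into the result index
joined = left.join(right)
assert list(joined.index.names) == ['key', 'X', 'Y']
assert len(joined) == 3
```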
Function read_html enhancements#
read_html() previously ignored colspan and rowspan attributes. Now it understands them, treating them as sequences of cells with the same value. (GH17054)
In [44]: result = pd.read_html("""
....: <table>
....: <thead>
....: <tr>
....: <th>A</th><th>B</th><th>C</th>
....: </tr>
....: </thead>
....: <tbody>
....: <tr>
....: <td colspan="2">1</td><td>2</td>
....: </tr>
....: </tbody>
....: </table>""")
....:
Previous behavior:
In [13]: result
Out [13]:
[ A B C
0 1 2 NaN]
New behavior:
In [45]: result
Out[45]:
[ A B C
0 1 1 2
[1 rows x 3 columns]]
New Styler.pipe() method#
The Styler class has gained a pipe() method. This provides a convenient way to apply users' predefined styling functions, and can help reduce “boilerplate” when using DataFrame styling functionality repeatedly within a notebook. (GH23229)
In [46]: df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})
In [47]: def format_and_align(styler):
....: return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
....: .set_properties(**{'text-align': 'right'}))
....:
In [48]: df.style.pipe(format_and_align).set_caption('Summary of results.')
Out[48]: <pandas.io.formats.style.Styler at 0x7fd702414070>
Similar methods already exist for other classes in pandas, including DataFrame.pipe(), GroupBy.pipe(), and Resampler.pipe().
Renaming names in a MultiIndex#
DataFrame.rename_axis() now supports index and columns arguments and Series.rename_axis() supports the index argument (GH19978). This change allows a dictionary to be passed so that some of the names of a MultiIndex can be changed.
Example:
In [49]: mi = pd.MultiIndex.from_product([list('AB'), list('CD'), list('EF')],
....: names=['AB', 'CD', 'EF'])
....:
In [50]: df = pd.DataFrame(list(range(len(mi))), index=mi, columns=['N'])
In [51]: df
Out[51]:
N
AB CD EF
A C E 0
F 1
D E 2
F 3
B C E 4
F 5
D E 6
F 7
[8 rows x 1 columns]
In [52]: df.rename_axis(index={'CD': 'New'})
Out[52]:
N
AB New EF
A C E 0
F 1
D E 2
F 3
B C E 4
F 5
D E 6
F 7
[8 rows x 1 columns]
See the Advanced documentation on renaming for more details.
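The dictionary form can be checked on a smaller index; a quick sketch:

```python
import pandas as pd

mi = pd.MultiIndex.from_product([['A', 'B'], ['C', 'D']],
                                names=['one', 'two'])
df = pd.DataFrame({'N': range(4)}, index=mi)

# Rename only the 'two' level; 'one' is left untouched
renamed = df.rename_axis(index={'two': 'new'})
assert list(renamed.index.names) == ['one', 'new']
```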
Other enhancements#
- merge() now directly allows merge between objects of type DataFrame and named Series, without the need to convert the Series object into a DataFrame beforehand (GH21220)
- ExcelWriter now accepts mode as a keyword argument, enabling append to existing workbooks when using the openpyxl engine (GH3441)
- FrozenList has gained the .union() and .difference() methods. This functionality greatly simplifies groupby's that rely on explicitly excluding certain columns. See Splitting an object into groups for more information (GH15475, GH15506).
- DataFrame.to_parquet() now accepts index as an argument, allowing the user to override the engine's default behavior to include or omit the dataframe's indexes from the resulting Parquet file. (GH20768)
- read_feather() now accepts columns as an argument, allowing the user to specify which columns should be read. (GH24025)
- DataFrame.corr() and Series.corr() now accept a callable for generic calculation methods of correlation, e.g. histogram intersection (GH22684)
- DataFrame.to_string() now accepts decimal as an argument, allowing the user to specify which decimal separator should be used in the output. (GH23614)
- DataFrame.to_html() now accepts render_links as an argument, allowing the user to generate HTML with links to any URLs that appear in the DataFrame. See the section on writing HTML in the IO docs for example usage. (GH2679)
- pandas.read_csv() now supports pandas extension types as an argument to dtype, allowing the user to use pandas extension types when reading CSVs. (GH23228)
- The shift() method now accepts fill_value as an argument, allowing the user to specify a value which will be used instead of NA/NaT in the empty periods. (GH15486)
- to_datetime() now supports the %Z and %z directives when passed into format (GH13486)
- Series.mode() and DataFrame.mode() now support the dropna parameter which can be used to specify whether NaN/NaT values should be considered (GH17534)
- DataFrame.to_csv() and Series.to_csv() now support the compression keyword when a file handle is passed. (GH21227)
- Index.droplevel() is now implemented also for flat indexes, for compatibility with MultiIndex (GH21115)
- Series.droplevel() and DataFrame.droplevel() are now implemented (GH20342)
- Added support for reading from/writing to Google Cloud Storage via the gcsfs library (GH19454, GH23094)
- DataFrame.to_gbq() and read_gbq() signature and documentation updated to reflect changes from the pandas-gbq library version 0.8.0. Adds a credentials argument, which enables the use of any kind of google-auth credentials. (GH21627, GH22557, GH23662)
- New method HDFStore.walk() will recursively walk the group hierarchy of an HDF5 file (GH10932)
- read_html() copies cell data across colspan and rowspan, and it treats all-th table rows as headers if the header kwarg is not given and there is no thead (GH17054)
- Series.nlargest(), Series.nsmallest(), DataFrame.nlargest(), and DataFrame.nsmallest() now accept the value "all" for the keep argument. This keeps all ties for the nth largest/smallest value (GH16818)
- IntervalIndex has gained the set_closed() method to change the existing closed value (GH21670)
- DataFrame.to_csv(), Series.to_csv(), DataFrame.to_json(), and Series.to_json() now support compression='infer' to infer compression based on filename extension (GH15008). The default compression for the to_csv, to_json, and to_pickle methods has been updated to 'infer' (GH22004).
- DataFrame.to_sql() now supports writing TIMESTAMP WITH TIME ZONE types for supported databases. For databases that don't support timezones, datetime data will be stored as timezone unaware local timestamps. See the Datetime data types for implications (GH9086).
- to_timedelta() now supports ISO-formatted timedelta strings (GH21877)
- Series and DataFrame now support Iterable objects in the constructor (GH2193)
- DatetimeIndex has gained the DatetimeIndex.timetz attribute. This returns the local time with timezone information. (GH21358)
- round(), ceil(), and floor() for DatetimeIndex and Timestamp now support an ambiguous argument for handling datetimes that are rounded to ambiguous times (GH18946) and a nonexistent argument for handling datetimes that are rounded to nonexistent times. See Nonexistent times when localizing (GH22647)
- The result of resample() is now iterable similar to groupby() (GH15314).
- Series.resample() and DataFrame.resample() have gained the pandas.core.resample.Resampler.quantile() (GH15023).
- DataFrame.resample() and Series.resample() with a PeriodIndex will now respect the base argument in the same fashion as with a DatetimeIndex. (GH23882)
- pandas.api.types.is_list_like() has gained a keyword allow_sets which is True by default; if False, all instances of set will not be considered "list-like" anymore (GH23061)
- Index.to_frame() now supports overriding column name(s) (GH22580).
- Categorical.from_codes() can now take a dtype parameter as an alternative to passing categories and ordered (GH24398).
- New attribute __git_version__ will return the git commit sha of the current build (GH21295).
- Compatibility with Matplotlib 3.0 (GH22790).
- Added Interval.overlaps(), arrays.IntervalArray.overlaps(), and IntervalIndex.overlaps() for determining overlaps between interval-like objects (GH21998)
- read_fwf() now accepts the keyword infer_nrows (GH15138).
- to_parquet() now supports writing a DataFrame as a directory of parquet files partitioned by a subset of the columns when engine = 'pyarrow' (GH23283)
- Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have gained the nonexistent argument for alternative handling of nonexistent times. See Nonexistent times when localizing (GH8917, GH24466)
- Index.difference(), Index.intersection(), Index.union(), and Index.symmetric_difference() now have an optional sort parameter to control whether the results should be sorted if possible (GH17839, GH24471)
- read_excel() now accepts usecols as a list of column names or a callable (GH18273)
- MultiIndex.to_flat_index() has been added to flatten multiple levels into a single-level Index object.
- DataFrame.to_stata() and pandas.io.stata.StataWriter117 can write mixed string columns to Stata strl format (GH23633)
- DataFrame.between_time() and DataFrame.at_time() have gained the axis parameter (GH8839)
- DataFrame.to_records() now accepts index_dtypes and column_dtypes parameters to allow different data types in stored column and index records (GH18146)
- IntervalIndex has gained the is_overlapping attribute to indicate if the IntervalIndex contains any overlapping intervals (GH23309)
- pandas.DataFrame.to_sql() has gained the method argument to control the SQL insertion clause. See the insertion method section in the documentation. (GH8953)
- DataFrame.corrwith() now supports Spearman's rank correlation and Kendall's tau, as well as callable correlation methods. (GH21925)
- DataFrame.to_json(), DataFrame.to_csv(), DataFrame.to_pickle(), and other export methods now support tilde (~) in the path argument. (GH23473)
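To pick one item from the list above, the new fill_value argument to shift() fills the introduced gap instead of leaving NA (which would also upcast the dtype); a quick sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3])

# Without fill_value the leading slot becomes NaN (and dtype floats)
assert pd.isna(s.shift(1).iloc[0])

# With fill_value the gap is filled and the integer dtype is kept
shifted = s.shift(1, fill_value=0)
assert shifted.tolist() == [0, 1, 2]
```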
Backwards incompatible API changes#
pandas 0.24.0 includes a number of API breaking changes.
Increased minimum versions for dependencies#
We have updated our minimum supported versions of dependencies (GH21242, GH18742, GH23774, GH24767). If installed, we now require:
Package | Minimum Version | Required
---|---|---
numpy | 1.12.0 | X
bottleneck | 1.2.0 |
fastparquet | 0.2.1 |
matplotlib | 2.0.0 |
numexpr | 2.6.1 |
pandas-gbq | 0.8.0 |
pyarrow | 0.9.0 |
pytables | 3.4.2 |
scipy | 0.18.1 |
xlrd | 1.0.0 |
pytest (dev) | 3.6 |
Additionally, we no longer depend on feather-format for feather-based storage and replaced it with references to pyarrow (GH21639 and GH23053).
os.linesep is used for line_terminator of DataFrame.to_csv#
DataFrame.to_csv() now uses os.linesep rather than '\n' for the default line terminator (GH20353). This change only affects Windows, where '\r\n' was used for the line terminator even when '\n' was passed in line_terminator.
Previous behavior on Windows:
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
...: "string_with_crlf": ["a\r\nbc"]})
In [2]: # When passing file PATH to to_csv,
...: # line_terminator does not work, and csv is saved with '\r\n'.
...: # Also, this converts all '\n's in the data to '\r\n'.
...: data.to_csv("test.csv", index=False, line_terminator='\n')
In [3]: with open("test.csv", mode='rb') as f:
...: print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'
In [4]: # When passing file OBJECT with newline option to
...: # to_csv, line_terminator works.
...: with open("test2.csv", mode='w', newline='\n') as f:
...: data.to_csv(f, index=False, line_terminator='\n')
In [5]: with open("test2.csv", mode='rb') as f:
...: print(f.read())
Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
New behavior on Windows:
Passing line_terminator explicitly sets the line terminator to that character.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
...: "string_with_crlf": ["a\r\nbc"]})
In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')
In [3]: with open("test.csv", mode='rb') as f:
...: print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
On Windows, the value of os.linesep is '\r\n', so if line_terminator is not set, '\r\n' is used for the line terminator.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
...: "string_with_crlf": ["a\r\nbc"]})
In [2]: data.to_csv("test.csv", index=False)
In [3]: with open("test.csv", mode='rb') as f:
...: print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
For file objects, specifying newline is not sufficient to set the line terminator. You must pass in line_terminator explicitly, even in this case.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
...: "string_with_crlf": ["a\r\nbc"]})
In [2]: with open("test2.csv", mode='w', newline='\n') as f:
...: data.to_csv(f, index=False)
In [3]: with open("test2.csv", mode='rb') as f:
...: print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
Proper handling of np.NaN in a string data-typed column with the Python engine#
There was a bug in read_excel() and read_csv() with the Python engine, where missing values turned into the string 'nan' with dtype=str and na_filter=True. Now, these missing values are converted to the missing indicator, np.nan. (GH20377)
Previous behavior:
In [5]: data = 'a,b,c\n1,,3\n4,5,6'
In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
In [7]: df.loc[0, 'b']
Out[7]:
'nan'
New behavior:
In [53]: data = 'a,b,c\n1,,3\n4,5,6'
In [54]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
In [55]: df.loc[0, 'b']
Out[55]: nan
Notice that we now output np.nan itself instead of a stringified form of it.
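The fixed behavior can be sketched directly (na_filter=True is the default here):

```python
import io
import pandas as pd

data = 'a,b,c\n1,,3\n4,5,6'
df = pd.read_csv(io.StringIO(data), engine='python', dtype=str)

# The missing cell is a real NA value, not the string 'nan'
assert df.loc[0, 'b'] != 'nan'
assert pd.isna(df.loc[0, 'b'])
```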
Parsing datetime strings with timezone offsets#
Previously, parsing datetime strings with UTC offsets with to_datetime() or DatetimeIndex would automatically convert the datetime to UTC without timezone localization. This was inconsistent with parsing the same datetime string with Timestamp, which would preserve the UTC offset in the tz attribute. Now, to_datetime() preserves the UTC offset in the tz attribute when all the datetime strings have the same UTC offset (GH17697, GH11736, GH22457).
Previous behavior:
In [2]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[2]: Timestamp('2015-11-18 10:00:00')
In [3]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[3]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')
# Different UTC offsets would automatically convert the datetimes to UTC (without a UTC timezone)
In [4]: pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
Out[4]: DatetimeIndex(['2015-11-18 10:00:00', '2015-11-18 10:00:00'], dtype='datetime64[ns]', freq=None)
New behavior:
In [56]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[56]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
In [57]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[57]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
Parsing datetime strings with the same UTC offset will preserve the UTC offset in the tz attribute:
In [58]: pd.to_datetime(["2015-11-18 15:30:00+05:30"] * 2)
Out[58]: DatetimeIndex(['2015-11-18 15:30:00+05:30', '2015-11-18 15:30:00+05:30'], dtype='datetime64[ns, UTC+05:30]', freq=None)
Parsing datetime strings with different UTC offsets will now create an Index of datetime.datetime objects with different UTC offsets:
In [59]: idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
....: "2015-11-18 16:30:00+06:30"])
....:
In [60]: idx
Out[60]: Index([2015-11-18 15:30:00+05:30, 2015-11-18 16:30:00+06:30], dtype='object')
In [61]: idx[0]
Out[61]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
In [62]: idx[1]
Out[62]: Timestamp('2015-11-18 16:30:00+0630', tz='UTC+06:30')
Passing utc=True will mimic the previous behavior but will correctly indicate that the dates have been converted to UTC:
In [63]: pd.to_datetime(["2015-11-18 15:30:00+05:30",
....: "2015-11-18 16:30:00+06:30"], utc=True)
....:
Out[63]: DatetimeIndex(['2015-11-18 10:00:00+00:00', '2015-11-18 10:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None)
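The two behaviors can be checked side by side; a short sketch assuming a recent pandas:

```python
import pandas as pd

# A single offset string keeps its offset in the tz attribute
ts = pd.to_datetime("2015-11-18 15:30:00+05:30")
assert ts.tz is not None

# utc=True converts everything to UTC and records that in the dtype
idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
                      "2015-11-18 16:30:00+06:30"], utc=True)
assert str(idx.dtype) == 'datetime64[ns, UTC]'
```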
Parsing mixed-timezones with read_csv()#
read_csv()
no longer silently converts mixed-timezone columns to UTC (GH24987).
Previous behavior
>>> import io
>>> content = """\
... a
... 2000-01-01T00:00:00+05:00
... 2000-01-01T00:00:00+06:00"""
>>> df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
>>> df.a
0 1999-12-31 19:00:00
1 1999-12-31 18:00:00
Name: a, dtype: datetime64[ns]
New behavior
In [64]: import io
In [65]: content = """\
....: a
....: 2000-01-01T00:00:00+05:00
....: 2000-01-01T00:00:00+06:00"""
....:
In [66]: df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
In [67]: df.a
Out[67]:
0 2000-01-01 00:00:00+05:00
1 2000-01-01 00:00:00+06:00
Name: a, Length: 2, dtype: object
As can be seen, the dtype is object; each value in the column is a string. To convert the strings to an array of datetimes, use the date_parser argument:
In [3]: df = pd.read_csv(
...: io.StringIO(content),
...: parse_dates=['a'],
...: date_parser=lambda col: pd.to_datetime(col, utc=True),
...: )
In [4]: df.a
Out[4]:
0 1999-12-31 19:00:00+00:00
1 1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
See Parsing datetime strings with timezone offsets for more.
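An equivalent approach, which does not rely on date_parser, is to parse the column after reading; a sketch:

```python
import io
import pandas as pd

content = "a\n2000-01-01T00:00:00+05:00\n2000-01-01T00:00:00+06:00\n"
df = pd.read_csv(io.StringIO(content))

# Convert the string column to UTC datetimes after the fact
converted = pd.to_datetime(df['a'], utc=True)
assert str(converted.dtype) == 'datetime64[ns, UTC]'
assert converted.iloc[0].hour == 19  # 00:00+05:00 is 19:00 UTC the day before
```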
Time values in dt.end_time and to_timestamp(how='end')#
The time values in Period and PeriodIndex objects are now set to '23:59:59.999999999' when calling Series.dt.end_time, Period.end_time, PeriodIndex.end_time, Period.to_timestamp() with how='end', or PeriodIndex.to_timestamp() with how='end' (GH17157).
Previous behavior:
In [2]: p = pd.Period('2017-01-01', 'D')
In [3]: pi = pd.PeriodIndex([p])
In [4]: pd.Series(pi).dt.end_time[0]
Out[4]: Timestamp('2017-01-01 00:00:00')
In [5]: p.end_time
Out[5]: Timestamp('2017-01-01 23:59:59.999999999')
New behavior:
Calling Series.dt.end_time will now result in a time of '23:59:59.999999999', as is the case with Period.end_time, for example:
In [68]: p = pd.Period('2017-01-01', 'D')
In [69]: pi = pd.PeriodIndex([p])
In [70]: pd.Series(pi).dt.end_time[0]
Out[70]: Timestamp('2017-01-01 23:59:59.999999999')
In [71]: p.end_time
Out[71]: Timestamp('2017-01-01 23:59:59.999999999')
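A quick sketch confirming that end_time now lands on the period's final nanosecond:

```python
import pandas as pd

p = pd.Period('2017-01-01', 'D')
end = p.end_time

# end_time is the very last instant of the period
assert (end.hour, end.minute, end.second) == (23, 59, 59)
assert end.nanosecond == 999
```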
Series.unique for timezone-aware data#
The return type of Series.unique() for datetime with timezone values has changed from a numpy.ndarray of Timestamp objects to an arrays.DatetimeArray (GH24024).
In [72]: ser = pd.Series([pd.Timestamp('2000', tz='UTC'),
....: pd.Timestamp('2000', tz='UTC')])
....:
Previous behavior:
In [3]: ser.unique()
Out[3]: array([Timestamp('2000-01-01 00:00:00+0000', tz='UTC')], dtype=object)
New behavior:
In [73]: ser.unique()
Out[73]:
<DatetimeArray>
['2000-01-01 00:00:00+00:00']
Length: 1, dtype: datetime64[ns, UTC]
Sparse data structure refactor#
SparseArray, the array backing SparseSeries and the columns in a SparseDataFrame, is now an extension array (GH21978, GH19056, GH22835). To conform to this interface and for consistency with the rest of pandas, some API breaking changes were made:
- SparseArray is no longer a subclass of numpy.ndarray. To convert a SparseArray to a NumPy array, use numpy.asarray().
- SparseArray.dtype and SparseSeries.dtype are now instances of SparseDtype, rather than np.dtype. Access the underlying dtype with SparseDtype.subtype.
- numpy.asarray(sparse_array) now returns a dense array with all the values, not just the non-fill-value values (GH14167)
- SparseArray.take now matches the API of pandas.api.extensions.ExtensionArray.take() (GH19506):
  - The default value of allow_fill has changed from False to True.
  - The out and mode parameters are no longer accepted (previously, this raised if they were specified).
  - Passing a scalar for indices is no longer allowed.
- The result of concat() with a mix of sparse and dense Series is a Series with sparse values, rather than a SparseSeries.
- SparseDataFrame.combine and DataFrame.combine_first no longer support combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
- Setting SparseArray.fill_value to a fill value with a different dtype is now allowed.
- DataFrame[column] is now a Series with sparse values, rather than a SparseSeries, when slicing a single column with sparse values (GH23559).
- The result of Series.where() is now a Series with sparse values, like with other extension arrays (GH24077)
Some new warnings are issued for operations that require or are likely to materialize a large dense array:
- An errors.PerformanceWarning is issued when using fillna with a method, as a dense array is constructed to create the filled array. Filling with a value is the efficient way to fill a sparse array.
- An errors.PerformanceWarning is now issued when concatenating sparse Series with differing fill values. The fill value from the first sparse array continues to be used.
In addition to these API breaking changes, many Performance Improvements and Bug Fixes have been made.
Finally, a Series.sparse accessor was added to provide sparse-specific methods like Series.sparse.from_coo().
In [74]: s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')
In [75]: s.sparse.density
Out[75]: 0.6
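The accessor exposes a few more sparse-specific attributes beyond density; a short sketch:

```python
import pandas as pd

s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')

# density = fraction of values that differ from the fill value (0 here)
assert s.sparse.density == 0.6
assert s.sparse.fill_value == 0
```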
get_dummies() always returns a DataFrame#
Previously, when sparse=True was passed to get_dummies(), the return value could be either a DataFrame or a SparseDataFrame, depending on whether all or just a subset of the columns were dummy-encoded. Now, a DataFrame is always returned (GH24284).
Previous behavior
The first get_dummies() returns a DataFrame because the column A is not dummy-encoded. When just ["B", "C"] are passed to get_dummies, then all the columns are dummy-encoded, and a SparseDataFrame was returned.
In [2]: df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})
In [3]: type(pd.get_dummies(df, sparse=True))
Out[3]: pandas.core.frame.DataFrame
In [4]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[4]: pandas.core.sparse.frame.SparseDataFrame
New behavior
Now, the return type is consistently a DataFrame.
In [76]: type(pd.get_dummies(df, sparse=True))
Out[76]: pandas.core.frame.DataFrame
In [77]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[77]: pandas.core.frame.DataFrame
Note
There's no difference in memory usage between a SparseDataFrame and a DataFrame with sparse values. The memory usage will be the same as in the previous version of pandas.
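The consistent return type is easy to verify; a quick sketch:

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})

# Both calls now return a plain DataFrame, whether or not
# every column gets dummy-encoded
assert isinstance(pd.get_dummies(df, sparse=True), pd.DataFrame)
assert isinstance(pd.get_dummies(df[['B', 'C']], sparse=True), pd.DataFrame)
```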
Raise ValueError in DataFrame.to_dict(orient='index')#
DataFrame.to_dict() now raises a ValueError when used with orient='index' and a non-unique index, instead of losing data (GH22801).
In [78]: df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])
In [79]: df
Out[79]:
a b
A 1 0.50
A 2 0.75
[2 rows x 2 columns]
In [80]: df.to_dict(orient='index')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
...
ValueError: DataFrame index must be unique for orient='index'.
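A sketch of how code can guard against this error:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2]}, index=['A', 'A'])

# A non-unique index with orient='index' now raises instead of
# silently dropping rows
try:
    df.to_dict(orient='index')
    raised = False
except ValueError:
    raised = True
assert raised
```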
Tick DateOffset normalize restrictions#
Creating a Tick object (Day, Hour, Minute, Second, Milli, Micro, Nano) with normalize=True is no longer supported. This prevents unexpected behavior where addition could fail to be monotone or associative. (GH21427)
Previous behavior:
In [2]: ts = pd.Timestamp('2018-06-11 18:01:14')
In [3]: ts
Out[3]: Timestamp('2018-06-11 18:01:14')
In [4]: tic = pd.offsets.Hour(n=2, normalize=True)
...:
In [5]: tic
Out[5]: <2 * Hours>
In [6]: ts + tic
Out[6]: Timestamp('2018-06-11 00:00:00')
In [7]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[7]: False
New behavior:
In [81]: ts = pd.Timestamp('2018-06-11 18:01:14')
In [82]: tic = pd.offsets.Hour(n=2)
In [83]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[83]: True
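For code that relied on normalize=True, the equivalent result can be obtained by normalizing explicitly after the addition (a sketch of the workaround, not taken from the release notes), which keeps the addition itself associative:

```python
import pandas as pd

ts = pd.Timestamp('2018-06-11 18:01:14')
tic = pd.offsets.Hour(n=2)

# Add the offset first, then snap to midnight as a separate, explicit step.
result = (ts + tic).normalize()
```

Because the two steps are decoupled, (ts + tic + tic).normalize() and similar expressions behave predictably.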
Period subtraction#
Subtraction of a Period from another Period will give a DateOffset instead of an integer (GH21314)
Previous behavior:
In [2]: june = pd.Period('June 2018')
In [3]: april = pd.Period('April 2018')
In [4]: june - april
Out[4]: 2
New behavior:
In [84]: june = pd.Period('June 2018')
In [85]: april = pd.Period('April 2018')
In [86]: june - april
Out[86]: <2 * MonthEnds>
Similarly, subtraction of a Period from a PeriodIndex will now return an Index of DateOffset objects instead of an Int64Index.
Previous behavior:
In [2]: pi = pd.period_range('June 2018', freq='M', periods=3)
In [3]: pi - pi[0]
Out[3]: Int64Index([0, 1, 2], dtype='int64')
New behavior:
In [87]: pi = pd.period_range('June 2018', freq='M', periods=3)
In [88]: pi - pi[0]
Out[88]: Index([<0 * MonthEnds>, <MonthEnd>, <2 * MonthEnds>], dtype='object')
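Code that depended on the old integer result can recover it from the offset's n attribute (a sketch, not part of the release notes):

```python
import pandas as pd

june = pd.Period('June 2018')
april = pd.Period('April 2018')

diff = june - april   # now a DateOffset (<2 * MonthEnds>), not an integer
n_months = diff.n     # the old integer result lives on the offset's .n
```

This works per-element on the object-dtype Index returned by PeriodIndex subtraction as well.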
Addition/subtraction of NaN from DataFrame#
Adding or subtracting NaN from a DataFrame column with timedelta64[ns] dtype will now raise a TypeError instead of returning all-NaT. This is for compatibility with TimedeltaIndex and Series behavior (GH22163)
In [89]: df = pd.DataFrame([pd.Timedelta(days=1)])
In [90]: df
Out[90]:
0
0 1 days
[1 rows x 1 columns]
Previous behavior:
In [4]: df = pd.DataFrame([pd.Timedelta(days=1)])
In [5]: df - np.nan
Out[5]:
0
0 NaT
New behavior:
In [2]: df - np.nan
...
TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'
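To introduce missing values into timedelta data deliberately, use pd.NaT, the native missing-value marker for datetime-like dtypes, rather than np.nan (a suggested workaround, not from the release notes):

```python
import pandas as pd

s = pd.Series([pd.Timedelta(days=1)])

# pd.NaT propagates through timedelta arithmetic the way np.nan used to,
# without tripping the new TypeError.
result = s - pd.NaT
```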
DataFrame comparison operations broadcasting changes#
Previously, the broadcasting behavior of DataFrame comparison operations (==, !=, …) was inconsistent with the behavior of arithmetic operations (+, -, …). The behavior of the comparison operations has been changed to match the arithmetic operations in these cases. (GH22880)
The affected cases are:
- Operating against a 2-dimensional np.ndarray with either 1 row or 1 column will now broadcast the same way a np.ndarray would (GH23000).
- A list or tuple with length matching the number of rows in the DataFrame will now raise ValueError instead of operating column-by-column (GH22880).
- A list or tuple with length matching the number of columns in the DataFrame will now operate row-by-row instead of raising ValueError (GH22880).
In [91]: arr = np.arange(6).reshape(3, 2)
In [92]: df = pd.DataFrame(arr)
In [93]: df
Out[93]:
0 1
0 0 1
1 2 3
2 4 5
[3 rows x 2 columns]
Previous behavior:
In [5]: df == arr[[0], :]
...: # comparison previously broadcast where arithmetic would raise
Out[5]:
0 1
0 True True
1 False False
2 False False
In [6]: df + arr[[0], :]
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
In [7]: df == (1, 2)
...: # length matches number of columns;
...: # comparison previously raised where arithmetic would broadcast
...
ValueError: Invalid broadcasting comparison [(1, 2)] with block values
In [8]: df + (1, 2)
Out[8]:
0 1
0 1 3
1 3 5
2 5 7
In [9]: df == (1, 2, 3)
...: # length matches number of rows
...: # comparison previously broadcast where arithmetic would raise
Out[9]:
0 1
0 False True
1 True False
2 False False
In [10]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3
New behavior:
# Comparison operations and arithmetic operations both broadcast.
In [94]: df == arr[[0], :]
Out[94]:
0 1
0 True True
1 False False
2 False False
[3 rows x 2 columns]
In [95]: df + arr[[0], :]
Out[95]:
0 1
0 0 2
1 2 4
2 4 6
[3 rows x 2 columns]
# Comparison operations and arithmetic operations both broadcast.
In [96]: df == (1, 2)
Out[96]:
0 1
0 False False
1 False False
2 False False
[3 rows x 2 columns]
In [97]: df + (1, 2)
Out[97]:
0 1
0 1 3
1 3 5
2 5 7
[3 rows x 2 columns]
# Comparison operations and arithmetic operations both raise ValueError.
In [6]: df == (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3
In [7]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3
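Code that relied on the old column-by-column comparison against a row-length sequence can keep that behavior by being explicit about the axis (a sketch of the workaround, not from the release notes):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2))

# Wrap the length-3 sequence in a Series and compare along axis=0 to
# reproduce the pre-0.24 column-by-column comparison explicitly.
result = df.eq(pd.Series([1, 2, 3]), axis=0)
print(result)
```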
DataFrame arithmetic operations broadcasting changes#
DataFrame arithmetic operations, when operating with 2-dimensional np.ndarray objects, now broadcast in the same way as np.ndarray broadcasts. (GH23000)
In [98]: arr = np.arange(6).reshape(3, 2)
In [99]: df = pd.DataFrame(arr)
In [100]: df
Out[100]:
0 1
0 0 1
1 2 3
2 4 5
[3 rows x 2 columns]
Previous behavior:
In [5]: df + arr[[0], :] # 1 row, 2 columns
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
In [6]: df + arr[:, [1]] # 1 column, 3 rows
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (3, 1)
New behavior:
In [101]: df + arr[[0], :] # 1 row, 2 columns
Out[101]:
0 1
0 0 2
1 2 4
2 4 6
[3 rows x 2 columns]
In [102]: df + arr[:, [1]] # 1 column, 3 rows
Out[102]:
0 1
0 1 2
1 5 6
2 9 10
[3 rows x 2 columns]
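A quick sanity check (not part of the release notes) confirms that the DataFrame result now matches what NumPy's own broadcasting produces:

```python
import numpy as np
import pandas as pd

arr = np.arange(6).reshape(3, 2)
df = pd.DataFrame(arr)

# DataFrame arithmetic and raw NumPy broadcasting now agree element-wise.
frame_result = (df + arr[:, [1]]).to_numpy()
numpy_result = arr + arr[:, [1]]
```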
Series and Index data-dtype incompatibilities#
Series and Index constructors now raise when the data is incompatible with a passed dtype= (GH15832)
Previous behavior:
In [4]: pd.Series([-1], dtype="uint64")
Out[4]:
0    18446744073709551615
dtype: uint64
New behavior:
In [4]: pd.Series([-1], dtype="uint64")
Out[4]:
...
OverflowError: Trying to coerce negative values to unsigned integers
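If the old wrap-around value was actually intended, the cast can be performed explicitly in NumPy, where casting -1 to uint64 wraps to 2**64 - 1 (a sketch of a workaround, not from the release notes):

```python
import numpy as np
import pandas as pd

# Make the lossy cast explicit on the NumPy side before handing the
# already-unsigned data to the Series constructor.
s = pd.Series(np.array([-1], dtype=np.int64).astype(np.uint64))
```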
Concatenation changes#
Calling pandas.concat() on a Categorical of ints with NA values now causes them to be processed as objects when concatenating with anything other than another Categorical of ints (GH19214)
In [103]: s = pd.Series([0, 1, np.nan])
In [104]: c = pd.Series([0, 1, np.nan], dtype="category")
Previous behavior
In [3]: pd.concat([s, c])
Out[3]:
0 0.0
1 1.0
2 NaN
0 0.0
1 1.0
2 NaN
dtype: float64
New behavior
In [105]: pd.concat([s, c])
Out[105]:
0 0.0
1 1.0
2 NaN
0 0.0
1 1.0
2 NaN
Length: 6, dtype: float64
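Conversely, concatenating two int-like categoricals with matching categories still preserves the categorical dtype, as this small check (not part of the release notes) illustrates:

```python
import numpy as np
import pandas as pd

c1 = pd.Series([0, 1, np.nan], dtype="category")
c2 = pd.Series([0, 1, np.nan], dtype="category")

# Both operands are Categoricals of ints with the same categories,
# so the result keeps the category dtype instead of falling back to object.
res = pd.concat([c1, c2])
```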
Datetimelike API changes#
- For DatetimeIndex and TimedeltaIndex with non-None freq attribute, addition or subtraction of integer-dtyped array or Index will return an object of the same class (GH19959)
- DateOffset objects are now immutable. Attempting to alter one of these will now raise AttributeError (GH21341)
- PeriodIndex subtraction of another PeriodIndex will now return an object-dtype Index of DateOffset objects instead of raising a TypeError (GH20049)
- cut() and qcut() now return a DatetimeIndex or TimedeltaIndex bins when the input is datetime or timedelta dtype respectively and retbins=True (GH19891)
- DatetimeIndex.to_period() and Timestamp.to_period() will issue a warning when timezone information will be lost (GH21333)
- PeriodIndex.tz_convert() and PeriodIndex.tz_localize() have been removed (GH21781)
Other API changes#
- A newly constructed empty DataFrame with integer as the dtype will now only be cast to float64 if index is specified (GH22858)
- Series.str.cat() will now raise if others is a set (GH23009)
- Passing scalar values to DatetimeIndex or TimedeltaIndex will now raise TypeError instead of ValueError (GH23539)
- max_rows and max_cols parameters removed from HTMLFormatter since truncation is handled by DataFrameFormatter (GH23818)
- read_csv() will now raise a ValueError if a column with missing values is declared as having dtype bool (GH20591)
- The column order of the resultant DataFrame from MultiIndex.to_frame() is now guaranteed to match the MultiIndex.names order. (GH22420)
- Incorrectly passing a DatetimeIndex to MultiIndex.from_tuples(), rather than a sequence of tuples, now raises a TypeError rather than a ValueError (GH24024)
- pd.offsets.generate_range() argument time_rule has been removed; use offset instead (GH24157)
- In 0.23.x, pandas would raise a ValueError on a merge of a numeric column (e.g. int dtyped column) and an object dtyped column (GH9780). We have re-enabled the ability to merge object and other dtypes; pandas will still raise on a merge between a numeric and an object dtyped column that is composed only of strings (GH21681)
- Accessing a level of a MultiIndex with a duplicate name (e.g. in get_level_values()) now raises a ValueError instead of a KeyError (GH21678).
- Invalid construction of IntervalDtype will now always raise a TypeError rather than a ValueError if the subdtype is invalid (GH21185)
- Trying to reindex a DataFrame with a non-unique MultiIndex now raises a ValueError instead of an Exception (GH21770)
- Index subtraction will attempt to operate element-wise instead of raising TypeError (GH19369)
- pandas.io.formats.style.Styler supports a number-format property when using to_excel() (GH22015)
- DataFrame.corr() and Series.corr() now raise a ValueError along with a helpful error message instead of a KeyError when supplied with an invalid method (GH22298)
- shift() will now always return a copy, instead of the previous behaviour of returning self when shifting by 0 (GH22397)
- DataFrame.set_index() now gives a better (and less frequent) KeyError, raises a ValueError for incorrect types, and will not fail on duplicate column names with drop=True. (GH22484)
- Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH22784)
- DateOffset attribute _cacheable and method _should_cache have been removed (GH23118)
- Series.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH23801).
- Categorical.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH23466).
- Categorical.searchsorted() now raises a KeyError rather than a ValueError if a searched-for key is not found in its categories (GH23466).
- Index.hasnans() and Series.hasnans() now always return a python boolean. Previously, a python or a numpy boolean could be returned, depending on circumstances (GH23294).
- The order of the arguments of DataFrame.to_html() and DataFrame.to_string() is rearranged to be consistent with each other. (GH23614)
- CategoricalIndex.reindex() now raises a ValueError if the target index is non-unique and not equal to the current index. It previously only raised if the target index was not of a categorical dtype (GH23963).
- Series.to_list() and Index.to_list() are now aliases of Series.tolist and Index.tolist respectively (GH8826)
- The result of SparseSeries.unstack is now a DataFrame with sparse values, rather than a SparseDataFrame (GH24372).
- DatetimeIndex and TimedeltaIndex no longer ignore the dtype precision. Passing a non-nanosecond resolution dtype will raise a ValueError (GH24753)
Extension type changes#
Equality and hashability
pandas now requires that extension dtypes be hashable (i.e. the respective ExtensionDtype objects; hashability is not a requirement for the values of the corresponding ExtensionArray). The base class implements a default __eq__ and __hash__. If you have a parametrized dtype, you should update the ExtensionDtype._metadata tuple to match the signature of your __init__ method. See pandas.api.extensions.ExtensionDtype for more (GH22476).
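A minimal sketch of a parametrized dtype (UnitDtype is a hypothetical name, not a pandas type) shows how _metadata feeds the default __eq__ and __hash__:

```python
from pandas.api.extensions import ExtensionDtype


class UnitDtype(ExtensionDtype):
    # _metadata must name the __init__ parameters so the inherited
    # __eq__/__hash__ can compare and hash instances correctly.
    _metadata = ('unit',)
    type = float

    def __init__(self, unit='m'):
        self.unit = unit

    @property
    def name(self):
        return f'unit[{self.unit}]'
```

With _metadata in place, two instances constructed with the same parameters compare equal and hash identically, which is what pandas now requires of extension dtypes.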
New and changed methods
- dropna() has been added (GH21185)
- repeat() has been added (GH24349)
- The ExtensionArray constructor, _from_sequence, now takes the keyword argument copy=False (GH21185)
- pandas.api.extensions.ExtensionArray.shift() added as part of the basic ExtensionArray interface (GH22387).
- searchsorted() has been added (GH24350)
- Support for reduction operations such as sum, mean via opt-in base class method override (GH22762)
- ExtensionArray.isna() is allowed to return an ExtensionArray (GH22325).
Dtype changes
- ExtensionDtype has gained the ability to instantiate from string dtypes, e.g. decimal would instantiate a registered DecimalDtype; furthermore the ExtensionDtype has gained the method construct_array_type (GH21185)
- Added ExtensionDtype._is_numeric for controlling whether an extension dtype is considered numeric (GH22290).
- Added pandas.api.types.register_extension_dtype() to register an extension type with pandas (GH22664)
- Updated the .type attribute for PeriodDtype, DatetimeTZDtype, and IntervalDtype to be instances of the dtype (Period, Timestamp, and Interval respectively) (GH22938)
Operator support
A Series based on an ExtensionArray now supports arithmetic and comparison operators (GH19577). There are two approaches for providing operator support for an ExtensionArray:
- Define each of the operators on your ExtensionArray subclass.
- Use an operator implementation from pandas that depends on operators that are already defined on the underlying elements (scalars) of the ExtensionArray.
See the ExtensionArray Operator Support documentation section for details on both ways of adding operator support.
Other changes
- A default repr for pandas.api.extensions.ExtensionArray is now provided (GH23601).
- ExtensionArray._formatting_values() is deprecated. Use ExtensionArray._formatter instead. (GH23601)
- An ExtensionArray with a boolean dtype now works correctly as a boolean indexer. pandas.api.types.is_bool_dtype() now properly considers them boolean (GH22326)
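The boolean-indexer contract can be illustrated with pandas' own nullable boolean extension array (added in a later release, but it exercises the same is_bool_dtype path described here):

```python
import pandas as pd

s = pd.Series([1, 2, 3])

# pd.array of booleans produces a boolean-dtype extension array, which
# is_bool_dtype recognizes and which can be used directly as a mask.
mask = pd.array([True, False, True])
assert pd.api.types.is_bool_dtype(mask)

result = s[mask]
```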
Bug fixes
- Bug in Series.get() for Series using ExtensionArray and integer index (GH21257)
- Series.combine() works correctly with ExtensionArray inside of Series (GH20825)
- Series.combine() with scalar argument now works for any function type (GH21248)
- Series.astype() and DataFrame.astype() now dispatch to ExtensionArray.astype() (GH21185).
- Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH22784)
- Bug when concatenating multiple Series with different extension dtypes not casting to object dtype (GH22994)
- Series backed by an ExtensionArray now work with util.hash_pandas_object() (GH23066)
- DataFrame.stack() no longer converts to object dtype for DataFrames where each column has the same extension dtype. The output Series will have the same dtype as the columns (GH23077).
- Series.unstack() and DataFrame.unstack() no longer convert extension arrays to object-dtype ndarrays. Each column in the output DataFrame will now have the same dtype as the input (GH23077).
- Bug when grouping Dataframe.groupby() and aggregating on ExtensionArray it was not returning the actual ExtensionArray dtype (GH23227).
- Bug in pandas.merge() when merging on an extension array-backed column (GH23020).
Deprecations#
- MultiIndex.labels has been deprecated and replaced by MultiIndex.codes. The functionality is unchanged. The new name better reflects the nature of these codes and makes the MultiIndex API more similar to the API for CategoricalIndex (GH13443). As a consequence, other uses of the name labels in MultiIndex have also been deprecated and replaced with codes:
  - You should initialize a MultiIndex instance using a parameter named codes rather than labels.
  - MultiIndex.set_labels has been deprecated in favor of MultiIndex.set_codes().
  - For method MultiIndex.copy(), the labels parameter has been deprecated and replaced by a codes parameter.
- DataFrame.to_stata(), read_stata(), StataReader and StataWriter have deprecated the encoding argument. The encoding of a Stata dta file is determined by the file type and cannot be changed (GH21244)
- MultiIndex.to_hierarchical() is deprecated and will be removed in a future version (GH21613)
- Series.ptp() is deprecated. Use numpy.ptp instead (GH21614)
- Series.compress() is deprecated. Use Series[condition] instead (GH18262)
- The signature of Series.to_csv() has been uniformed to that of DataFrame.to_csv(): the name of the first argument is now path_or_buf, the order of subsequent arguments has changed, and the header argument now defaults to True. (GH19715)
- Categorical.from_codes() has deprecated providing float values for the codes argument. (GH21767)
- pandas.read_table() is deprecated. Instead, use read_csv() passing sep='\t' if necessary. This deprecation has been removed in 0.25.0. (GH21948)
- Series.str.cat() has deprecated using arbitrary list-likes within list-likes. A list-like container may still contain many Series, Index or 1-dimensional np.ndarray, or alternatively, only scalar values. (GH21950)
- FrozenNDArray.searchsorted() has deprecated the v parameter in favor of value (GH14645)
- DatetimeIndex.shift() and PeriodIndex.shift() now accept the periods argument instead of n for consistency with Index.shift() and Series.shift(). Using n throws a deprecation warning (GH22458, GH22912)
- The fastpath keyword of the different Index constructors is deprecated (GH23110).
- Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have deprecated the errors argument in favor of the nonexistent argument (GH8917)
- The class FrozenNDArray has been deprecated. When unpickling, FrozenNDArray will be unpickled to np.ndarray once this class is removed (GH9031)
- The methods DataFrame.update() and Panel.update() have deprecated the raise_conflict=False|True keyword in favor of errors='ignore'|'raise' (GH23585)
- The methods Series.str.partition() and Series.str.rpartition() have deprecated the pat keyword in favor of sep (GH22676)
- Deprecated the nthreads keyword of pandas.read_feather() in favor of use_threads to reflect the changes in pyarrow>=0.11.0. (GH23053)
- pandas.read_excel() has deprecated accepting usecols as an integer. Please pass in a list of ints from 0 to usecols inclusive instead (GH23527)
- Constructing a TimedeltaIndex from data with datetime64-dtyped data is deprecated, and will raise TypeError in a future version (GH23539)
- Constructing a DatetimeIndex from data with timedelta64-dtyped data is deprecated, and will raise TypeError in a future version (GH23675)
- The keep_tz=False option (the default) of the keep_tz keyword of DatetimeIndex.to_series() is deprecated (GH17832).
- Timezone converting a tz-aware datetime.datetime or Timestamp with Timestamp and the tz argument is now deprecated. Instead, use Timestamp.tz_convert() (GH23579)
- pandas.api.types.is_period() is deprecated in favor of pandas.api.types.is_period_dtype (GH23917)
- pandas.api.types.is_datetimetz() is deprecated in favor of pandas.api.types.is_datetime64tz (GH23917)
- Creating a TimedeltaIndex, DatetimeIndex, or PeriodIndex by passing range arguments start, end, and periods is deprecated in favor of timedelta_range(), date_range(), or period_range() (GH23919)
- Passing a string alias like 'datetime64[ns, UTC]' as the unit parameter to DatetimeTZDtype is deprecated. Use DatetimeTZDtype.construct_from_string instead (GH23990).
- The skipna parameter of infer_dtype() will switch to True by default in a future version of pandas (GH17066, GH24050)
- In Series.where() with Categorical data, providing an other that is not present in the categories is deprecated. Convert the categorical to a different dtype or add the other to the categories first (GH24077).
- Series.clip_lower(), Series.clip_upper(), DataFrame.clip_lower() and DataFrame.clip_upper() are deprecated and will be removed in a future version. Use Series.clip(lower=threshold), Series.clip(upper=threshold) and the equivalent DataFrame methods (GH24203)
- Series.nonzero() is deprecated and will be removed in a future version (GH18262)
- Passing an integer to Series.fillna() and DataFrame.fillna() with timedelta64[ns] dtypes is deprecated, and will raise TypeError in a future version. Use obj.fillna(pd.Timedelta(...)) instead (GH24694)
- Series.cat.categorical, Series.cat.name and Series.cat.index have been deprecated. Use the attributes on Series.cat or Series directly. (GH24751).
- Passing a dtype without a precision like np.dtype('datetime64') or timedelta64 to Index, DatetimeIndex and TimedeltaIndex is now deprecated. Use the nanosecond-precision dtype instead (GH24753).
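As an example of migrating off one of the deprecations above, the clip_lower/clip_upper pair maps directly onto Series.clip with keyword arguments (a sketch of the migration, not from the release notes):

```python
import pandas as pd

s = pd.Series([-2, -1, 0, 1, 2])

# clip(lower=...) replaces clip_lower; clip(upper=...) replaces clip_upper.
# Both bounds can also be applied in a single call.
floored = s.clip(lower=0)
bounded = s.clip(lower=-1, upper=1)
```

The same keywords work on the DataFrame method.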
Integer addition/subtraction with datetimes and timedeltas is deprecated#
In the past, users could, in some cases, add or subtract integers or integer-dtype arrays from Timestamp, DatetimeIndex and TimedeltaIndex.
This usage is now deprecated. Instead add or subtract integer multiples of the object's freq attribute (GH21939, GH23878).
Previous behavior:
In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
In [6]: ts + 2
Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')
In [7]: tdi = pd.timedelta_range('1D', periods=2)
In [8]: tdi - np.array([2, 1])
Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)
In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')
In [10]: dti + pd.Index([1, 2])
Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)
New behavior:
In [106]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[106], line 1
----> 1 ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
File ~/work/pandas/pandas/pandas/_libs/tslibs/timestamps.pyx:1523, in pandas._libs.tslibs.timestamps.Timestamp.__new__()
TypeError: __new__() got an unexpected keyword argument 'freq'
In [107]: ts + 2 * ts.freq
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[107], line 1
----> 1 ts + 2 * ts.freq
AttributeError: 'Timestamp' object has no attribute 'freq'
In [108]: tdi = pd.timedelta_range('1D', periods=2)
In [109]: tdi - np.array([2 * tdi.freq, 1 * tdi.freq])
Out[109]: Index([-1 days +00:00:00, 1 days 00:00:00], dtype='object')
In [110]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')
In [111]: dti + pd.Index([1 * dti.freq, 2 * dti.freq])
Out[111]: Index([2001-01-08 00:00:00, 2001-01-22 00:00:00], dtype='object')
Passing integer data and a timezone to DatetimeIndex#
The behavior of DatetimeIndex when passed integer data and a timezone is changing in a future version of pandas. Previously, these were interpreted as wall times in the desired timezone. In the future, these will be interpreted as wall times in UTC, which are then converted to the desired timezone (GH24559).
The default behavior remains the same, but issues a warning:
In [3]: pd.DatetimeIndex([946684800000000000], tz="US/Central")
/bin/ipython:1: FutureWarning:
Passing integer-dtype data and a timezone to DatetimeIndex. Integer values
will be interpreted differently in a future version of pandas. Previously,
these were viewed as datetime64[ns] values representing the wall time
*in the specified timezone*. In the future, these will be viewed as
datetime64[ns] values representing the wall time *in UTC*. This is similar
to a nanosecond-precision UNIX epoch. To accept the future behavior, use
pd.to_datetime(integer_data, utc=True).tz_convert(tz)
To keep the previous behavior, use
pd.to_datetime(integer_data).tz_localize(tz)
Out[3]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
As the warning message explains, opt in to the future behavior by specifying that the integer values are UTC, and then converting to the final timezone:
In [112]: pd.to_datetime([946684800000000000], utc=True).tz_convert('US/Central')
Out[112]: DatetimeIndex(['1999-12-31 18:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
The old behavior can be retained by localizing directly to the final timezone:
In [113]: pd.to_datetime([946684800000000000]).tz_localize('US/Central')
Out[113]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
Converting timezone-aware Series and Index to NumPy arrays#
The conversion from a Series or Index with timezone-aware datetime data will change to preserve timezones by default (GH23569).
NumPy doesn't have a dedicated dtype for timezone-aware datetimes. In the past, converting a Series or DatetimeIndex with timezone-aware datetimes would convert to a NumPy array by
1. converting the tz-aware data to UTC
2. dropping the timezone-info
3. returning a numpy.ndarray with datetime64[ns] dtype
Future versions of pandas will preserve the timezone information by returning an object-dtype NumPy array where each value is a Timestamp with the correct timezone attached.
In [114]: ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
In [115]: ser
Out[115]:
0 2000-01-01 00:00:00+01:00
1 2000-01-02 00:00:00+01:00
Length: 2, dtype: datetime64[ns, CET]
The default behavior remains the same, but issues a warning
In [8]: np.asarray(ser)
/bin/ipython:1: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive
ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray
with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'.
To accept the future behavior, pass 'dtype=object'.
To keep the old behavior, pass 'dtype="datetime64[ns]"'.
Out[8]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
dtype='datetime64[ns]')
The previous or future behavior can be obtained, without any warnings, by specifying the dtype:
Previous behavior
In [116]: np.asarray(ser, dtype='datetime64[ns]')
Out[116]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
dtype='datetime64[ns]')
Future behavior
# New behavior
In [117]: np.asarray(ser, dtype=object)
Out[117]:
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object)
Or by using Series.to_numpy()
In [118]: ser.to_numpy()
Out[118]:
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object)
In [119]: ser.to_numpy(dtype="datetime64[ns]")
Out[119]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
dtype='datetime64[ns]')
All the above applies to a DatetimeIndex with tz-aware values as well.
Removal of prior version deprecations/changes#
- The LongPanel and WidePanel classes have been removed (GH10892)
- Series.repeat() has renamed the reps argument to repeats (GH14645)
- Several private functions were removed from the (non-public) module pandas.core.common (GH22001)
- Removal of the previously deprecated module pandas.core.datetools (GH14105, GH14094)
- Strings passed into DataFrame.groupby() that refer to both column and index levels will raise a ValueError (GH14432)
- Index.repeat() and MultiIndex.repeat() have renamed the n argument to repeats (GH14645)
- The Series constructor and .astype method will now raise a ValueError if timestamp dtypes are passed in without a unit (e.g. np.datetime64) for the dtype parameter (GH15987)
- Removal of the previously deprecated as_indexer keyword completely from str.match() (GH22356, GH6581)
- The modules pandas.types, pandas.computation, and pandas.util.decorators have been removed (GH16157, GH16250)
- Removed the pandas.formats.style shim for pandas.io.formats.style.Styler (GH16059)
- pandas.pnow, pandas.match, pandas.groupby, pd.get_store, pd.Expr, and pd.Term have been removed (GH15538, GH15940)
- Categorical.searchsorted() and Series.searchsorted() have renamed the v argument to value (GH14645)
- pandas.parser, pandas.lib, and pandas.tslib have been removed (GH15537)
- Index.searchsorted() has renamed the key argument to value (GH14645)
- DataFrame.consolidate and Series.consolidate have been removed (GH15501)
- Removal of the previously deprecated module pandas.json (GH19944)
- SparseArray.get_values() and SparseArray.to_dense() have dropped the fill parameter (GH14686)
- DataFrame.sortlevel and Series.sortlevel have been removed (GH15099)
- SparseSeries.to_dense() has dropped the sparse_only parameter (GH14686)
- DataFrame.astype() and Series.astype() have renamed the raise_on_error argument to errors (GH14967)
- is_sequence, is_any_int_dtype, and is_floating_dtype have been removed from pandas.api.types (GH16163, GH16189)
Performance improvements#
- Slicing Series and DataFrames with a monotonically increasing CategoricalIndex is now very fast and has speed comparable to slicing with an Int64Index. The speed increase applies both when indexing by label (using .loc) and by position (.iloc) (GH20395)
- Slicing a monotonically increasing CategoricalIndex itself (i.e. ci[1000:2000]) shows similar speed improvements as above (GH21659)
- Improved performance of CategoricalIndex.equals() when comparing to another CategoricalIndex (GH24023)
- Improved performance of Series.describe() in case of numeric dtypes (GH21274)
- Improved performance of pandas.core.groupby.GroupBy.rank() when dealing with tied rankings (GH21237)
- Improved performance of DataFrame.set_index() with columns consisting of Period objects (GH21582, GH21606)
- Improved performance of Series.at() and Index.get_value() for Extension Array values (e.g. Categorical) (GH24204)
- Improved performance of membership checks in Categorical and CategoricalIndex (i.e. x in cat-style checks are much faster). CategoricalIndex.contains() is likewise much faster (GH21369, GH21508)
- Improved performance of HDFStore.groups() (and dependent functions like HDFStore.keys(); i.e. x in store checks are much faster) (GH21372)
- Improved the performance of pandas.get_dummies() with sparse=True (GH21997)
- Improved performance of IndexEngine.get_indexer_non_unique() for sorted, non-unique indexes (GH9466)
- Improved performance of PeriodIndex.unique() (GH23083)
- Improved performance of concat() for Series objects (GH23404)
- Improved performance of DatetimeIndex.normalize() and Timestamp.normalize() for timezone naive or UTC datetimes (GH23634)
- Improved performance of DatetimeIndex.tz_localize() and various DatetimeIndex attributes with dateutil UTC timezone (GH23772)
- Fixed a performance regression on Windows with Python 3.7 of read_csv() (GH23516)
- Improved performance of Categorical constructor for Series objects (GH23814)
- Improved performance of where() for Categorical data (GH24077)
- Improved performance of iterating over a Series. Using DataFrame.itertuples() now creates iterators without internally allocating lists of all elements (GH20783)
- Improved performance of Period constructor, additionally benefitting PeriodArray and PeriodIndex creation (GH24084, GH24118)
- Improved performance of tz-aware DatetimeArray binary operations (GH24491)
Bug fixes#
Categorical#
- Bug in Categorical.from_codes() where NaN values in codes were silently converted to 0 (GH21767). In the future this will raise a ValueError. Also changes the behavior of .from_codes([1.1, 2.0]).
- Bug in Categorical.sort_values() where NaN values were always positioned in front regardless of na_position value. (GH22556).
- Bug when indexing with a boolean-valued Categorical. Now a boolean-valued Categorical is treated as a boolean mask (GH22665)
- Constructing a CategoricalIndex with empty values and boolean categories was raising a ValueError after a change to dtype coercion (GH22702).
- Bug in Categorical.take() with a user-provided fill_value not encoding the fill_value, which could result in a ValueError, incorrect results, or a segmentation fault (GH23296).
- In Series.unstack(), specifying a fill_value not present in the categories now raises a TypeError rather than ignoring the fill_value (GH23284)
- Bug when resampling DataFrame.resample() and aggregating on categorical data, the categorical dtype was getting lost. (GH23227)
- Bug in many methods of the .str-accessor, which always failed on calling the CategoricalIndex.str constructor (GH23555, GH23556)
- Bug in Series.where() losing the categorical dtype for categorical data (GH24077)
- Bug in Categorical.apply() where NaN values could be handled unpredictably. They now remain unchanged (GH24241)
- Bug in Categorical comparison methods incorrectly raising ValueError when operating against a DataFrame (GH24630)
- Bug in Categorical.set_categories() where setting fewer new categories with rename=True caused a segmentation fault (GH24675)
Datetimelike#
Fixed bug where two DateOffset objects with different normalize attributes could evaluate as equal (GH21404)
Fixed bug where Timestamp.resolution() incorrectly returned a 1-microsecond timedelta instead of a 1-nanosecond Timedelta (GH21336, GH21365)
Bug in to_datetime() that did not consistently return an Index when box=True was specified (GH21864)
Bug in DatetimeIndex comparisons where string comparisons incorrectly raised TypeError (GH22074)
Bug in DatetimeIndex comparisons when comparing against timedelta64[ns] dtyped arrays; in some cases TypeError was incorrectly raised, in others it incorrectly failed to raise (GH22074)
Bug in DatetimeIndex comparisons when comparing against object-dtyped arrays (GH22074)
Bug in DataFrame with datetime64[ns] dtype addition and subtraction with Timedelta-like objects (GH22005, GH22163)
Bug in DataFrame with datetime64[ns] dtype addition and subtraction with DateOffset objects returning an object dtype instead of datetime64[ns] dtype (GH21610, GH22163)
Bug in DataFrame with datetime64[ns] dtype comparing against NaT incorrectly (GH22242, GH22163)
Bug in DataFrame with datetime64[ns] dtype subtracting a Timestamp-like object that incorrectly returned datetime64[ns] dtype instead of timedelta64[ns] dtype (GH8554, GH22163)
Bug in DataFrame with datetime64[ns] dtype subtracting an np.datetime64 object with non-nanosecond unit failing to convert to nanoseconds (GH18874, GH22163)
Bug in DataFrame comparisons against Timestamp-like objects failing to raise TypeError for inequality checks with mismatched types (GH8932, GH22163)
Bug in DataFrame with mixed dtypes including datetime64[ns] incorrectly raising TypeError on equality comparisons (GH13128, GH22163)
Bug in DataFrame.values returning a DatetimeIndex for a single-column DataFrame with tz-aware datetime values. Now a 2-D numpy.ndarray of Timestamp objects is returned (GH24024)
Bug in DataFrame.eq() comparison against NaT incorrectly returning True or NaN (GH15697, GH22163)
Bug in DatetimeIndex subtraction that incorrectly failed to raise OverflowError (GH22492, GH22508)
Bug in DatetimeIndex incorrectly allowing indexing with a Timedelta object (GH20464)
Bug in DatetimeIndex where the frequency was being set if the original frequency was None (GH22150)
Bug in the rounding methods of DatetimeIndex (round(), ceil(), floor()) and Timestamp (round(), ceil(), floor()) that could give rise to loss of precision (GH22591)
Bug in to_datetime() with an Index argument that would drop the name from the result (GH21697)
Bug in PeriodIndex where adding or subtracting a timedelta or Tick object produced incorrect results (GH22988)
Bug in the Series repr with period-dtype data missing a space before the data (GH23601)
Bug in date_range() when decrementing a start date to a past end date by a negative frequency (GH23270)
Bug in Series.min() which would return NaN instead of NaT when called on a series of NaT (GH23282)
Bug in Series.combine_first() not properly aligning categoricals, so that missing values in self were not filled by valid values from other (GH24147)
Bug in DataFrame.combine() with datetimelike values raising a TypeError (GH23079)
Bug in date_range() with a frequency of Day or higher where dates sufficiently far in the future could wrap around to the past instead of raising OutOfBoundsDatetime (GH14187)
Bug in period_range() ignoring the frequency of start and end when those are provided as Period objects (GH20535).
Bug in PeriodIndex with attribute freq.n greater than 1 where adding a DateOffset object would return incorrect results (GH23215)
Bug in Series that interpreted string indices as lists of characters when setting datetimelike values (GH23451)
Bug in DataFrame when creating a new column from an ndarray of Timestamp objects with timezones creating an object-dtype column, rather than datetime with timezone (GH23932)
Bug in the Timestamp constructor which would drop the frequency of an input Timestamp (GH22311)
Bug in DatetimeIndex where calling np.array(dtindex, dtype=object) would incorrectly return an array of long objects (GH23524)
Bug in Index where passing a timezone-aware DatetimeIndex and dtype=object would incorrectly raise a ValueError (GH23524)
Bug in Index where calling np.array(dtindex, dtype=object) on a timezone-naive DatetimeIndex would return an array of datetime objects instead of Timestamp objects, potentially losing nanosecond portions of the timestamps (GH23524)
Bug in Categorical.__setitem__ not allowing setting with another Categorical when both are unordered and have the same categories, but in a different order (GH24142)
Bug in date_range() where using dates with millisecond resolution or higher could return incorrect values or the wrong number of values in the index (GH24110)
Bug in DatetimeIndex where constructing a DatetimeIndex from a Categorical or CategoricalIndex would incorrectly drop timezone information (GH18664)
Bug in DatetimeIndex and TimedeltaIndex where indexing with Ellipsis would incorrectly lose the index's freq attribute (GH21282)
Clarified the error message produced when passing an incorrect freq argument to DatetimeIndex with NaT as the first entry in the passed data (GH11587)
Bug in to_datetime() where the box and utc arguments were ignored when passing a DataFrame or dict of unit mappings (GH23760)
Bug in Series.dt where the cache would not update properly after an in-place operation (GH24408)
Bug in PeriodIndex where comparisons against an array-like object with length 1 failed to raise ValueError (GH23078)
Bug in DatetimeIndex.astype(), PeriodIndex.astype() and TimedeltaIndex.astype() ignoring the sign of the dtype for unsigned integer dtypes (GH24405).
Fixed bug in Series.max() with datetime64[ns]-dtype failing to return NaT when nulls are present and skipna=False is passed (GH24265)
Bug in to_datetime() where arrays of datetime objects containing both timezone-aware and timezone-naive datetimes would fail to raise ValueError (GH24569)
Bug in to_datetime() with an invalid datetime format that did not coerce the input to NaT even when errors='coerce' was specified (GH24763)
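The errors='coerce' fix in the last item is easy to demonstrate; an illustrative sketch on a recent pandas:

```python
import pandas as pd

# After GH24763, unparseable inputs are coerced to NaT when
# errors='coerce' is passed, instead of leaking through.
out = pd.to_datetime(["2019-01-25", "not-a-date"], errors="coerce")
assert out.isna().tolist() == [False, True]  # invalid entry became NaT
```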
Timedelta#
Bug in DataFrame with timedelta64[ns] dtype division by a Timedelta-like scalar incorrectly returning timedelta64[ns] dtype instead of float64 dtype (GH20088, GH22163)
Bug in adding an Index with object dtype to a Series with timedelta64[ns] dtype incorrectly raising (GH22390)
Bug in multiplying a Series with numeric dtype against a timedelta object (GH22390)
Bug in Series with numeric dtype when adding or subtracting an array or Series with timedelta64 dtype (GH22390)
Bug in Index with numeric dtype when multiplying or dividing an array with dtype timedelta64 (GH22390)
Bug in TimedeltaIndex incorrectly allowing indexing with a Timestamp object (GH20464)
Fixed bug where subtracting a Timedelta from an object-dtyped array would raise TypeError (GH21980)
Fixed bug in adding a DataFrame with all-timedelta64[ns] dtypes to a DataFrame with all-integer dtypes returning incorrect results instead of raising TypeError (GH22696)
Bug in TimedeltaIndex where adding a timezone-aware datetime scalar incorrectly returned a timezone-naive DatetimeIndex (GH23215)
Bug in TimedeltaIndex where adding np.timedelta64('NaT') incorrectly returned an all-NaT DatetimeIndex instead of an all-NaT TimedeltaIndex (GH23215)
Bug in Timedelta and to_timedelta() having inconsistencies in their supported unit strings (GH21762)
Bug in TimedeltaIndex division where dividing by another TimedeltaIndex raised TypeError instead of returning a Float64Index (GH23829, GH22631)
Bug in TimedeltaIndex comparison operations where comparing against non-Timedelta-like objects would raise TypeError instead of returning all-False for __eq__ and all-True for __ne__ (GH24056)
Bug in Timedelta comparisons when comparing with a Tick object incorrectly raising TypeError (GH24710)
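The division fix in the first item of this section (timedelta divided by a Timedelta-like scalar yielding a float) can be sketched as follows on a recent pandas:

```python
import pandas as pd

# After GH20088, dividing a timedelta64[ns] Series by a Timedelta-like
# scalar returns float64 (a ratio), not timedelta64[ns].
s = pd.Series(pd.to_timedelta(["1 day", "2 days", "4 days"]))
result = s / pd.Timedelta("1 day")
assert str(result.dtype) == "float64"
assert result.tolist() == [1.0, 2.0, 4.0]
```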
Timezones#
Bug in Index.shift() where an AssertionError would raise when shifting across DST (GH8616)
Bug in the Timestamp constructor where passing an invalid timezone offset designator (Z) would not raise a ValueError (GH8910)
Bug in Timestamp.replace() where replacing at a DST boundary would retain an incorrect offset (GH7825)
Bug in Series.replace() with datetime64[ns, tz] data when replacing NaT (GH11792)
Bug in Timestamp where passing different string date formats with a timezone offset would produce different timezone offsets (GH12064)
Bug when comparing a tz-naive Timestamp to a tz-aware DatetimeIndex which would coerce the DatetimeIndex to tz-naive (GH12601)
Bug in Series.truncate() with a tz-aware DatetimeIndex which would cause a core dump (GH9243)
Bug in the Series constructor which would coerce tz-aware and tz-naive Timestamp to tz-aware (GH13051)
Bug in Index with datetime64[ns, tz] dtype that did not localize integer data correctly (GH20964)
Bug in DatetimeIndex where constructing with an integer and tz would not localize correctly (GH12619)
Fixed bug where DataFrame.describe() and Series.describe() on tz-aware datetimes did not show the first and last result (GH21328)
Bug in DatetimeIndex comparisons failing to raise TypeError when comparing a timezone-aware DatetimeIndex against np.datetime64 (GH22074)
Bug in DataFrame assignment with a timezone-aware scalar (GH19843)
Bug in DataFrame.asof() that raised a TypeError when attempting to compare tz-naive and tz-aware timestamps (GH21194)
Bug when constructing a DatetimeIndex with a Timestamp constructed with the replace method across DST (GH18785)
Bug when setting a new value with DataFrame.loc() with a DatetimeIndex with a DST transition (GH18308, GH20724)
Bug in Index.unique() that did not re-localize tz-aware dates correctly (GH21737)
Bug in DataFrame.resample() and Series.resample() where an AmbiguousTimeError or NonExistentTimeError would raise if a timezone-aware timeseries ended on a DST transition (GH19375, GH10117)
Bug in DataFrame.drop() and Series.drop() when specifying a tz-aware Timestamp key to drop from a DatetimeIndex with a DST transition (GH21761)
Bug in the DatetimeIndex constructor where NaT and dateutil.tz.tzlocal would raise an OutOfBoundsDatetime error (GH23807)
Bug in DatetimeIndex.tz_localize() and Timestamp.tz_localize() with dateutil.tz.tzlocal near a DST transition that would return an incorrectly localized datetime (GH23807)
Bug in the Timestamp constructor where a dateutil.tz.tzutc timezone passed with a datetime.datetime argument would be converted to a pytz.UTC timezone (GH23807)
Bug in to_datetime() where utc=True was not respected when specifying a unit and errors='ignore' (GH23758)
Bug in to_datetime() where utc=True was not respected when passing a Timestamp (GH24415)
Bug in DataFrame.any() returning the wrong value when axis=1 and the data is of datetimelike type (GH23070)
Bug in DatetimeIndex.to_period() where a timezone-aware index was converted to UTC first before creating the PeriodIndex (GH22905)
Bug in DataFrame.tz_localize(), DataFrame.tz_convert(), Series.tz_localize(), and Series.tz_convert() where copy=False would mutate the original argument inplace (GH6326)
Bug in DataFrame.max() and DataFrame.min() with axis=1 where a Series with NaN would be returned when all columns contained the same timezone (GH10390)
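A minimal sketch of the utc=True behavior referenced above (utc honored together with unit), as it behaves on a recent pandas:

```python
import pandas as pd

# After GH23758, utc=True is respected when a unit= is also given:
# epoch seconds are parsed and the result is UTC-localized.
idx = pd.to_datetime([0, 86_400], unit="s", utc=True)
assert str(idx.tz) == "UTC"                        # tz-aware result
assert idx[1] - idx[0] == pd.Timedelta(days=1)     # 86400 s = 1 day
```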
Offsets#
Bug in FY5253 where date offsets could incorrectly raise an AssertionError in arithmetic operations (GH14774)
Bug in DateOffset where the keyword arguments week and milliseconds were accepted and ignored. Passing these will now raise ValueError (GH19398)
Bug in adding DateOffset with DataFrame or PeriodIndex incorrectly raising TypeError (GH23215)
Bug in comparing DateOffset objects with non-DateOffset objects, particularly strings, raising ValueError instead of returning False for equality checks and True for not-equal checks (GH23524)
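The comparison fix in the last item can be sketched as follows; an illustrative example on a recent pandas:

```python
import pandas as pd

# After GH23524, comparing a DateOffset to an unrelated object (e.g.
# an unparseable string) returns False/True instead of raising.
off = pd.DateOffset(days=1)
assert (off == "not-an-offset") is False
assert (off != "not-an-offset") is True
```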
Numeric#
Bug in Series.__rmatmul__ not supporting matrix-vector multiplication (GH21530)
Bug in factorize() failing with a read-only array (GH12813)
Fixed bug in unique() that handled signed zeros inconsistently: for some inputs 0.0 and -0.0 were treated as equal and for some inputs as different. Now they are treated as equal for all inputs (GH21866)
Bug in DataFrame.agg(), DataFrame.transform() and DataFrame.apply() where, when supplied with a list of functions and axis=1 (e.g. df.apply(['sum', 'mean'], axis=1)), a TypeError was wrongly raised. For all three methods such calculations are now done correctly. (GH16679).
Bug in Series comparison against datetime-like scalars and arrays (GH22074)
Bug in DataFrame multiplication between boolean dtype and integer returning object dtype instead of integer dtype (GH22047, GH22163)
Bug in DataFrame.apply() where, when supplied with a string argument and additional positional or keyword arguments (e.g. df.apply('sum', min_count=1)), a TypeError was wrongly raised (GH22376)
Bug in DataFrame.astype() to an extension dtype that could raise AttributeError (GH22578)
Bug in DataFrame with timedelta64[ns] dtype arithmetic operations with an ndarray with integer dtype incorrectly treating the ndarray as timedelta64[ns] dtype (GH23114)
Bug in Series.rpow() with object dtype returning NaN for 1 ** NA instead of 1 (GH22922).
Series.agg() can now handle numpy NaN-aware methods like numpy.nansum() (GH19629)
Bug in Series.rank() and DataFrame.rank() when pct=True and more than 2**24 rows are present resulted in percentages greater than 1.0 (GH18271)
Calls such as DataFrame.round() with a non-unique CategoricalIndex() now return expected data. Previously, data would be improperly duplicated (GH21809).
Added log10, floor and ceil to the list of supported functions in DataFrame.eval() (GH24139, GH24353)
Logical operations &, |, ^ between Series and Index will no longer raise ValueError (GH22092)
Checking PEP 3141 numbers in the is_scalar() function now returns True (GH22903)
Reduction methods like Series.sum() now accept the default value of keepdims=False when called from a NumPy ufunc, rather than raising a TypeError. Full support for keepdims has not been implemented (GH24356).
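The signed-zero fix for unique() can be sketched directly; an illustrative example on a recent pandas:

```python
import numpy as np
import pandas as pd

# After GH21866, 0.0 and -0.0 are treated as equal for all inputs,
# so they collapse to a single unique value.
vals = pd.unique(np.array([-0.0, 0.0, 1.0]))
assert len(vals) == 2  # one zero, plus 1.0
```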
Conversion#
Bug in DataFrame.combine_first() in which column types were unexpectedly converted to float (GH20699)
Bug in DataFrame.clip() in which column types were not preserved and were cast to float (GH24162)
Bug in DataFrame.clip() where the numeric results were wrong when the column order of the dataframes did not match (GH20911)
Bug in DataFrame.astype() where converting to an extension dtype when duplicate column names are present caused a RecursionError (GH24704)
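The clip dtype-preservation fix can be sketched as follows; an illustrative example on a recent pandas:

```python
import pandas as pd

# After GH24162, DataFrame.clip() with integer bounds keeps the
# integer dtype instead of casting the column to float.
df = pd.DataFrame({"i": [1, 5, 10]})
clipped = df.clip(lower=2, upper=8)
assert str(clipped["i"].dtype) == "int64"
assert clipped["i"].tolist() == [2, 5, 8]
```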
Strings#
Bug in Index.str.partition() not being nan-safe (GH23558).
Bug in Index.str.split() not being nan-safe (GH23677).
Bug in Series.str.contains() not respecting the na argument for a Categorical-dtype Series (GH22158)
Bug in Index.str.cat() when the result contained only NaN (GH24044)
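The str.contains fix for categorical data can be sketched as follows; an illustrative example on a recent pandas:

```python
import numpy as np
import pandas as pd

# After GH22158, the na argument is respected for a Categorical-dtype
# Series: missing values map to the given na value instead of leaking.
s = pd.Series(["apple", "banana", np.nan], dtype="category")
result = s.str.contains("an", na=False)
assert result.tolist() == [False, True, False]
```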
Interval#
Bug in the IntervalIndex constructor where the closed parameter did not always override the inferred closed (GH19370)
Bug in the IntervalIndex repr where a trailing comma was missing after the list of intervals (GH20611)
Bug in Interval where scalar arithmetic operations did not retain the closed value (GH22313)
Bug in IntervalIndex where indexing with datetime-like values raised a KeyError (GH20636)
Bug in IntervalTree where data containing NaN triggered a warning and resulted in incorrect indexing queries with IntervalIndex (GH23352)
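The Interval arithmetic fix can be sketched as follows; an illustrative example on a recent pandas:

```python
import pandas as pd

# After GH22313, scalar arithmetic on an Interval retains the
# closed value instead of resetting it.
iv = pd.Interval(0, 5, closed="left")
shifted = iv + 10
assert (shifted.left, shifted.right) == (10, 15)
assert shifted.closed == "left"  # closedness preserved
```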
Indexing#
Bug in DataFrame.ne() failing if the columns contain the column name "dtype" (GH22383)
The traceback from a KeyError when asking .loc for a single missing label is now shorter and more clear (GH21557)
PeriodIndex now emits a KeyError when a malformed string is looked up, which is consistent with the behavior of DatetimeIndex (GH22803)
When .ix is asked for a missing integer label in a MultiIndex with a first level of integer type, it now raises a KeyError, consistently with the case of a flat Int64Index, rather than falling back to positional indexing (GH21593)
Bug in Index.reindex() when reindexing a tz-naive and tz-aware DatetimeIndex (GH8306)
Bug in Series.reindex() when reindexing an empty series with a datetime64[ns, tz] dtype (GH20869)
Bug in DataFrame when setting values with .loc and a timezone-aware DatetimeIndex (GH11365)
DataFrame.__getitem__ now accepts dictionaries and dictionary keys as list-likes of labels, consistently with Series.__getitem__ (GH21294)
Fixed DataFrame[np.nan] when columns are non-unique (GH21428)
Bug when indexing DatetimeIndex with nanosecond resolution dates and timezones (GH11679)
Bug where indexing with a Numpy array containing negative values would mutate the indexer (GH21867)
Bug where mixed indexes wouldn't allow integers for .at (GH19860)
Float64Index.get_loc now raises KeyError when a boolean key is passed (GH19087)
Bug in DataFrame.loc() when indexing with an IntervalIndex (GH19977)
Index no longer mangles None, NaN and NaT, i.e. they are treated as three different keys. However, for a numeric Index all three are still coerced to NaN (GH22332)
Bug in checking whether a float scalar is contained in an integer-dtype Index (GH22085)
Bug in MultiIndex.set_levels() when the levels value is not subscriptable (GH23273)
Bug where setting a timedelta column by Index caused it to be cast to double, losing precision (GH23511)
Bug in Index.union() and Index.intersection() where the name of the Index of the result was not computed correctly for certain cases (GH9943, GH9862)
Bug in Index slicing with a boolean Index that could raise TypeError (GH22533)
Bug in PeriodArray.__setitem__ when accepting a slice and list-like value (GH23978)
Bug in DatetimeIndex and TimedeltaIndex where indexing with Ellipsis would lose their freq attribute (GH21282)
Bug in iat where using it to assign an incompatible value would create a new column (GH23236)
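The dict-keys behavior of DataFrame.__getitem__ noted above can be sketched as follows; an illustrative example on a recent pandas:

```python
import pandas as pd

# After GH21294, dict keys are accepted as a list-like of column
# labels, just like an ordinary list of names.
df = pd.DataFrame({"a": [1], "b": [2], "c": [3]})
sub = df[{"a": None, "c": None}.keys()]
assert list(sub.columns) == ["a", "c"]
```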
Missing#
Bug in DataFrame.fillna() where a ValueError would raise when one column contained a datetime64[ns, tz] dtype (GH15522)
Bug in Series.hasnans() that could be incorrectly cached and return incorrect answers if null elements are introduced after an initial call (GH19700)
Series.isin() now treats all NaN-floats as equal also for np.object_-dtype. This behavior is consistent with the behavior for float64 (GH22119)
unique() no longer mangles NaN-floats and the NaT-object for np.object_-dtype, i.e. NaT is no longer coerced to a NaN-value and is treated as a different entity. (GH22295)
DataFrame and Series now properly handle numpy masked arrays with hardened masks. Previously, constructing a DataFrame or Series from a masked array with a hard mask would create a pandas object containing the underlying value, rather than the expected NaN. (GH24574)
Bug in the DataFrame constructor where the dtype argument was not honored when handling numpy masked record arrays. (GH24874)
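The isin() NaN-equality behavior can be sketched as follows; an illustrative example on a recent pandas:

```python
import numpy as np
import pandas as pd

# After GH22119, NaN-floats compare equal in isin() for object dtype,
# matching the long-standing float64 behavior.
s = pd.Series([np.nan, "a"], dtype=object)
assert s.isin([np.nan]).tolist() == [True, False]
```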
MultiIndex#
Bug in io.formats.style.Styler.applymap() where subset= with a MultiIndex slice would reduce to a Series (GH19861)
Removed compatibility for MultiIndex pickles prior to version 0.8.0; compatibility with MultiIndex pickles from version 0.13 forward is maintained (GH21654)
MultiIndex.get_loc_level() (and as a consequence, .loc on a Series or DataFrame with a MultiIndex index) will now raise a KeyError, rather than returning an empty slice, if asked for a label which is present in the levels but is unused (GH22221)
MultiIndex has gained the MultiIndex.from_frame() method; it allows constructing a MultiIndex object from a DataFrame (GH22420)
Fix TypeError in Python 3 when creating a MultiIndex in which some levels have mixed types, e.g. when some labels are tuples (GH15457)
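The new MultiIndex.from_frame() constructor can be sketched as follows; an illustrative example on a recent pandas:

```python
import pandas as pd

# After GH22420, a MultiIndex can be built directly from a DataFrame:
# each column becomes a level, and column names become level names.
df = pd.DataFrame({"a": [1, 1, 2], "b": ["x", "y", "x"]})
mi = pd.MultiIndex.from_frame(df)
assert list(mi.names) == ["a", "b"]
assert mi[0] == (1, "x")
```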
IO#
Bug in read_csv() in which a column specified with CategoricalDtype of boolean categories was not being correctly coerced from string values to booleans (GH20498)
Bug in read_csv() in which unicode column names were not being properly recognized with Python 2.x (GH13253)
Bug in DataFrame.to_sql() where writing timezone-aware data (datetime64[ns, tz] dtype) would raise a TypeError (GH9086)
Bug in DataFrame.to_sql() where a naive DatetimeIndex would be written as TIMESTAMP WITH TIMEZONE type in supported databases, e.g. PostgreSQL (GH23510)
Bug in read_excel() when parse_cols is specified with an empty dataset (GH9208)
read_html() no longer ignores all-whitespace <tr> within <thead> when considering the skiprows and header arguments. Previously, users had to decrease their header and skiprows values on such tables to work around the issue. (GH21641)
read_excel() will correctly show the deprecation warning for the previously deprecated sheetname (GH17994)
read_csv() and read_table() will throw a UnicodeError and not core dump on badly encoded strings (GH22748)
read_csv() will correctly parse timezone-aware datetimes (GH22256)
Bug in read_csv() in which memory management was prematurely optimized for the C engine when the data was being read in chunks (GH23509)
Bug in read_csv() in which unnamed columns were being improperly identified when extracting a multi-index (GH23687)
read_sas() will correctly parse numbers in sas7bdat files that have width less than 8 bytes. (GH21616)
read_sas() will correctly parse sas7bdat files with many columns (GH22628)
read_sas() will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (GH16615)
Bug in read_sas() in which an incorrect error was raised on an invalid file format. (GH24548)
Bug in detect_client_encoding() where a potential IOError went unhandled when importing in a mod_wsgi process due to restricted access to stdout. (GH21552)
Bug in DataFrame.to_html() with index=False missing truncation indicators (…) on a truncated DataFrame (GH15019, GH22783)
Bug in DataFrame.to_html() with index=False when both columns and row index are MultiIndex (GH22579)
Bug in DataFrame.to_html() with index_names=False displaying the index name (GH22747)
Bug in DataFrame.to_html() with header=False not displaying row index names (GH23788)
Bug in DataFrame.to_html() with sparsify=False that caused it to raise TypeError (GH22887)
Bug in DataFrame.to_string() that broke column alignment when index=False and the width of the first column's values is greater than the width of the first column's header (GH16839, GH13032)
Bug in DataFrame.to_string() that caused representations of a DataFrame to not take up the whole window (GH22984)
Bug in DataFrame.to_csv() where a single-level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (GH19589).
HDFStore will raise ValueError when the format kwarg is passed to the constructor (GH13291)
Bug in HDFStore.append() when appending a DataFrame with an empty string column and min_itemsize < 8 (GH12242)
Bug in read_csv() in which memory leaks occurred in the C engine when parsing NaN values due to insufficient cleanup on completion or error (GH21353)
Bug in read_csv() in which incorrect error messages were being raised when skipfooter was passed in along with nrows, iterator, or chunksize (GH23711)
Bug in read_csv() in which MultiIndex index names were being improperly handled in the cases when they were not provided (GH23484)
Bug in read_csv() in which unnecessary warnings were being raised when the dialect's values conflicted with the default arguments (GH23761)
Bug in read_html() in which the error message was not displaying the valid flavors when an invalid one was provided (GH23549)
Bug in read_excel() in which extraneous header names were extracted, even though none were specified (GH11733)
Bug in read_excel() in which column names were not being properly converted to string sometimes in Python 2.x (GH23874)
Bug in read_excel() in which index_col=None was not being respected and index columns were being parsed anyway (GH18792, GH20480)
Bug in read_excel() in which usecols was not being validated for proper column names when passed in as a string (GH20480)
Bug in DataFrame.to_dict() when the resulting dict contains non-Python scalars in the case of numeric data (GH23753)
DataFrame.to_string(), DataFrame.to_html(), DataFrame.to_latex() will correctly format output when a string is passed as the float_format argument (GH21625, GH22270)
Bug in read_csv() that caused it to raise OverflowError when trying to use 'inf' as na_value with an integer index column (GH17128)
Bug in read_csv() that caused the C engine on Python 3.6+ on Windows to improperly read CSV filenames with accented or special characters (GH15086)
Bug in read_fwf() in which the compression type of a file was not being properly inferred (GH22199)
Bug in pandas.io.json.json_normalize() that caused it to raise TypeError when two consecutive elements of record_path are dicts (GH22706)
Bug in DataFrame.to_stata(), pandas.io.stata.StataWriter and pandas.io.stata.StataWriter117 where an exception would leave a partially written and invalid dta file (GH23573)
Bug in DataFrame.to_stata() and pandas.io.stata.StataWriter117 that produced invalid files when using strLs with non-ASCII characters (GH23573)
Bug in HDFStore that caused it to raise ValueError when reading a DataFrame in Python 3 from fixed format written in Python 2 (GH24510)
Bug in DataFrame.to_string() and more generally in the floating repr formatter. Zeros were not trimmed if inf was present in a column while they were trimmed in the presence of NA values. Zeros are now trimmed as in the presence of NA (GH24861).
Bug in the repr when truncating the number of columns and having a wide last column (GH24849).
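The float_format string support mentioned above can be sketched as follows; an illustrative example on a recent pandas:

```python
import pandas as pd

# After GH21625/GH22270, a printf-style string works as float_format
# in to_string() (and to_html()/to_latex()) alongside callables.
df = pd.DataFrame({"x": [0.12345, 1.5]})
out = df.to_string(float_format="%.2f")
assert "0.12" in out and "1.50" in out
```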
Plotting#
Bug in DataFrame.plot.scatter() and DataFrame.plot.hexbin() that caused the x-axis label and ticklabels to disappear when the colorbar was on in the IPython inline backend (GH10611, GH10678, and GH20455)
Bug in plotting a Series with datetimes using matplotlib.axes.Axes.scatter() (GH22039)
Bug in DataFrame.plot.bar() that caused bars to use multiple colors instead of a single one (GH20585)
Bug in validating the color parameter that caused an extra color to be appended to the given color array. This happened to multiple plotting functions using matplotlib. (GH20726)
GroupBy/resample/rolling#
Bug in pandas.core.window.Rolling.min() and pandas.core.window.Rolling.max() with closed='left', a datetime-like index and only one entry in the series leading to segfault (GH24718)
Bug in pandas.core.groupby.GroupBy.first() and pandas.core.groupby.GroupBy.last() with as_index=False leading to the loss of timezone information (GH15884)
Bug in DataFrame.resample() when downsampling across a DST boundary (GH8531)
Bug in date anchoring for DataFrame.resample() with offset Day when n > 1 (GH24127)
Bug where ValueError was wrongly raised when calling the count() method of a SeriesGroupBy when the grouping variable only contains NaNs and numpy version < 1.13 (GH21956).
Multiple bugs in pandas.core.window.Rolling.min() with closed='left' and a datetime-like index leading to incorrect results and also segfault. (GH21704)
Bug in pandas.core.resample.Resampler.apply() when passing positional arguments to the applied func (GH14615).
Bug in Series.resample() when passing numpy.timedelta64 to the loffset kwarg (GH7687).
Bug in pandas.core.resample.Resampler.asfreq() when the frequency of a TimedeltaIndex is a subperiod of a new frequency (GH13022).
Bug in pandas.core.groupby.SeriesGroupBy.mean() when values were integral but could not fit inside of int64, overflowing instead. (GH22487)
pandas.core.groupby.RollingGroupby.agg() and pandas.core.groupby.ExpandingGroupby.agg() now support multiple aggregation functions as parameters (GH15072)
Bug in DataFrame.resample() and Series.resample() when resampling by a weekly offset ('W') across a DST transition (GH9119, GH21459)
Bug in DataFrame.expanding() in which the axis argument was not being respected during aggregations (GH23372)
Bug in pandas.core.groupby.GroupBy.transform() which caused missing values when the input function can accept a DataFrame but renames it (GH23455).
Bug in pandas.core.groupby.GroupBy.nth() where the column order was not always preserved (GH20760)
Bug in pandas.core.groupby.GroupBy.rank() with method='dense' and pct=True where a group with only one member would raise a ZeroDivisionError (GH23666).
Calling pandas.core.groupby.GroupBy.rank() with empty groups and pct=True was raising a ZeroDivisionError (GH22519)
Bug in DataFrame.resample() when resampling NaT in a TimedeltaIndex (GH13223).
Bug in DataFrame.groupby() that did not respect the observed argument when selecting a column and instead always used observed=False (GH23970)
Bug where pandas.core.groupby.SeriesGroupBy.pct_change() and pandas.core.groupby.DataFrameGroupBy.pct_change() would previously work across groups when calculating the percent change; they now correctly work per group (GH21200, GH21235).
Bug preventing hash table creation with a very large number (2^32) of rows (GH22805)
Bug in groupby where grouping on a categorical caused a ValueError and incorrect grouping if observed=True and nan is present in the categorical column (GH24740, GH21151).
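The per-group pct_change fix can be sketched as follows; an illustrative example on a recent pandas:

```python
import numpy as np
import pandas as pd

# After GH21200/GH21235, pct_change is computed within each group:
# the first row of every group is NaN, and changes never cross groups.
df = pd.DataFrame({"g": ["a", "a", "b", "b"], "v": [1.0, 2.0, 10.0, 30.0]})
result = df.groupby("g")["v"].pct_change()
assert np.isnan(result.iloc[0]) and np.isnan(result.iloc[2])
assert result.iloc[1] == 1.0  # (2 - 1) / 1 within group "a"
assert result.iloc[3] == 2.0  # (30 - 10) / 10 within group "b"
```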
Reshaping#
Bug in pandas.concat() when joining resampled DataFrames with a timezone-aware index (GH13783)
Bug in pandas.concat() when joining only Series objects; the names argument of concat is no longer ignored (GH23490)
Bug in Series.combine_first() with datetime64[ns, tz] dtype which would return a tz-naive result (GH21469)
Bug in Series.where() and DataFrame.where() with datetime64[ns, tz] dtype (GH21546)
Bug in DataFrame.where() with an empty DataFrame and an empty cond having non-bool dtype (GH21947)
Bug in Series.mask() and DataFrame.mask() with list conditionals (GH21891)
Bug in DataFrame.replace() raising RecursionError when converting out-of-bounds datetime64[ns, tz] values (GH20380)
pandas.core.groupby.GroupBy.rank() now raises a ValueError when an invalid value is passed for the na_option argument (GH22124)
Bug in get_dummies() with Unicode attributes in Python 2 (GH22084)
Bug in DataFrame.replace() raising RecursionError when replacing empty lists (GH22083)
Bug in Series.replace() and DataFrame.replace() when a dict is used as the to_replace value and one key in the dict is another key's value; the results were inconsistent between using an integer key and using a string key (GH20656)
Bug in DataFrame.drop_duplicates() for an empty DataFrame which incorrectly raised an error (GH20516)
Bug in pandas.wide_to_long() when a string is passed to the stubnames argument and a column name is a substring of that stubname (GH22468)
Bug in merge() when merging datetime64[ns, tz] data that contained a DST transition (GH18885)
Bug in merge_asof() when merging on float values within the defined tolerance (GH22981)
Bug in pandas.concat() when concatenating a multicolumn DataFrame with tz-aware data against a DataFrame with a different number of columns (GH22796)
Bug in merge_asof() where a confusing error message was raised when attempting to merge with missing values (GH23189)
Bug in DataFrame.nsmallest() and DataFrame.nlargest() for dataframes that have a MultiIndex for columns (GH23033).
Bug in pandas.melt() when passing column names that are not present in the DataFrame (GH23575)
Bug in DataFrame.append() with a Series with a dateutil timezone which would raise a TypeError (GH23682)
Bug in Series construction when passing no data and dtype=str (GH22477)
Bug in cut() with bins as an overlapping IntervalIndex where multiple bins were returned per item instead of raising a ValueError (GH23980)
Bug in pandas.concat() when joining a datetimetz Series with a category Series which would lose the timezone (GH23816)
Bug in DataFrame.join() when joining on a partial MultiIndex which would drop names (GH20452).
DataFrame.nlargest() and DataFrame.nsmallest() now return the correct n values when keep != 'all' and there are ties in the first columns (GH22752)
Constructing a DataFrame with an index argument that wasn't already an instance of Index was broken (GH22227).
Bug in DataFrame that prevented list subclasses from being used in construction (GH21226)
Bug in DataFrame.unstack() and DataFrame.pivot_table() returning a misleading error message when the resulting DataFrame has more elements than int32 can handle. Now, the error message is improved, pointing towards the actual problem (GH20601)
Bug in DataFrame.unstack() where a ValueError was raised when unstacking timezone-aware values (GH18338)
Bug in DataFrame.stack() where timezone-aware values were converted to timezone-naive values (GH19420)
Bug in merge_asof() where a TypeError was raised when by_col were timezone-aware values (GH21184)
Bug showing an incorrect shape when throwing an error during DataFrame construction. (GH20742)
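The cut() fix for overlapping IntervalIndex bins can be sketched as follows; an illustrative example on a recent pandas:

```python
import pandas as pd

# After GH23980, overlapping IntervalIndex bins raise a ValueError
# instead of silently returning multiple bins per item.
bins = pd.IntervalIndex.from_tuples([(0, 2), (1, 3)])  # overlapping
try:
    pd.cut([1.5], bins=bins)
except ValueError:
    raised = True
else:
    raised = False
assert raised
```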
Sparse#
- Updating a boolean, datetime, or timedelta column to be Sparse now works (GH22367)
- Bug in Series.to_sparse() with a Series already holding sparse data not constructing properly (GH22389)
- Providing a sparse_index to the SparseArray constructor no longer defaults the na-value to np.nan for all dtypes. The correct na_value for data.dtype is now used.
- Bug in SparseArray.nbytes under-reporting its memory usage by not including the size of its sparse index.
- Improved performance of Series.shift() for non-NA fill_value, as values are no longer converted to a dense array.
- Bug in DataFrame.groupby not including fill_value in the groups for non-NA fill_value when grouping by a sparse column (GH5078)
- Bug in the unary inversion operator (~) on a SparseSeries with boolean values. The performance of this operation has also been improved (GH22835)
- Bug in SparseArray.unique() not returning the unique values (GH19595)
- Bug in SparseArray.nonzero() and SparseDataFrame.dropna() returning shifted/incorrect results (GH21172)
- Bug in DataFrame.apply() where dtypes would lose sparseness (GH23744)
- Bug in concat() when concatenating a list of Series with all-sparse values changing the fill_value and converting to a dense Series (GH24371)
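The dtype-aware na/fill value noted above can be sketched as follows (the values are illustrative):

```python
import pandas as pd

# For integer data the fill value follows data.dtype: it defaults to 0
# rather than np.nan, so only the non-zero entries are stored.
arr = pd.arrays.SparseArray([0, 0, 1, 2, 0])
print(arr.fill_value)       # 0
print(list(arr.sp_values))  # [1, 2]

# nbytes accounts for both the stored values and the sparse index.
print(arr.nbytes > arr.sp_values.nbytes)  # True
```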
Style#
- background_gradient() now takes a text_color_threshold parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (GH21258)
- background_gradient() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None (GH15204)
- bar() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None and setting the clipping range with vmin and vmax (GH21548 and GH21526). NaN values are also handled properly.
Build changes#
- Building pandas for development now requires cython >= 0.28.2 (GH21688)
- Testing pandas now requires hypothesis>=3.58. You can find the Hypothesis docs here, and a pandas-specific introduction in the contributing guide. (GH22280)
- Building pandas on macOS now targets minimum macOS 10.9 if run on macOS 10.9 or above (GH23424)
Other#
- Bug where C variables were declared with external linkage, causing import errors if certain other C libraries were imported before pandas. (GH24113)
Contributors#
A total of 337 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.
AJ Dyka +
AJ Pryor, Ph.D +
Aaron Critchley
Adam Hooper
Adam J. Stewart
Adam Kim
Adam Klimont +
Addison Lynch +
Alan Hogue +
Alex Radu +
Alex Rychyk
Alex Strick van Linschoten +
Alex Volkov +
Alexander Buchkovsky
Alexander Hess +
Alexander Ponomaroff +
Allison Browne +
Aly Sivji
Andrew
Andrew Gross +
Andrew Spott +
Andy +
Aniket uttam +
Anjali2019 +
Anjana S +
Antti Kaihola +
Anudeep Tubati +
Arjun Sharma +
Armin Varshokar
Artem Bogachev
ArtinSarraf +
Barry Fitzgerald +
Bart Aelterman +
Ben James +
Ben Nelson +
Benjamin Grove +
Benjamin Rowell +
Benoit Paquet +
Boris Lau +
Brett Naul
Brian Choi +
C.A.M. Gerlach +
Carl Johan +
Chalmer Lowe
Chang She
Charles David +
Cheuk Ting Ho
Chris
Chris Roberts +
Christopher Whelan
Chu Qing Hao +
Da Cheezy Mobsta +
Damini Satya
Daniel Himmelstein
Daniel Saxton +
Darcy Meyer +
DataOmbudsman
David Arcos
David Krych
Dean Langsam +
Diego Argueta +
Diego Torres +
Dobatymo +
Doug Latornell +
Dr. Irv
Dylan Dmitri Gray +
Eric Boxer +
Eric Chea
Erik +
Erik Nilsson +
Fabian Haase +
Fabian Retkowski
Fabien Aulaire +
Fakabbir Amin +
Fei Phoon +
Fernando Margueirat +
Florian Müller +
Fábio Rosado +
Gabe Fernando
Gabriel Reid +
Giftlin Rajaiah
Gioia Ballin +
Gjelt
Gosuke Shibahara +
Graham Inggs
Guillaume Gay
Guillaume Lemaitre +
Hannah Ferchland
Haochen Wu
Hubert +
HubertKl +
HyunTruth +
Iain Barr
Ignacio Vergara Kausel +
Irv Lustig +
IsvenC +
Jacopo Rota
Jakob Jarmar +
James Bourbeau +
James Myatt +
James Winegar +
Jan Rudolph
Jared Groves +
Jason Kiley +
Javad Noorbakhsh +
Jay Offerdahl +
Jeff Reback
Jeongmin Yu +
Jeremy Schendel
Jerod Estapa +
Jesper Dramsch +
Jim Jeon +
Joe Jevnik
Joel Nothman
Joel Ostblom +
Jordi Contestí
Jorge López Fueyo +
Joris Van den Bossche
Jose Quinones +
Jose Rivera-Rubio +
Josh
Jun +
Justin Zheng +
Kaiqi Dong +
Kalyan Gokhale
Kang Yoosam +
Karl Dunkle Werner +
Karmanya Aggarwal +
Kevin Markham +
Kevin Sheppard
Kimi Li +
Koustav Samaddar +
Krishna +
Kristian Holsheimer +
Ksenia Gueletina +
Kyle Prestel +
LJ +
LeakedMemory +
Li Jin +
Licht Takeuchi
Luca Donini +
Luciano Viola +
Mak Sze Chun +
Marc Garcia
Marius Potgieter +
Mark Sikora +
Markus Meier +
Marlene Silva Marchena +
Martin Babka +
MatanCohe +
Mateusz Woś +
Mathew Topper +
Matt Boggess +
Matt Cooper +
Matt Williams +
Matthew Gilbert
Matthew Roeschke
Max Kanter
Michael Odintsov
Michael Silverstein +
Michael-J-Ward +
Mickaël Schoentgen +
Miguel Sánchez de León Peque +
Ming Li
Mitar
Mitch Negus
Monson Shao +
Moonsoo Kim +
Mortada Mehyar
Myles Braithwaite
Nehil Jain +
Nicholas Musolino +
Nicolas Dickreuter +
Nikhil Kumar Mengani +
Nikoleta Glynatsi +
Ondrej Kokes
Pablo Ambrosio +
Pamela Wu +
Parfait G +
Patrick Park +
Paul
Paul Ganssle
Paul Reidy
Paul van Mulbregt +
Phillip Cloud
Pietro Battiston
Piyush Aggarwal +
Prabakaran Kumaresshan +
Pulkit Maloo
Pyry Kovanen
Rajib Mitra +
Redonnet Louis +
Rhys Parry +
Rick +
Robin
Roei.r +
RomainSa +
Roman Imankulov +
Roman Yurchak +
Ruijing Li +
Ryan +
Ryan Nazareth +
Rüdiger Busche +
SEUNG HOON, SHIN +
Sandrine Pataut +
Sangwoong Yoon
Santosh Kumar +
Saurav Chakravorty +
Scott McAllister +
Sean Chan +
Shadi Akiki +
Shengpu Tang +
Shirish Kadam +
Simon Hawkins +
Simon Riddell +
Simone Basso
Sinhrks
Soyoun(Rose) Kim +
Srinivas Reddy Thatiparthy (శ్రీనివాస్ రెడ్డి తాటిపర్తి) +
Stefaan Lippens +
Stefano Cianciulli
Stefano Miccoli +
Stephen Childs
Stephen Pascoe
Steve Baker +
Steve Cook +
Steve Dower +
Stéphan Taljaard +
Sumin Byeon +
Sören +
Tamas Nagy +
Tanya Jain +
Tarbo Fukazawa
Thein Oo +
Thiago Cordeiro da Fonseca +
Thierry Moisan
Thiviyan Thanapalasingam +
Thomas Lentali +
Tim D. Smith +
Tim Swast
Tom Augspurger
Tomasz Kluczkowski +
Tony Tao +
Triple0 +
Troels Nielsen +
Tuhin Mahmud +
Tyler Reddy +
Uddeshya Singh
Uwe L. Korn +
Vadym Barda +
Varad Gunjal +
Victor Maryama +
Victor Villas
Vincent La
Vitória Helena +
Vu Le
Vyom Jain +
Weiwen Gu +
Wenhuan
Wes Turner
Wil Tan +
William Ayd
Yeojin Kim +
Yitzhak Andrade +
Yuecheng Wu +
Yuliya Dovzhenko +
Yury Bayda +
Zac Hatfield-Dodds +
aberres +
aeltanawy +
ailchau +
alimcmaster1
alphaCTzo7G +
amphy +
araraonline +
azure-pipelines[bot] +
benarthur91 +
bk521234 +
cgangwar11 +
chris-b1
cxl923cc +
dahlbaek +
dannyhyunkim +
darke-spirits +
david-liu-brattle-1
davidmvalente +
deflatSOCO
doosik_bae +
dylanchase +
eduardo naufel schettino +
euri10 +
evangelineliu +
fengyqf +
fjdiod
fl4p +
fleimgruber +
gfyoung
h-vetinari
harisbal +
henriqueribeiro +
himanshu awasthi
hongshaoyang +
igorfassen +
jalazbe +
jbrockmendel
jh-wu +
justinchan23 +
louispotok
marcosrullan +
miker985
nicolab100 +
nprad
nsuresh +
ottiP
pajachiet +
raguiar2 +
ratijas +
realead +
robbuckley +
saurav2608 +
sideeye +
ssikdar1
svenharris +
syutbai +
testvinder +
thatneat
tmnhat2001
tomascassidy +
tomneep
topper-123
vkk800 +
winlu +
ym-pett +
yrhooke +
ywpark1 +
zertrin
zhezherun +