What’s new in 1.5.0 (??)¶
These are the changes in pandas 1.5.0. See Release notes for a full changelog including other versions of pandas.
Enhancements¶
DataFrame exchange protocol implementation¶
pandas now implements the DataFrame exchange API spec. See the full details on the API at https://data-apis.org/dataframe-protocol/latest/index.html
The protocol consists of two parts:
- New method DataFrame.__dataframe__(), which produces the exchange object. It effectively "exports" a pandas DataFrame as an exchange object, so any other library that implements the protocol can "import" that DataFrame without knowing anything about the producer except that it produces an exchange object.
- New function pandas.api.exchange.from_dataframe(), which can take an arbitrary exchange object from any conformant library and construct a pandas DataFrame out of it.
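The two parts can be sketched as a round trip through the protocol. A minimal example, with made-up data; note that released pandas versions expose the constructor as pandas.api.interchange.from_dataframe (this snapshot refers to an earlier pandas.api.exchange module path):

```python
import pandas as pd
from pandas.api.interchange import from_dataframe

df = pd.DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})

# "Export" the DataFrame as an interchange/exchange object.
xchg = df.__dataframe__()

# "Import" it back through the protocol-level constructor; any conformant
# library's exchange object would work here, not just pandas' own.
df2 = from_dataframe(xchg)
```

Any library that implements the producer side of the protocol can hand its exchange object to from_dataframe() in the same way.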
Styler¶
The most notable development is the new method Styler.concat(), which allows adding customised footer rows to visualise additional calculations on the data, e.g. totals and counts (GH43875, GH46186).
Additionally there is an alternative output method Styler.to_string(), which allows using the Styler's formatting methods to create, for example, CSVs (GH44502).
Minor feature improvements are:
- Adding the ability to render border and border-{side} CSS properties in Excel (GH42276)
- Making keyword arguments consistent: Styler.highlight_null() now accepts color and deprecates null_color, although this remains backwards compatible (GH45907)
Control of index with group_keys
in DataFrame.resample()
¶
The argument group_keys
has been added to the method DataFrame.resample()
.
As with DataFrame.groupby(), this argument controls whether each group is added
to the index in the resample when Resampler.apply()
is used.
Warning
Not specifying the group_keys
argument will retain the
previous behavior and emit a warning if the result will change
by specifying group_keys=False
. In a future version
of pandas, not specifying group_keys
will default to
the same behavior as group_keys=False
.
In [1]: df = pd.DataFrame(
...: {'a': range(6)},
...: index=pd.date_range("2021-01-01", periods=6, freq="8H")
...: )
...:
In [2]: df.resample("D", group_keys=True).apply(lambda x: x)
Out[2]:
a
2021-01-01 2021-01-01 00:00:00 0
2021-01-01 08:00:00 1
2021-01-01 16:00:00 2
2021-01-02 2021-01-02 00:00:00 3
2021-01-02 08:00:00 4
2021-01-02 16:00:00 5
[6 rows x 1 columns]
In [3]: df.resample("D", group_keys=False).apply(lambda x: x)
Out[3]:
a
2021-01-01 00:00:00 0
2021-01-01 08:00:00 1
2021-01-01 16:00:00 2
2021-01-02 00:00:00 3
2021-01-02 08:00:00 4
2021-01-02 16:00:00 5
[6 rows x 1 columns]
Previously, the resulting index would depend upon the values returned by apply
,
as seen in the following example.
In [1]: # pandas 1.3
In [2]: df.resample("D").apply(lambda x: x)
Out[2]:
a
2021-01-01 00:00:00 0
2021-01-01 08:00:00 1
2021-01-01 16:00:00 2
2021-01-02 00:00:00 3
2021-01-02 08:00:00 4
2021-01-02 16:00:00 5
In [3]: df.resample("D").apply(lambda x: x.reset_index())
Out[3]:
index a
2021-01-01 0 2021-01-01 00:00:00 0
1 2021-01-01 08:00:00 1
2 2021-01-01 16:00:00 2
2021-01-02 0 2021-01-02 00:00:00 3
1 2021-01-02 08:00:00 4
2 2021-01-02 16:00:00 5
Reading directly from TAR archives¶
I/O methods like read_csv()
or DataFrame.to_json()
now allow reading and writing
directly on TAR archives (GH44787).
df = pd.read_csv("./movement.tar.gz")
# ...
df.to_csv("./out.tar.gz")
This supports .tar, .tar.gz, .tar.bz2 and .tar.xz archives.
The compression method used is inferred from the filename.
If the compression method cannot be inferred, use the compression
argument:
df = pd.read_csv(some_file_obj, compression={"method": "tar", "mode": "r:gz"}) # noqa F821
(mode
being one of tarfile.open
’s modes: https://docs.python.org/3/library/tarfile.html#tarfile.open)
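A self-contained round trip through a gzip-compressed TAR archive, using made-up data and a temporary directory:

```python
import os
import tempfile

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "data.tar.gz")
    # Both the "tar" method and the gzip mode are inferred from ".tar.gz".
    df.to_csv(path, index=False)
    out = pd.read_csv(path)
```

The same round trip works for the other supported suffixes; only when the filename carries no usable extension (e.g. a file object) is the explicit compression dict needed.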
Other enhancements¶
- Series.map() now raises when arg is dict but na_action is not either None or 'ignore' (GH46588)
- MultiIndex.to_frame() now supports the argument allow_duplicates and raises on duplicate labels if it is missing or False (GH45245)
- StringArray now accepts array-likes containing nan-likes (None, np.nan) for the values parameter in its constructor, in addition to strings and pandas.NA (GH40839)
- Improved the rendering of categories in CategoricalIndex (GH45218)
- to_numeric() now preserves float64 arrays when downcasting would generate values not representable in float32 (GH43693)
- Series.reset_index() and DataFrame.reset_index() now support the argument allow_duplicates (GH44410)
- GroupBy.min() and GroupBy.max() now support Numba execution with the engine keyword (GH45428)
- read_csv() now supports defaultdict as a dtype parameter (GH41574)
- DataFrame.rolling() and Series.rolling() now support a step parameter with fixed-length windows (GH15354)
- Implemented a bool-dtype Index; passing a bool-dtype array-like to pd.Index will now retain bool dtype instead of casting to object (GH45061)
- Implemented a complex-dtype Index; passing a complex-dtype array-like to pd.Index will now retain complex dtype instead of casting to object (GH45845)
- Series and DataFrame with IntegerDtype now support bitwise operations (GH34463)
- Added milliseconds field support for DateOffset (GH43371)
- DataFrame.reset_index() now accepts a names argument which renames the index names (GH6878)
- pd.concat() now raises when levels is given but keys is None (GH46653)
- pd.concat() now raises when levels contains duplicate values (GH46653)
- Added numeric_only argument to DataFrame.corr(), DataFrame.corrwith(), DataFrame.cov(), DataFrame.idxmin(), DataFrame.idxmax(), DataFrameGroupBy.idxmin(), DataFrameGroupBy.idxmax(), GroupBy.var(), GroupBy.std(), GroupBy.sem(), and DataFrameGroupBy.quantile() (GH46560)
- An errors.PerformanceWarning is now thrown when using string[pyarrow] dtype with methods that don't dispatch to pyarrow.compute methods (GH42613, GH46725)
- Added validate argument to DataFrame.join() (GH46622)
- Added numeric_only argument to Resampler.sum(), Resampler.prod(), Resampler.min(), Resampler.max(), Resampler.first(), and Resampler.last() (GH46442)
- times argument in ExponentialMovingWindow now accepts np.timedelta64 (GH47003)
- DataError now exposed in pandas.errors (GH27656)
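Two of the enhancements above can be sketched briefly, with made-up data: the new names argument of DataFrame.reset_index() and the step parameter of rolling windows:

```python
import pandas as pd

df = pd.DataFrame({"v": [1, 2, 3, 4]}, index=[10, 20, 30, 40])

# reset_index(names=...) names the former index column in one step.
out = df.reset_index(names="orig_idx")

# rolling(..., step=...) evaluates the window only every `step` rows,
# so the result has one row per step rather than per input row.
s = pd.Series(range(6))
means = s.rolling(window=2, step=2).mean()
```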
Notable bug fixes¶
These are bug fixes that might have notable behavior changes.
Using dropna=True
with groupby
transforms¶
A transform is an operation whose result has the same size as its input. When the
result is a DataFrame
or Series
, it is also required that the
index of the result matches that of the input. In pandas 1.4, using
DataFrameGroupBy.transform()
or SeriesGroupBy.transform()
with null
values in the groups and dropna=True
gave incorrect results. Demonstrated by the
examples below, the incorrect results either contained incorrect values, or the result
did not have the same index as the input.
In [4]: df = pd.DataFrame({'a': [1, 1, np.nan], 'b': [2, 3, 4]})
Old behavior:
In [3]: # Value in the last row should be np.nan
df.groupby('a', dropna=True).transform('sum')
Out[3]:
b
0 5
1 5
2 5
In [3]: # Should have one additional row with the value np.nan
df.groupby('a', dropna=True).transform(lambda x: x.sum())
Out[3]:
b
0 5
1 5
In [3]: # The value in the last row is np.nan interpreted as an integer
df.groupby('a', dropna=True).transform('ffill')
Out[3]:
b
0 2
1 3
2 -9223372036854775808
In [3]: # Should have one additional row with the value np.nan
df.groupby('a', dropna=True).transform(lambda x: x)
Out[3]:
b
0 2
1 3
New behavior:
In [5]: df.groupby('a', dropna=True).transform('sum')
Out[5]:
b
0 5.0
1 5.0
2 NaN
[3 rows x 1 columns]
In [6]: df.groupby('a', dropna=True).transform(lambda x: x.sum())
Out[6]:
b
0 5.0
1 5.0
2 NaN
[3 rows x 1 columns]
In [7]: df.groupby('a', dropna=True).transform('ffill')
Out[7]:
b
0 2.0
1 3.0
2 NaN
[3 rows x 1 columns]
In [8]: df.groupby('a', dropna=True).transform(lambda x: x)
Out[8]:
b
0 2.0
1 3.0
2 NaN
[3 rows x 1 columns]
Serializing tz-naive Timestamps with to_json() with iso_dates=True
¶
DataFrame.to_json()
, Series.to_json()
, and Index.to_json()
would incorrectly localize DatetimeArrays/DatetimeIndexes with tz-naive Timestamps
to UTC. (GH38760)
Note that this patch does not fix the localization of tz-aware Timestamps to UTC upon serialization. (Related issue GH12997)
Old Behavior
In [9]: index = pd.date_range(
...: start='2020-12-28 00:00:00',
...: end='2020-12-28 02:00:00',
...: freq='1H',
...: )
...:
In [10]: a = pd.Series(
....: data=range(3),
....: index=index,
....: )
....:
In [4]: a.to_json(date_format='iso')
Out[4]: '{"2020-12-28T00:00:00.000Z":0,"2020-12-28T01:00:00.000Z":1,"2020-12-28T02:00:00.000Z":2}'
In [5]: pd.read_json(a.to_json(date_format='iso'), typ="series").index == a.index
Out[5]: array([False, False, False])
New Behavior
In [11]: a.to_json(date_format='iso')
Out[11]: '{"2020-12-28T00:00:00.000":0,"2020-12-28T01:00:00.000":1,"2020-12-28T02:00:00.000":2}'
# Roundtripping now works
In [12]: pd.read_json(a.to_json(date_format='iso'), typ="series").index == a.index
Out[12]: array([ True, True, True])
Backwards incompatible API changes¶
read_xml now supports dtype
, converters
, and parse_dates
¶
Similar to other IO methods, pandas.read_xml()
now supports assigning specific dtypes to columns,
applying converter methods, and parsing dates (GH43567).
In [13]: xml_dates = """<?xml version='1.0' encoding='utf-8'?>
....: <data>
....: <row>
....: <shape>square</shape>
....: <degrees>00360</degrees>
....: <sides>4.0</sides>
....: <date>2020-01-01</date>
....: </row>
....: <row>
....: <shape>circle</shape>
....: <degrees>00360</degrees>
....: <sides/>
....: <date>2021-01-01</date>
....: </row>
....: <row>
....: <shape>triangle</shape>
....: <degrees>00180</degrees>
....: <sides>3.0</sides>
....: <date>2022-01-01</date>
....: </row>
....: </data>"""
....:
In [14]: df = pd.read_xml(
....: xml_dates,
....: dtype={'sides': 'Int64'},
....: converters={'degrees': str},
....: parse_dates=['date']
....: )
....:
In [15]: df
Out[15]:
shape degrees sides date
0 square 00360 4 2020-01-01
1 circle 00360 <NA> 2021-01-01
2 triangle 00180 3 2022-01-01
[3 rows x 4 columns]
In [16]: df.dtypes
Out[16]:
shape object
degrees object
sides Int64
date datetime64[ns]
Length: 4, dtype: object
read_xml now supports large XML using iterparse
¶
For very large XML files that can range from hundreds of megabytes to gigabytes, pandas.read_xml()
now supports parsing such sizeable files using lxml's iterparse and etree's iterparse,
which are memory-efficient methods to iterate through XML trees and extract specific elements
and attributes without holding the entire tree in memory (GH45442).
In [1]: df = pd.read_xml(
...:     "/path/to/downloaded/enwikisource-latest-pages-articles.xml",
...:     iterparse={"page": ["title", "ns", "id"]},
...: )
In [2]: df
Out[2]:
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Increased minimum versions for dependencies¶
Some minimum supported versions of dependencies were updated. If installed, we now require:
| Package | Minimum Version | Required | Changed |
|---|---|---|---|
| mypy (dev) | 0.950 | | X |
| beautifulsoup4 | 4.9.3 | | X |
| blosc | 1.21.0 | | X |
| bottleneck | 1.3.2 | | X |
| fsspec | 2021.05.0 | | X |
| hypothesis | 6.13.0 | | X |
| gcsfs | 2021.05.0 | | X |
| jinja2 | 3.0.0 | | X |
| lxml | 4.6.3 | | X |
| numba | 0.53.1 | | X |
| numexpr | 2.7.3 | | X |
| openpyxl | 3.0.7 | | X |
| pandas-gbq | 0.15.0 | | X |
| psycopg2 | 2.8.6 | | X |
| pymysql | 1.0.2 | | X |
| pyreadstat | 1.1.2 | | X |
| pyxlsb | 1.0.8 | | X |
| s3fs | 2021.05.0 | | X |
| scipy | 1.7.1 | | X |
| sqlalchemy | 1.4.16 | | X |
| tabulate | 0.8.9 | | X |
| xarray | 0.19.0 | | X |
| xlsxwriter | 1.4.3 | | X |
For optional libraries the general recommendation is to use the latest version. The following table lists the lowest version per library that is currently being tested throughout the development of pandas. Optional libraries below the lowest tested version may still work, but are not considered supported.
| Package | Minimum Version | Changed |
|---|---|---|
See Dependencies and Optional dependencies for more.
Other API changes¶
- BigQuery I/O methods read_gbq() and DataFrame.to_gbq() default to auth_local_webserver = True. Google has deprecated the auth_local_webserver = False "out of band" (copy-paste) flow. The auth_local_webserver = False option is planned to stop working in October 2022. (GH46312)
Deprecations¶
In a future version, integer slicing on a Series
with a Int64Index
or RangeIndex
will be treated as label-based, not positional. This will make the behavior consistent with other Series.__getitem__()
and Series.__setitem__()
behaviors (GH45162).
For example:
In [17]: ser = pd.Series([1, 2, 3, 4, 5], index=[2, 3, 5, 7, 11])
In the old behavior, ser[2:4]
treats the slice as positional:
Old behavior:
In [3]: ser[2:4]
Out[3]:
5 3
7 4
dtype: int64
In a future version, this will be treated as label-based:
Future behavior:
In [4]: ser.loc[2:4]
Out[4]:
2 1
3 2
dtype: int64
To retain the old behavior, use series.iloc[i:j]
. To get the future behavior,
use series.loc[i:j]
.
Slicing on a DataFrame
will not be affected.
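The recommended replacements can be written out explicitly; a small sketch using the series from the example above:

```python
import pandas as pd

ser = pd.Series([1, 2, 3, 4, 5], index=[2, 3, 5, 7, 11])

# Positional slice: rows at positions 2 and 3 (old behavior of ser[2:4]).
pos = ser.iloc[2:4]   # values [3, 4]

# Label-based slice: labels 2 through 4 inclusive (future behavior).
lab = ser.loc[2:4]    # values [1, 2]
```

Note that .loc slicing includes both endpoints, unlike positional slicing, which is half-open.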
ExcelWriter
attributes¶
All attributes of ExcelWriter
were previously documented as not
public. However some third party Excel engines documented accessing
ExcelWriter.book
or ExcelWriter.sheets
, and users were utilizing these
and possibly other attributes. Previously these attributes were not safe to use;
e.g. modifications to ExcelWriter.book
would not update ExcelWriter.sheets
and vice versa. In order to support this, pandas has made some attributes public
and improved their implementations so that they may now be safely used. (GH45572)
The following attributes are now public and considered safe to access.
book
check_extension
close
date_format
datetime_format
engine
if_sheet_exists
sheets
supported_extensions
The following attributes have been deprecated. They now raise a FutureWarning
when accessed and will be removed in a future version. Users should be aware
that their usage is considered unsafe, and can lead to unexpected results.
cur_sheet
handles
path
save
write_cells
See the documentation of ExcelWriter
for further details.
Using group_keys
with transformers in GroupBy.apply()
¶
In previous versions of pandas, if it was inferred that the function passed to
GroupBy.apply()
was a transformer (i.e. the resulting index was equal to
the input index), the group_keys
argument of DataFrame.groupby()
and
Series.groupby()
was ignored and the group keys would never be added to
the index of the result. In the future, the group keys will be added to the index
when the user specifies group_keys=True
.
As group_keys=True
is the default value of DataFrame.groupby()
and
Series.groupby()
, not specifying group_keys
with a transformer will
raise a FutureWarning
. This can be silenced and the previous behavior
retained by specifying group_keys=False
.
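The difference can be sketched with a toy transformer (an identity function), using made-up data; selecting the value columns keeps the grouping column out of the applied frame:

```python
import pandas as pd

df = pd.DataFrame({"g": [1, 1, 2], "v": [10, 20, 30]})

# group_keys=True: the group keys are prepended to the result index.
keyed = df.groupby("g", group_keys=True)[["v"]].apply(lambda x: x)

# group_keys=False: a transformer's result keeps the original index.
flat = df.groupby("g", group_keys=False)[["v"]].apply(lambda x: x)
```

Passing either value explicitly also silences the FutureWarning described above.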
Try operating inplace when setting values with loc
and iloc
¶
Most of the time setting values with frame.iloc
attempts to set values
in-place, only falling back to inserting a new array if necessary. In the past,
setting entire columns has been an exception to this rule:
In [18]: values = np.arange(4).reshape(2, 2)
In [19]: df = pd.DataFrame(values)
In [20]: ser = df[0]
Old behavior:
In [3]: df.iloc[:, 0] = np.array([10, 11])
In [4]: ser
Out[4]:
0 0
1 2
Name: 0, dtype: int64
This behavior is deprecated. In a future version, setting an entire column with iloc will attempt to operate inplace.
Future behavior:
In [3]: df.iloc[:, 0] = np.array([10, 11])
In [4]: ser
Out[4]:
0 10
1 11
Name: 0, dtype: int64
To get the old behavior, use DataFrame.__setitem__()
directly:
Future behavior:
In [5]: df[0] = np.array([21, 31])
In [4]: ser
Out[4]:
0 10
1 11
Name: 0, dtype: int64
In the case where df.columns
is not unique, use DataFrame.isetitem()
:
Future behavior:
In [5]: df.columns = ["A", "A"]
In [5]: df.isetitem(0, np.array([21, 31]))
In [4]: ser
Out[4]:
0 10
1 11
Name: 0, dtype: int64
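A compact sketch of the duplicate-columns case, with made-up data: label-based assignment would hit every matching column, while DataFrame.isetitem() targets a single column by position:

```python
import numpy as np
import pandas as pd

# Two columns that share the label "A".
df = pd.DataFrame(np.arange(4).reshape(2, 2), columns=["A", "A"])

# df["A"] = ... would assign to BOTH columns labeled "A";
# isetitem(0, ...) replaces only the column at position 0.
df.isetitem(0, np.array([21, 31]))
```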
numeric_only
default value¶
Across the DataFrame and DataFrameGroupBy operations such as
min
, sum
, and idxmax
, the default
value of the numeric_only
argument, if it exists at all, was inconsistent.
Furthermore, operations with the default value None
can lead to surprising
results. (GH46560)
In [1]: df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
In [2]: # Reading the next line without knowing the contents of df, one would
# expect the result to contain the products for both columns a and b.
df[["a", "b"]].prod()
Out[2]:
a 2
dtype: int64
To avoid this behavior, specifying the value numeric_only=None
has been
deprecated, and will be removed in a future version of pandas. In the future,
all operations with a numeric_only
argument will default to False
. Users
should either call the operation only with columns that can be operated on, or
specify numeric_only=True
to operate only on Boolean, integer, and float columns.
In order to support the transition to the new behavior, the following methods have
gained the numeric_only
argument.
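The two recommended spellings of the example above, made explicit:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

# Either restrict the reduction to numeric columns explicitly...
num = df.prod(numeric_only=True)

# ...or select only the columns the operation should act on.
sel = df[["a"]].prod()
```

Both forms make the intent visible in the code instead of relying on silent column dropping.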
Other Deprecations¶
- Deprecated the keyword line_terminator in DataFrame.to_csv() and Series.to_csv(), use lineterminator instead; this is for consistency with read_csv() and the standard library 'csv' module (GH9568)
- Deprecated behavior of SparseArray.astype(), Series.astype(), and DataFrame.astype() with SparseDtype when passing a non-sparse dtype. In a future version, this will cast to that non-sparse dtype instead of wrapping it in a SparseDtype (GH34457)
- Deprecated behavior of DatetimeIndex.intersection() and DatetimeIndex.symmetric_difference() (union behavior was already deprecated in version 1.3.0) with mixed time zones; in a future version both will be cast to UTC instead of object dtype (GH39328, GH45357)
- Deprecated DataFrame.iteritems(), Series.iteritems(), HDFStore.iteritems() in favor of DataFrame.items(), Series.items(), HDFStore.items() (GH45321)
- Deprecated Series.is_monotonic() and Index.is_monotonic() in favor of Series.is_monotonic_increasing() and Index.is_monotonic_increasing() (GH45422, GH21335)
- Deprecated behavior of DatetimeIndex.astype(), TimedeltaIndex.astype(), PeriodIndex.astype() when converting to an integer dtype other than int64. In a future version, these will convert to exactly the specified dtype (instead of always int64) and will raise if the conversion overflows (GH45034)
- Deprecated the __array_wrap__ method of DataFrame and Series, rely on standard numpy ufuncs instead (GH45451)
- Deprecated treating float-dtype data as wall-times when passed with a timezone to Series or DatetimeIndex (GH45573)
- Deprecated the behavior of Series.fillna() and DataFrame.fillna() with timedelta64[ns] dtype and incompatible fill value; in a future version this will cast to a common dtype (usually object) instead of raising, matching the behavior of other dtypes (GH45746)
- Deprecated the warn parameter in infer_freq() (GH45947)
- Deprecated allowing non-keyword arguments in ExtensionArray.argsort() (GH46134)
- Deprecated treating all-bool object-dtype columns as bool-like in DataFrame.any() and DataFrame.all() with bool_only=True, explicitly cast to bool instead (GH46188)
- Deprecated behavior of DataFrame.quantile(): the numeric_only argument will default to False, including datetime/timedelta columns in the result (GH7308)
- Deprecated Timedelta.freq and Timedelta.is_populated (GH46430)
- Deprecated Timedelta.delta (GH46476)
- Deprecated passing arguments as positional in DataFrame.any() and Series.any() (GH44802)
- Deprecated the closed argument in interval_range() in favor of the inclusive argument; in a future version passing closed will raise (GH40245)
- Deprecated the methods DataFrame.mad(), Series.mad(), and the corresponding groupby methods (GH11787)
- Deprecated positional arguments to Index.join() except for other, use keyword-only arguments instead of positional arguments (GH46518)
- Deprecated indexing on a timezone-naive DatetimeIndex using a string representing a timezone-aware datetime (GH46903, GH36148)
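The first deprecation above is a one-word rename; a small sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# lineterminator replaces the deprecated line_terminator keyword.
csv_text = df.to_csv(index=False, lineterminator="\r\n")
```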
Performance improvements¶
- Performance improvement in DataFrame.corrwith() for column-wise (axis=0) Pearson and Spearman correlation when other is a Series (GH46174)
- Performance improvement in GroupBy.transform() for some user-defined DataFrame -> Series functions (GH45387)
- Performance improvement in DataFrame.duplicated() when subset consists of only one column (GH45236)
- Performance improvement in GroupBy.diff() (GH16706)
- Performance improvement in GroupBy.transform() when broadcasting values for user-defined functions (GH45708)
- Performance improvement in GroupBy.transform() for user-defined functions when only a single group exists (GH44977)
- Performance improvement in DataFrame.loc() and Series.loc() for tuple-based indexing of a MultiIndex (GH45681, GH46040, GH46330)
- Performance improvement in MultiIndex.values when the MultiIndex contains levels of type DatetimeIndex, TimedeltaIndex or ExtensionDtypes (GH46288)
- Performance improvement in merge() when left and/or right are empty (GH45838)
- Performance improvement in DataFrame.join() when left and/or right are empty (GH46015)
- Performance improvement in DataFrame.reindex() and Series.reindex() when target is a MultiIndex (GH46235)
- Performance improvement when setting values in a pyarrow backed string array (GH46400)
- Performance improvement in factorize() (GH46109)
- Performance improvement in DataFrame and Series constructors for extension dtype scalars (GH45854)
Bug fixes¶
Categorical¶
Datetimelike¶
- Bug in DataFrame.quantile() with datetime-like dtypes and no rows incorrectly returning float64 dtype instead of retaining datetime-like dtype (GH41544)
- Bug in to_datetime() with sequences of np.str_ objects incorrectly raising (GH32264)
- Bug in Timestamp construction when passing datetime components as positional arguments and tzinfo as a keyword argument incorrectly raising (GH31929)
- Bug in Index.astype() when casting from object dtype to timedelta64[ns] dtype incorrectly casting np.datetime64("NaT") values to np.timedelta64("NaT") instead of raising (GH45722)
- Bug in SeriesGroupBy.value_counts() index when passing categorical column (GH44324)
- Bug in DatetimeIndex.tz_localize() localizing to UTC failing to make a copy of the underlying data (GH46460)
- Bug in DatetimeIndex.resolution() incorrectly returning "day" instead of "nanosecond" for nanosecond-resolution indexes (GH46903)
Timedelta¶
Time Zones¶
Numeric¶
- Bug in operations with array-likes with dtype="boolean" and NA incorrectly altering the array in-place (GH45421)
- Bug in division, pow and mod operations on array-likes with dtype="boolean" not being like their np.bool_ counterparts (GH46063)
- Bug in multiplying a Series with IntegerDtype or FloatingDtype by an array-like with timedelta64[ns] dtype incorrectly raising (GH45622)
Conversion¶
- Bug in DataFrame.astype() not preserving subclasses (GH40810)
- Bug in constructing a Series from a float-containing list or a floating-dtype ndarray-like (e.g. dask.Array) and an integer dtype raising instead of casting like we would with an np.ndarray (GH40110)
- Bug in Float64Index.astype() to unsigned integer dtype incorrectly casting to np.int64 dtype (GH45309)
- Bug in Series.astype() and DataFrame.astype() from floating dtype to unsigned integer dtype failing to raise in the presence of negative values (GH45151)
- Bug in array() with FloatingDtype and values containing float-castable strings incorrectly raising (GH45424)
- Bug when comparing string and datetime64ns objects causing OverflowError exception (GH45506)
- Bug in metaclass of generic abstract dtypes causing DataFrame.apply() and Series.apply() to raise for the built-in function type (GH46684)
- Bug in DataFrame.to_dict() for orient="list" or orient="index" not returning native types (GH46751)
Strings¶
- Bug in str.startswith() and str.endswith() when using another Series as the pat parameter; now raises TypeError (GH3485)
Interval¶
- Bug in IntervalArray.__setitem__() when setting np.nan into an integer-backed array raising ValueError instead of TypeError (GH45484)
Indexing¶
- Bug in loc.__getitem__() with a list of keys causing an internal inconsistency that could lead to a disconnect between frame.at[x, y] vs frame[y].loc[x] (GH22372)
- Bug in DataFrame.iloc() where indexing a single row on a DataFrame with a single ExtensionDtype column gave a copy instead of a view on the underlying data (GH45241)
- Bug in Series.align() not creating a MultiIndex with the union of levels when both MultiIndexes' intersections are identical (GH45224)
- Bug in setting a NA value (None or np.nan) into a Series with int-based IntervalDtype incorrectly casting to object dtype instead of a float-based IntervalDtype (GH45568)
- Bug in indexing setting values into an ExtensionDtype column with df.iloc[:, i] = values with values having the same dtype as df.iloc[:, i] incorrectly inserting a new array instead of setting in-place (GH33457)
- Bug in Series.__setitem__() with a non-integer Index when using an integer key to set a value that cannot be set inplace where a ValueError was raised instead of casting to a common dtype (GH45070)
- Bug in Series.__setitem__() when setting incompatible values into a PeriodDtype or IntervalDtype Series raising when indexing with a boolean mask but coercing when indexing with otherwise-equivalent indexers; these now consistently coerce, along with Series.mask() and Series.where() (GH45768)
- Bug in DataFrame.where() with multiple columns with datetime-like dtypes failing to downcast results consistent with other dtypes (GH45837)
- Bug in Series.loc.__setitem__() and Series.loc.__getitem__() not raising when using multiple keys without using a MultiIndex (GH13831)
- Bug in Index.reindex() raising AssertionError when level was specified but no MultiIndex was given; level is ignored now (GH35132)
- Bug when setting a value too large for a Series dtype failing to coerce to a common type (GH26049, GH32878)
- Bug in loc.__setitem__() treating range keys as positional instead of label-based (GH45479)
- Bug in Series.__setitem__() when setting boolean dtype values containing NA incorrectly raising instead of casting to boolean dtype (GH45462)
- Bug in Series.__setitem__() where setting NA into a numeric-dtype Series would incorrectly upcast to object-dtype rather than treating the value as np.nan (GH44199)
- Bug in Series.__setitem__() with datetime64[ns] dtype, an all-False boolean mask, and an incompatible value incorrectly casting to object instead of retaining datetime64[ns] dtype (GH45967)
- Bug in Index.__getitem__() raising ValueError when indexer is from boolean dtype with NA (GH45806)
- Bug in Series.mask() with inplace=True or setting values with a boolean mask with small integer dtypes incorrectly raising (GH45750)
- Bug in DataFrame.mask() with inplace=True and ExtensionDtype columns incorrectly raising (GH45577)
- Bug in getting a column from a DataFrame with an object-dtype row index with datetime-like values: the resulting Series now preserves the exact object-dtype Index from the parent DataFrame (GH42950)
- Bug in DataFrame.__getattribute__() raising AttributeError if columns have "string" dtype (GH46185)
- Bug in indexing on a DatetimeIndex with a np.str_ key incorrectly raising (GH45580)
- Bug in CategoricalIndex.get_indexer() when the index contains NaN values, resulting in elements that are in target but not present in the index being mapped to the index of the NaN element, instead of -1 (GH45361)
- Bug in setting large integer values into Series with float32 or float16 dtype incorrectly altering these values instead of coercing to float64 dtype (GH45844)
- Bug in Series.asof() and DataFrame.asof() incorrectly casting bool-dtype results to float64 dtype (GH16063)
- Bug in NDFrame.xs(), DataFrame.iterrows(), DataFrame.loc() and DataFrame.iloc() not always propagating metadata (GH28283)
Missing¶
- Bug in Series.fillna() and DataFrame.fillna() with the downcast keyword not being respected in some cases where there are no NA values present (GH45423)
- Bug in Series.fillna() and DataFrame.fillna() with IntervalDtype and incompatible value raising instead of casting to a common (usually object) dtype (GH45796)
- Bug in DataFrame.interpolate() with object-dtype column not returning a copy with inplace=False (GH45791)
- Bug in DataFrame.dropna() allowing both how and thresh, which are incompatible arguments, to be set (GH46575)
MultiIndex¶
- Bug in DataFrame.loc() returning empty result when slicing a MultiIndex with a negative step size and non-null start/stop values (GH46156)
- Bug in DataFrame.loc() raising when slicing a MultiIndex with a negative step size other than -1 (GH46156)
- Bug in DataFrame.loc() raising when slicing a MultiIndex with a negative step size and slicing a non-int labeled index level (GH46156)
- Bug in Series.to_numpy() where multiindexed Series could not be converted to numpy arrays when an na_value was supplied (GH45774)
- Bug in MultiIndex.equals not being commutative when only one side has extension array dtype (GH46026)
- Bug in MultiIndex.from_tuples() being unable to construct an Index of empty tuples (GH45608)
I/O¶
- Bug in DataFrame.to_stata() where no error is raised if the DataFrame contains -np.inf (GH45350)
- Bug in read_excel() resulting in an infinite loop with certain skiprows callables (GH45585)
- Bug in DataFrame.info() where a new line at the end of the output is omitted when called on an empty DataFrame (GH45494)
- Bug in read_csv() not recognizing line break for on_bad_lines="warn" for engine="c" (GH41710)
- Bug in DataFrame.to_csv() not respecting float_format for Float64 dtype (GH45991)
- Bug in read_csv() not respecting a specified converter to index columns in all cases (GH40589)
- Bug in read_parquet() when engine="pyarrow" which caused partial write to disk when a column of unsupported datatype was passed (GH44914)
- Bug in DataFrame.to_excel() and ExcelWriter raising when writing an empty DataFrame to a .ods file (GH45793)
- Bug in read_html() where elements surrounding <br> were joined without a space between them (GH29528)
- Bug in Parquet roundtrip for Interval dtype with datetime64[ns] subtype (GH45881)
- Bug in read_excel() when reading a .ods file with newlines between xml elements (GH45598)
- Bug in read_parquet() when engine="fastparquet" where the file was not closed on error (GH46555)
- to_html() now excludes the border attribute from <table> elements when the border keyword is set to False.
Period¶
- Bug in subtraction of Period from PeriodArray returning wrong results (GH45999)
- Bug in Period.strftime() and PeriodIndex.strftime(): directives %l and %u were giving wrong results (GH46252)
- Bug in inferring an incorrect freq when passing a string to Period with microseconds that are a multiple of 1000 (GH46811)
- Bug in constructing a Period from a Timestamp or np.datetime64 object with non-zero nanoseconds and freq="ns" incorrectly truncating the nanoseconds (GH46811)
Plotting¶
- Bug in DataFrame.plot.barh() that prevented labeling the x-axis and xlabel updating the y-axis label (GH45144)
- Bug in DataFrame.plot.box() that prevented labeling the x-axis (GH45463)
- Bug in DataFrame.boxplot() that prevented passing in xlabel and ylabel (GH45463)
- Bug in DataFrame.boxplot() that prevented specifying vert=False (GH36918)
- Bug in DataFrame.plot.scatter() that prevented specifying norm (GH45809)
- The function DataFrame.plot.scatter() now accepts color as an alias for c and size as an alias for s, for consistency with other plotting functions (GH44670)
- Fix showing "None" as ylabel in Series.plot() when not setting ylabel (GH46129)
Groupby/resample/rolling¶
- Bug in DataFrame.resample() ignoring closed="right" on TimedeltaIndex (GH45414)
- Bug in DataFrameGroupBy.transform() failing when func="size" and the input DataFrame has multiple columns (GH27469)
- Bug in DataFrameGroupBy.size() and DataFrameGroupBy.transform() with func="size" producing incorrect results when axis=1 (GH45715)
- Bug in ExponentialMovingWindow.mean() with axis=1 and engine='numba' when the DataFrame has more columns than rows (GH46086)
- Bug where using engine="numba" would return the same jitted function when modifying engine_kwargs (GH46086)
- Bug in DataFrameGroupby.transform() failing when axis=1 and func is "first" or "last" (GH45986)
- Bug in DataFrameGroupby.cumsum() with skipna=False giving incorrect results (GH46216)
- Bug in GroupBy.cumsum() with timedelta64[ns] dtype failing to recognize NaT as a null value (GH46216)
- Bug in GroupBy.cummin() and GroupBy.cummax() with nullable dtypes incorrectly altering the original data in place (GH46220)
- Bug in GroupBy.cummax() with int64 dtype with leading value being the smallest possible int64 (GH46382)
- Bug in GroupBy.max() with empty groups and uint64 dtype incorrectly raising RuntimeError (GH46408)
- Bug in GroupBy.apply() failing when func was a string and args or kwargs were supplied (GH46479)
- Bug in SeriesGroupBy.apply() incorrectly naming its result when there was a unique group (GH46369)
- Bug in Rolling.sum() and Rolling.mean() giving incorrect results with a window of identical values (GH42064, GH46431)
- Bug in Rolling.var() and Rolling.std() giving non-zero results with a window of identical values (GH42064)
- Bug in Rolling.skew() and Rolling.kurt() giving NaN with a window of identical values (GH30993)
- Bug in Rolling.var() segfaulting when calculating weighted variance when window size was larger than data size (GH46760)
- Bug in Grouper.__repr__() where dropna was not included; now it is (GH46754)
- Bug in DataFrame.rolling() giving ValueError when center=True, axis=1 and win_type is specified (GH46135)
- Bug in DataFrameGroupBy.describe() and SeriesGroupBy.describe() producing inconsistent results for empty datasets (GH41575)
Reshaping¶
- Bug in concat() between a Series with integer dtype and another with CategoricalDtype with integer categories and containing NaN values casting to object dtype instead of float64 (GH45359)
- Bug in get_dummies() that selected object and categorical dtypes but not string (GH44965)
- Bug in DataFrame.align() when aligning a MultiIndex to a Series with another MultiIndex (GH46001)
- Bug in concatenation with IntegerDtype or FloatingDtype arrays where the resulting dtype did not mirror the behavior of the non-nullable dtypes (GH46379)
- Bug in concat() with identical key leading to error when indexing MultiIndex (GH46519)
- Bug in DataFrame.join() with a list when using suffixes to join DataFrames with duplicate column names (GH46396)
- Bug in DataFrame.pivot_table() with sort=False resulting in a sorted index (GH17041)
Sparse¶
- Bug in Series.where() and DataFrame.where() with SparseDtype failing to retain the array's fill_value (GH45691)
ExtensionArray¶
- Bug in IntegerArray.searchsorted() and FloatingArray.searchsorted() returning inconsistent results when acting on np.nan (GH45255)
Styler¶
Metadata¶
- Fixed metadata propagation in DataFrame.melt() (GH28283)
- Fixed metadata propagation in DataFrame.explode() (GH28283)