What’s new in 1.5.0 (September 19, 2022)#
These are the changes in pandas 1.5.0. See Release notes for a full changelog including other versions of pandas.
Enhancements#
pandas-stubs#
The pandas-stubs library is now supported by the pandas development team, providing type stubs for the pandas API. Please visit https://github.com/pandas-dev/pandas-stubs for more information.
We thank VirtusLab and Microsoft for their initial, significant contributions to pandas-stubs.
Native PyArrow-backed ExtensionArray#
With pyarrow installed, users can now create pandas objects that are backed by a pyarrow.ChunkedArray and pyarrow.DataType.
The dtype argument can accept a string of a pyarrow data type with pyarrow in brackets, e.g. "int64[pyarrow]", or, for pyarrow data types that take parameters, an ArrowDtype initialized with a pyarrow.DataType.
In [1]: import pyarrow as pa
In [2]: ser_float = pd.Series([1.0, 2.0, None], dtype="float32[pyarrow]")
In [3]: ser_float
Out[3]:
0 1.0
1 2.0
2 <NA>
Length: 3, dtype: float[pyarrow]
In [4]: list_of_int_type = pd.ArrowDtype(pa.list_(pa.int64()))
In [5]: ser_list = pd.Series([[1, 2], [3, None]], dtype=list_of_int_type)
In [6]: ser_list
Out[6]:
0 [1 2]
1 [3 None]
Length: 2, dtype: list<item: int64>[pyarrow]
In [7]: ser_list.take([1, 0])
Out[7]:
1 [3 None]
0 [1 2]
Length: 2, dtype: list<item: int64>[pyarrow]
In [8]: ser_float * 5
Out[8]:
0 5.0
1 10.0
2 <NA>
Length: 3, dtype: float[pyarrow]
In [9]: ser_float.mean()
Out[9]: 1.5
In [10]: ser_float.dropna()
Out[10]:
0 1.0
1 2.0
Length: 2, dtype: float[pyarrow]
Most operations are supported and have been implemented using pyarrow compute functions. We recommend installing the latest version of PyArrow to access the most recently implemented compute functions.
Warning
This feature is experimental, and the API can change in a future release without warning.
DataFrame interchange protocol implementation#
pandas now implements the DataFrame interchange API spec. See the full details on the API at https://data-apis.org/dataframe-protocol/latest/index.html
The protocol consists of two parts:
- A new method DataFrame.__dataframe__() which produces the interchange object. It effectively “exports” the pandas dataframe as an interchange object, so any other library which has the protocol implemented can “import” that dataframe without knowing anything about the producer except that it makes an interchange object.
- A new function pandas.api.interchange.from_dataframe() which can take an arbitrary interchange object from any conformant library and construct a pandas DataFrame out of it.
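The two halves of the protocol can be sketched as a simple roundtrip within pandas itself; a third-party consumer would call from_dataframe() the same way without knowing the producer was pandas:

```python
import pandas as pd
from pandas.api.interchange import from_dataframe

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# "Export" the DataFrame as an interchange object.
interchange_obj = df.__dataframe__()

# "Import" it back; any conformant library can consume this object.
roundtrip = from_dataframe(interchange_obj)
```

In practice the producer and consumer are different libraries (e.g. a plotting library accepting any protocol-conforming dataframe).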
Styler#
The most notable development is the new method Styler.concat(), which allows adding customised footer rows to visualise additional calculations on the data, e.g. totals and counts (GH43875, GH46186).
Additionally there is an alternative output method Styler.to_string(), which allows using the Styler’s formatting methods to create, for example, CSVs (GH44502).
A new feature Styler.relabel_index() is also made available to provide full customisation of the display of index or column headers (GH47864).
Minor feature improvements are:
- Adding the ability to render border and border-{side} CSS properties in Excel (GH42276)
- Making keyword arguments consistent: Styler.highlight_null() now accepts color and deprecates null_color, although this remains backwards compatible (GH45907)
Control of index with group_keys in DataFrame.resample()#
The argument group_keys has been added to the method DataFrame.resample(). As with DataFrame.groupby(), this argument controls whether each group is added to the index in the resample when Resampler.apply() is used.
Warning
Not specifying the group_keys argument will retain the previous behavior and emit a warning if the result will change by specifying group_keys=False. In a future version of pandas, not specifying group_keys will default to the same behavior as group_keys=False.
In [11]: df = pd.DataFrame(
....: {'a': range(6)},
....: index=pd.date_range("2021-01-01", periods=6, freq="8H")
....: )
....:
In [12]: df.resample("D", group_keys=True).apply(lambda x: x)
Out[12]:
a
2021-01-01 2021-01-01 00:00:00 0
2021-01-01 08:00:00 1
2021-01-01 16:00:00 2
2021-01-02 2021-01-02 00:00:00 3
2021-01-02 08:00:00 4
2021-01-02 16:00:00 5
[6 rows x 1 columns]
In [13]: df.resample("D", group_keys=False).apply(lambda x: x)
Out[13]:
a
2021-01-01 00:00:00 0
2021-01-01 08:00:00 1
2021-01-01 16:00:00 2
2021-01-02 00:00:00 3
2021-01-02 08:00:00 4
2021-01-02 16:00:00 5
[6 rows x 1 columns]
Previously, the resulting index would depend upon the values returned by apply, as seen in the following example.
In [1]: # pandas 1.3
In [2]: df.resample("D").apply(lambda x: x)
Out[2]:
a
2021-01-01 00:00:00 0
2021-01-01 08:00:00 1
2021-01-01 16:00:00 2
2021-01-02 00:00:00 3
2021-01-02 08:00:00 4
2021-01-02 16:00:00 5
In [3]: df.resample("D").apply(lambda x: x.reset_index())
Out[3]:
index a
2021-01-01 0 2021-01-01 00:00:00 0
1 2021-01-01 08:00:00 1
2 2021-01-01 16:00:00 2
2021-01-02 0 2021-01-02 00:00:00 3
1 2021-01-02 08:00:00 4
2 2021-01-02 16:00:00 5
from_dummies#
Added new function from_dummies() to convert a dummy coded DataFrame into a categorical DataFrame.
In [14]: import pandas as pd
In [15]: df = pd.DataFrame({"col1_a": [1, 0, 1], "col1_b": [0, 1, 0],
....: "col2_a": [0, 1, 0], "col2_b": [1, 0, 0],
....: "col2_c": [0, 0, 1]})
....:
In [16]: pd.from_dummies(df, sep="_")
Out[16]:
col1 col2
0 a b
1 b a
2 a c
[3 rows x 2 columns]
Writing to ORC files#
The new method DataFrame.to_orc() allows writing to ORC files (GH43864). This functionality depends on the pyarrow library. For more details, see the IO docs on ORC.
Warning
It is highly recommended to install pyarrow using conda due to some issues that can occur with pyarrow.
- to_orc() requires pyarrow>=7.0.0.
- to_orc() is not supported on Windows yet; you can find valid environments on install optional dependencies.
- For supported dtypes please refer to supported ORC features in Arrow.
- Currently timezones in datetime columns are not preserved when a dataframe is converted into ORC files.
df = pd.DataFrame(data={"col1": [1, 2], "col2": [3, 4]})
df.to_orc("./out.orc")
Reading directly from TAR archives#
I/O methods like read_csv() or DataFrame.to_json() now allow reading and writing directly on TAR archives (GH44787).
df = pd.read_csv("./movement.tar.gz")
# ...
df.to_csv("./out.tar.gz")
This supports .tar, .tar.gz, .tar.bz2 and .tar.xz archives. The compression method is inferred from the filename. If the compression method cannot be inferred, use the compression argument:
df = pd.read_csv(some_file_obj, compression={"method": "tar", "mode": "r:gz"}) # noqa F821
(mode being one of tarfile.open’s modes: https://docs.python.org/3/library/tarfile.html#tarfile.open)
read_xml now supports dtype, converters, and parse_dates#
Similar to other IO methods, pandas.read_xml() now supports assigning specific dtypes to columns, applying converter methods, and parsing dates (GH43567).
In [17]: xml_dates = """<?xml version='1.0' encoding='utf-8'?>
....: <data>
....: <row>
....: <shape>square</shape>
....: <degrees>00360</degrees>
....: <sides>4.0</sides>
....: <date>2020-01-01</date>
....: </row>
....: <row>
....: <shape>circle</shape>
....: <degrees>00360</degrees>
....: <sides/>
....: <date>2021-01-01</date>
....: </row>
....: <row>
....: <shape>triangle</shape>
....: <degrees>00180</degrees>
....: <sides>3.0</sides>
....: <date>2022-01-01</date>
....: </row>
....: </data>"""
....:
In [18]: df = pd.read_xml(
....: xml_dates,
....: dtype={'sides': 'Int64'},
....: converters={'degrees': str},
....: parse_dates=['date']
....: )
....:
In [19]: df
Out[19]:
shape degrees sides date
0 square 00360 4 2020-01-01
1 circle 00360 <NA> 2021-01-01
2 triangle 00180 3 2022-01-01
[3 rows x 4 columns]
In [20]: df.dtypes
Out[20]:
shape object
degrees object
sides Int64
date datetime64[ns]
Length: 4, dtype: object
read_xml now supports large XML using iterparse#
For very large XML files that can range in hundreds of megabytes to gigabytes, pandas.read_xml() now supports parsing such sizeable files using lxml’s iterparse and etree’s iterparse, which are memory-efficient methods to iterate through XML trees and extract specific elements and attributes without holding the entire tree in memory (GH45442).
In [1]: df = pd.read_xml(
   ...:     "/path/to/downloaded/enwikisource-latest-pages-articles.xml",
   ...:     iterparse={"page": ["title", "ns", "id"]},
   ...: )
In [2]: df
Out[2]:
title ns id
0 Gettysburg Address 0 21450
1 Main Page 0 42950
2 Declaration by United Nations 0 8435
3 Constitution of the United States of America 0 8435
4 Declaration of Independence (Israel) 0 17858
... ... ... ...
3578760 Page:Black cat 1897 07 v2 n10.pdf/17 104 219649
3578761 Page:Black cat 1897 07 v2 n10.pdf/43 104 219649
3578762 Page:Black cat 1897 07 v2 n10.pdf/44 104 219649
3578763 The History of Tom Jones, a Foundling/Book IX 0 12084291
3578764 Page:Shakespeare of Stratford (1926) Yale.djvu/91 104 21450
[3578765 rows x 3 columns]
Copy on Write#
A new feature copy_on_write
was added (GH46958). Copy on write ensures that
any DataFrame or Series derived from another in any way always behaves as a copy.
Copy on write disallows updating any other object than the object the method
was applied to.
Copy on write can be enabled through:
pd.set_option("mode.copy_on_write", True)
pd.options.mode.copy_on_write = True
Alternatively, copy on write can be enabled locally through:
with pd.option_context("mode.copy_on_write", True):
...
Without copy on write, the parent DataFrame is updated when updating a child DataFrame that was derived from this DataFrame.
In [21]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": 1})
In [22]: view = df["foo"]
In [23]: view.iloc[0]
Out[23]: 1
In [24]: df
Out[24]:
foo bar
0 1 1
1 2 1
2 3 1
[3 rows x 2 columns]
With copy on write enabled, df won’t be updated anymore:
In [25]: with pd.option_context("mode.copy_on_write", True):
....: df = pd.DataFrame({"foo": [1, 2, 3], "bar": 1})
....: view = df["foo"]
....: view.iloc[0]
....: df
....:
A more detailed explanation can be found here.
Other enhancements#
- Series.map() now raises when arg is dict but na_action is not either None or 'ignore' (GH46588)
- MultiIndex.to_frame() now supports the argument allow_duplicates and raises on duplicate labels if it is missing or False (GH45245)
- StringArray now accepts array-likes containing nan-likes (None, np.nan) for the values parameter in its constructor in addition to strings and pandas.NA (GH40839)
- Improved the rendering of categories in CategoricalIndex (GH45218)
- DataFrame.plot() will now allow the subplots parameter to be a list of iterables specifying column groups, so that columns may be grouped together in the same subplot (GH29688)
- to_numeric() now preserves float64 arrays when downcasting would generate values not representable in float32 (GH43693)
- Series.reset_index() and DataFrame.reset_index() now support the argument allow_duplicates (GH44410)
- GroupBy.min() and GroupBy.max() now support Numba execution with the engine keyword (GH45428)
- read_csv() now supports defaultdict as a dtype parameter (GH41574)
- DataFrame.rolling() and Series.rolling() now support a step parameter with fixed-length windows (GH15354)
- Implemented a bool-dtype Index; passing a bool-dtype array-like to pd.Index will now retain bool dtype instead of casting to object (GH45061)
- Implemented a complex-dtype Index; passing a complex-dtype array-like to pd.Index will now retain complex dtype instead of casting to object (GH45845)
- Series and DataFrame with IntegerDtype now support bitwise operations (GH34463)
- Added milliseconds field support for DateOffset (GH43371)
- DataFrame.where() tries to maintain the dtype of the DataFrame if the fill value can be cast without loss of precision (GH45582)
- DataFrame.reset_index() now accepts a names argument which renames the index names (GH6878)
- concat() now raises when levels is given but keys is None (GH46653)
- concat() now raises when levels contains duplicate values (GH46653)
- Added numeric_only argument to DataFrame.corr(), DataFrame.corrwith(), DataFrame.cov(), DataFrame.idxmin(), DataFrame.idxmax(), DataFrameGroupBy.idxmin(), DataFrameGroupBy.idxmax(), GroupBy.var(), GroupBy.std(), GroupBy.sem(), and DataFrameGroupBy.quantile() (GH46560)
- An errors.PerformanceWarning is now thrown when using string[pyarrow] dtype with methods that don’t dispatch to pyarrow.compute methods (GH42613, GH46725)
- Added validate argument to DataFrame.join() (GH46622)
- Added numeric_only argument to Resampler.sum(), Resampler.prod(), Resampler.min(), Resampler.max(), Resampler.first(), and Resampler.last() (GH46442)
- times argument in ExponentialMovingWindow now accepts np.timedelta64 (GH47003)
- DataError, SpecificationError, SettingWithCopyError, SettingWithCopyWarning, NumExprClobberingError, UndefinedVariableError, IndexingError, PyperclipException, PyperclipWindowsException, CSSWarning, PossibleDataLossError, ClosedFileError, IncompatibilityWarning, AttributeConflictWarning, DatabaseError, PossiblePrecisionLoss, ValueLabelTypeMismatch, InvalidColumnName, and CategoricalConversionWarning are now exposed in pandas.errors (GH27656)
- Added check_like argument to testing.assert_series_equal() (GH47247)
- Added support for GroupBy.ohlc() for extension array dtypes (GH37493)
- Allow reading compressed SAS files with read_sas() (e.g., .sas7bdat.gz files)
- pandas.read_html() now supports extracting links from table cells (GH13141)
- DatetimeIndex.astype() now supports casting timezone-naive indexes to datetime64[s], datetime64[ms], and datetime64[us], and timezone-aware indexes to the corresponding datetime64[unit, tzname] dtypes (GH47579)
- Series reducers (e.g. min, max, sum, mean) will now successfully operate when the dtype is numeric and numeric_only=True is provided; previously this would raise a NotImplementedError (GH47500)
- RangeIndex.union() now can return a RangeIndex instead of an Int64Index if the resulting values are equally spaced (GH47557, GH43885)
- DataFrame.compare() now accepts an argument result_names to allow the user to specify the result’s names of both left and right DataFrame which are being compared. This is by default 'self' and 'other' (GH44354)
- DataFrame.quantile() gained a method argument that can accept table to evaluate multi-column quantiles (GH43881)
- Interval now supports checking whether one interval is contained by another interval (GH46613)
- Added copy keyword to Series.set_axis() and DataFrame.set_axis() to allow the user to set the axis on a new object without necessarily copying the underlying data (GH47932)
- The method ExtensionArray.factorize() accepts use_na_sentinel=False for determining how null values are to be treated (GH46601)
- The Dockerfile now installs a dedicated pandas-dev virtual environment for pandas development instead of using the base environment (GH48427)
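As one concrete illustration of the enhancements listed above, the new names argument of DataFrame.reset_index() renames the index as it is moved into a column:

```python
import pandas as pd

df = pd.DataFrame({"v": [10, 20]}, index=pd.Index([1, 2], name="old_name"))

# names= renames the former index column in one step, instead of a
# reset_index() followed by a rename().
out = df.reset_index(names="new_name")
```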
Notable bug fixes#
These are bug fixes that might have notable behavior changes.
Using dropna=True with groupby transforms#
A transform is an operation whose result has the same size as its input. When the
result is a DataFrame
or Series
, it is also required that the
index of the result matches that of the input. In pandas 1.4, using
DataFrameGroupBy.transform()
or SeriesGroupBy.transform()
with null
values in the groups and dropna=True
gave incorrect results. Demonstrated by the
examples below, the incorrect results either contained incorrect values, or the result
did not have the same index as the input.
In [26]: df = pd.DataFrame({'a': [1, 1, np.nan], 'b': [2, 3, 4]})
Old behavior:
In [3]: # Value in the last row should be np.nan
df.groupby('a', dropna=True).transform('sum')
Out[3]:
b
0 5
1 5
2 5
In [3]: # Should have one additional row with the value np.nan
df.groupby('a', dropna=True).transform(lambda x: x.sum())
Out[3]:
b
0 5
1 5
In [3]: # The value in the last row is np.nan interpreted as an integer
df.groupby('a', dropna=True).transform('ffill')
Out[3]:
b
0 2
1 3
2 -9223372036854775808
In [3]: # Should have one additional row with the value np.nan
df.groupby('a', dropna=True).transform(lambda x: x)
Out[3]:
b
0 2
1 3
New behavior:
In [27]: df.groupby('a', dropna=True).transform('sum')
Out[27]:
b
0 5.0
1 5.0
2 NaN
[3 rows x 1 columns]
In [28]: df.groupby('a', dropna=True).transform(lambda x: x.sum())
Out[28]:
b
0 5.0
1 5.0
2 NaN
[3 rows x 1 columns]
In [29]: df.groupby('a', dropna=True).transform('ffill')
Out[29]:
b
0 2.0
1 3.0
2 NaN
[3 rows x 1 columns]
In [30]: df.groupby('a', dropna=True).transform(lambda x: x)
Out[30]:
b
0 2.0
1 3.0
2 NaN
[3 rows x 1 columns]
Serializing tz-naive Timestamps with to_json() with iso_dates=True#
DataFrame.to_json(), Series.to_json(), and Index.to_json() would incorrectly localize DatetimeArrays/DatetimeIndexes with tz-naive Timestamps to UTC (GH38760).
Note that this patch does not fix the localization of tz-aware Timestamps to UTC upon serialization. (Related issue GH12997)
Old Behavior
In [31]: index = pd.date_range(
....: start='2020-12-28 00:00:00',
....: end='2020-12-28 02:00:00',
....: freq='1H',
....: )
....:
In [32]: a = pd.Series(
....: data=range(3),
....: index=index,
....: )
....:
In [4]: a.to_json(date_format='iso')
Out[4]: '{"2020-12-28T00:00:00.000Z":0,"2020-12-28T01:00:00.000Z":1,"2020-12-28T02:00:00.000Z":2}'
In [5]: pd.read_json(a.to_json(date_format='iso'), typ="series").index == a.index
Out[5]: array([False, False, False])
New Behavior
In [33]: a.to_json(date_format='iso')
Out[33]: '{"2020-12-28T00:00:00.000":0,"2020-12-28T01:00:00.000":1,"2020-12-28T02:00:00.000":2}'
# Roundtripping now works
In [34]: pd.read_json(a.to_json(date_format='iso'), typ="series").index == a.index
Out[34]: array([ True, True, True])
DataFrameGroupBy.value_counts with non-grouping categorical columns and observed=True#
Calling DataFrameGroupBy.value_counts() with observed=True would incorrectly drop non-observed categories of non-grouping columns (GH46357).
In [6]: df = pd.DataFrame(["a", "b", "c"], dtype="category").iloc[0:2]
In [7]: df
Out[7]:
0
0 a
1 b
Old Behavior
In [8]: df.groupby(level=0, observed=True).value_counts()
Out[8]:
0 a 1
1 b 1
dtype: int64
New Behavior
In [9]: df.groupby(level=0, observed=True).value_counts()
Out[9]:
0 a 1
1 a 0
b 1
0 b 0
c 0
1 c 0
dtype: int64
Backwards incompatible API changes#
Increased minimum versions for dependencies#
Some minimum supported versions of dependencies were updated. If installed, we now require:
Package | Minimum Version | Required | Changed
---|---|---|---
numpy | 1.20.3 | X | X
mypy (dev) | 0.971 | X |
beautifulsoup4 | 4.9.3 | X |
blosc | 1.21.0 | X |
bottleneck | 1.3.2 | X |
fsspec | 2021.07.0 | X |
hypothesis | 6.13.0 | X |
gcsfs | 2021.07.0 | X |
jinja2 | 3.0.0 | X |
lxml | 4.6.3 | X |
numba | 0.53.1 | X |
numexpr | 2.7.3 | X |
openpyxl | 3.0.7 | X |
pandas-gbq | 0.15.0 | X |
psycopg2 | 2.8.6 | X |
pymysql | 1.0.2 | X |
pyreadstat | 1.1.2 | X |
pyxlsb | 1.0.8 | X |
s3fs | 2021.08.0 | X |
scipy | 1.7.1 | X |
sqlalchemy | 1.4.16 | X |
tabulate | 0.8.9 | X |
xarray | 0.19.0 | X |
xlsxwriter | 1.4.3 | X |
For optional libraries the general recommendation is to use the latest version. The following table lists the lowest version per library that is currently being tested throughout the development of pandas. Optional libraries below the lowest tested version may still work, but are not considered supported.
Package | Minimum Version | Changed
---|---|---
beautifulsoup4 | 4.9.3 | X
blosc | 1.21.0 | X
bottleneck | 1.3.2 | X
brotlipy | 0.7.0 |
fastparquet | 0.4.0 |
fsspec | 2021.08.0 | X
html5lib | 1.1 |
hypothesis | 6.13.0 | X
gcsfs | 2021.08.0 | X
jinja2 | 3.0.0 | X
lxml | 4.6.3 | X
matplotlib | 3.3.2 |
numba | 0.53.1 | X
numexpr | 2.7.3 | X
odfpy | 1.4.1 |
openpyxl | 3.0.7 | X
pandas-gbq | 0.15.0 | X
psycopg2 | 2.8.6 | X
pyarrow | 1.0.1 |
pymysql | 1.0.2 | X
pyreadstat | 1.1.2 | X
pytables | 3.6.1 |
python-snappy | 0.6.0 |
pyxlsb | 1.0.8 | X
s3fs | 2021.08.0 | X
scipy | 1.7.1 | X
sqlalchemy | 1.4.16 | X
tabulate | 0.8.9 | X
tzdata | 2022a |
xarray | 0.19.0 | X
xlrd | 2.0.1 |
xlsxwriter | 1.4.3 | X
xlwt | 1.3.0 |
zstandard | 0.15.2 |
See Dependencies and Optional dependencies for more.
Other API changes#
- BigQuery I/O methods read_gbq() and DataFrame.to_gbq() default to auth_local_webserver = True. Google has deprecated the auth_local_webserver = False “out of band” (copy-paste) flow. The auth_local_webserver = False option is planned to stop working in October 2022 (GH46312)
- read_json() now raises FileNotFoundError (previously ValueError) when input is a string ending in .json, .json.gz, .json.bz2, etc. but no such file exists (GH29102)
- Operations with Timestamp or Timedelta that would previously raise OverflowError instead raise OutOfBoundsDatetime or OutOfBoundsTimedelta where appropriate (GH47268)
- When read_sas() previously returned None, it now returns an empty DataFrame (GH47410)
- DataFrame constructor raises if index or columns arguments are sets (GH47215)
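For instance, the new read_json() error type can be observed with a path that does not exist (the filename here is an arbitrary placeholder):

```python
import pandas as pd

# A string ending in .json that names no existing file now raises
# FileNotFoundError instead of ValueError.
try:
    pd.read_json("definitely_missing_file.json")
    raised = None
except FileNotFoundError as exc:
    raised = type(exc).__name__
```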
Deprecations#
Warning
In the next major version release, 2.0, several larger API changes are being considered without a formal deprecation, such as making the standard library zoneinfo the default timezone implementation instead of pytz, having the Index support all data types instead of having multiple subclasses (CategoricalIndex, Int64Index, etc.), and more. The changes under consideration are logged in this GitHub issue, and any feedback or concerns are welcome.
Label-based integer slicing on a Series with an Int64Index or RangeIndex#
In a future version, integer slicing on a Series with an Int64Index or RangeIndex will be treated as label-based, not positional. This will make the behavior consistent with other Series.__getitem__() and Series.__setitem__() behaviors (GH45162).
For example:
In [35]: ser = pd.Series([1, 2, 3, 4, 5], index=[2, 3, 5, 7, 11])
In the old behavior, ser[2:4] treats the slice as positional:
Old behavior:
In [3]: ser[2:4]
Out[3]:
5 3
7 4
dtype: int64
In a future version, this will be treated as label-based:
Future behavior:
In [4]: ser.loc[2:4]
Out[4]:
2 1
3 2
dtype: int64
To retain the old behavior, use series.iloc[i:j]. To get the future behavior, use series.loc[i:j].
Slicing on a DataFrame will not be affected.
ExcelWriter attributes#
All attributes of ExcelWriter were previously documented as not public. However some third party Excel engines documented accessing ExcelWriter.book or ExcelWriter.sheets, and users were utilizing these and possibly other attributes. Previously these attributes were not safe to use; e.g. modifications to ExcelWriter.book would not update ExcelWriter.sheets and conversely. In order to support this, pandas has made some attributes public and improved their implementations so that they may now be safely used (GH45572).
The following attributes are now public and considered safe to access.
book
check_extension
close
date_format
datetime_format
engine
if_sheet_exists
sheets
supported_extensions
The following attributes have been deprecated. They now raise a FutureWarning when accessed and will be removed in a future version. Users should be aware that their usage is considered unsafe, and can lead to unexpected results.
cur_sheet
handles
path
save
write_cells
See the documentation of ExcelWriter
for further details.
Using group_keys with transformers in GroupBy.apply()#
In previous versions of pandas, if it was inferred that the function passed to GroupBy.apply() was a transformer (i.e. the resulting index was equal to the input index), the group_keys argument of DataFrame.groupby() and Series.groupby() was ignored and the group keys would never be added to the index of the result. In the future, the group keys will be added to the index when the user specifies group_keys=True.
As group_keys=True is the default value of DataFrame.groupby() and Series.groupby(), not specifying group_keys with a transformer will raise a FutureWarning. This can be silenced and the previous behavior retained by specifying group_keys=False.
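A minimal sketch of silencing the warning for a transformer by passing group_keys=False explicitly (the DataFrame here is an arbitrary example):

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1, 2, 3]})

# group_keys=False keeps the original index (the previous behavior for
# transformers) and avoids the FutureWarning about the changing default.
out = df.groupby("g", group_keys=False)[["v"]].apply(lambda x: x)
```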
Inplace operation when setting values with loc and iloc#
Most of the time setting values with DataFrame.iloc() attempts to set values inplace, only falling back to inserting a new array if necessary. There are some cases where this rule is not followed, for example when setting an entire column from an array with a different dtype:
In [36]: df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2'])
In [37]: original_prices = df['price']
In [38]: new_prices = np.array([98, 99])
Old behavior:
In [3]: df.iloc[:, 0] = new_prices
In [4]: df.iloc[:, 0]
Out[4]:
book1 98
book2 99
Name: price, dtype: int64
In [5]: original_prices
Out[5]:
book1 11.1
book2 12.2
Name: price, dtype: float64
This behavior is deprecated. In a future version, setting an entire column with iloc will attempt to operate inplace.
Future behavior:
In [3]: df.iloc[:, 0] = new_prices
In [4]: df.iloc[:, 0]
Out[4]:
book1 98.0
book2 99.0
Name: price, dtype: float64
In [5]: original_prices
Out[5]:
book1 98.0
book2 99.0
Name: price, dtype: float64
To get the old behavior, use DataFrame.__setitem__()
directly:
In [3]: df[df.columns[0]] = new_prices
In [4]: df.iloc[:, 0]
Out[4]:
book1 98
book2 99
Name: price, dtype: int64
In [5]: original_prices
Out[5]:
book1 11.1
book2 12.2
Name: price, dtype: float64
To get the old behavior when df.columns is not unique and you want to change a single column by index, you can use DataFrame.isetitem(), which has been added in pandas 1.5:
In [3]: df_with_duplicated_cols = pd.concat([df, df], axis='columns')
In [3]: df_with_duplicated_cols.isetitem(0, new_prices)
In [4]: df_with_duplicated_cols.iloc[:, 0]
Out[4]:
book1 98
book2 99
Name: price, dtype: int64
In [5]: original_prices
Out[5]:
book1 11.1
book2 12.2
Name: 0, dtype: float64
numeric_only default value#
Across the DataFrame, DataFrameGroupBy, and Resampler operations such as min, sum, and idxmax, the default value of the numeric_only argument, if it exists at all, was inconsistent. Furthermore, operations with the default value None can lead to surprising results (GH46560).
In [1]: df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
In [2]: # Reading the next line without knowing the contents of df, one would
# expect the result to contain the products for both columns a and b.
df[["a", "b"]].prod()
Out[2]:
a 2
dtype: int64
To avoid this behavior, specifying the value numeric_only=None has been deprecated and will be removed in a future version of pandas. In the future, all operations with a numeric_only argument will default to False. Users should either call the operation only with columns that can be operated on, or specify numeric_only=True to operate only on Boolean, integer, and float columns.
In order to support the transition to the new behavior, the following methods have gained the numeric_only argument.
- DataFrame.rolling() operations
- DataFrame.expanding() operations
- DataFrame.ewm() operations
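Using the same frame as the example above, opting in explicitly with numeric_only=True restricts the reduction to Boolean, integer, and float columns, which makes the silently-dropped column visible in the call site:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})

# Explicit opt-in: only numeric columns participate, and the intent is
# clear to readers of the code.
out = df.prod(numeric_only=True)
```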
Other Deprecations#
Deprecated the keyword
line_terminator
inDataFrame.to_csv()
andSeries.to_csv()
, uselineterminator
instead; this is for consistency withread_csv()
and the standard library ‘csv’ module (GH9568)Deprecated behavior of
SparseArray.astype()
,Series.astype()
, andDataFrame.astype()
withSparseDtype
when passing a non-sparsedtype
. In a future version, this will cast to that non-sparse dtype instead of wrapping it in aSparseDtype
(GH34457)Deprecated behavior of
DatetimeIndex.intersection()
andDatetimeIndex.symmetric_difference()
(union
behavior was already deprecated in version 1.3.0) with mixed time zones; in a future version both will be cast to UTC instead of object dtype (GH39328, GH45357)Deprecated
DataFrame.iteritems()
,Series.iteritems()
,HDFStore.iteritems()
in favor ofDataFrame.items()
,Series.items()
,HDFStore.items()
(GH45321)Deprecated
Series.is_monotonic()
andIndex.is_monotonic()
in favor ofSeries.is_monotonic_increasing()
andIndex.is_monotonic_increasing()
(GH45422, GH21335)Deprecated behavior of
DatetimeIndex.astype()
,TimedeltaIndex.astype()
,PeriodIndex.astype()
when converting to an integer dtype other thanint64
. In a future version, these will convert to exactly the specified dtype (instead of alwaysint64
) and will raise if the conversion overflows (GH45034)Deprecated the
__array_wrap__
method of DataFrame and Series, rely on standard numpy ufuncs instead (GH45451)Deprecated treating float-dtype data as wall-times when passed with a timezone to
Series
orDatetimeIndex
(GH45573)Deprecated the behavior of
Series.fillna()
andDataFrame.fillna()
withtimedelta64[ns]
dtype and incompatible fill value; in a future version this will cast to a common dtype (usually object) instead of raising, matching the behavior of other dtypes (GH45746)Deprecated the
warn
parameter ininfer_freq()
(GH45947)Deprecated allowing non-keyword arguments in
ExtensionArray.argsort()
(GH46134)Deprecated treating all-bool
object
-dtype columns as bool-like inDataFrame.any()
andDataFrame.all()
withbool_only=True
, explicitly cast to bool instead (GH46188)Deprecated behavior of method
DataFrame.quantile()
, attributenumeric_only
will default False. Including datetime/timedelta columns in the result (GH7308).Deprecated
Timedelta.freq
andTimedelta.is_populated
(GH46430)Deprecated
Timedelta.delta
(GH46476)Deprecated passing arguments as positional in
DataFrame.any()
andSeries.any()
(GH44802)Deprecated passing positional arguments to
DataFrame.pivot()
andpivot()
exceptdata
(GH30228)Deprecated the methods
DataFrame.mad()
,Series.mad()
, and the corresponding groupby methods (GH11787)Deprecated positional arguments to
Index.join()
except forother
, use keyword-only arguments instead of positional arguments (GH46518)Deprecated positional arguments to
StringMethods.rsplit()
andStringMethods.split()
except forpat
, use keyword-only arguments instead of positional arguments (GH47423)Deprecated indexing on a timezone-naive
DatetimeIndex
using a string representing a timezone-aware datetime (GH46903, GH36148)
- Deprecated allowing unit="M" or unit="Y" in the Timestamp constructor with a non-round float value (GH47267)
- Deprecated the display.column_space global configuration option (GH7576)
- Deprecated the argument na_sentinel in factorize(), Index.factorize(), and ExtensionArray.factorize(); pass use_na_sentinel=True instead to use the sentinel -1 for NaN values, and use_na_sentinel=False instead of na_sentinel=None to encode NaN values (GH46910)
- Deprecated DataFrameGroupBy.transform() not aligning the result when the UDF returned a DataFrame (GH45648)
- Clarified warning from to_datetime() when delimited dates can't be parsed in accordance with the specified dayfirst argument (GH46210)
- Emit warning from to_datetime() when delimited dates can't be parsed in accordance with the specified dayfirst argument, even for dates where the leading zero is omitted (e.g. 31/1/2001) (GH47880)
- Deprecated Series and Resampler reducers (e.g. min, max, sum, mean) raising a NotImplementedError when the dtype is non-numeric and numeric_only=True is provided; this will raise a TypeError in a future version (GH47500)
- Deprecated Series.rank() returning an empty result when the dtype is non-numeric and numeric_only=True is provided; this will raise a TypeError in a future version (GH47500)
- Deprecated the argument errors for Series.mask(), Series.where(), DataFrame.mask(), and DataFrame.where(), as errors had no effect on these methods (GH47728)
- Deprecated arguments *args and **kwargs in Rolling, Expanding, and ExponentialMovingWindow ops (GH47836)
- Deprecated the inplace keyword in Categorical.set_ordered(), Categorical.as_ordered(), and Categorical.as_unordered() (GH37643)
- Deprecated setting a categorical's categories with cat.categories = ['a', 'b', 'c']; use Categorical.rename_categories() instead (GH37643)
- Deprecated the unused arguments encoding and verbose in Series.to_excel() and DataFrame.to_excel() (GH47912)
- Deprecated the inplace keyword in DataFrame.set_axis() and Series.set_axis(); use obj = obj.set_axis(..., copy=False) instead (GH48130)
- Deprecated producing a single element when iterating over a DataFrameGroupBy or a SeriesGroupBy that has been grouped by a list of length 1; a tuple of length one will be returned instead (GH42795)
- Fixed up the warning message for the deprecation of MultiIndex.lexsort_depth() as a public method, as the message previously referred to MultiIndex.is_lexsorted() instead (GH38701)
- Deprecated the sort_columns argument in DataFrame.plot() and Series.plot() (GH47563)
- Deprecated positional arguments for all but the first argument of DataFrame.to_stata() and read_stata(); use keyword arguments instead (GH48128)
- Deprecated the mangle_dupe_cols argument in read_csv(), read_fwf(), read_table() and read_excel(). The argument was never implemented, and a new argument where the renaming pattern can be specified will be added instead (GH47718)
- Deprecated allowing dtype='datetime64' or dtype=np.datetime64 in Series.astype(); use "datetime64[ns]" instead (GH47844)
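The na_sentinel deprecation above amounts to a keyword swap. A minimal sketch of the migration, assuming pandas 1.5 or later (where use_na_sentinel was introduced):

```python
import pandas as pd

values = ["a", None, "b", "a"]

# Old (deprecated): pd.factorize(values, na_sentinel=-1)
# New: use_na_sentinel=True marks missing values with the sentinel -1
codes, uniques = pd.factorize(values, use_na_sentinel=True)
# codes: [0, -1, 1, 0]; uniques: ["a", "b"]

# use_na_sentinel=False (the old na_sentinel=None) encodes missing
# values as a regular category instead of using a sentinel
codes_na, uniques_na = pd.factorize(values, use_na_sentinel=False)
# codes_na: [0, 1, 2, 0]; the NA value appears among the uniques
```

With the sentinel enabled, missing values never appear in `uniques`, which keeps the codes array directly usable as positional indices into it.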
Performance improvements#
- Performance improvement in DataFrame.corrwith() for column-wise (axis=0) Pearson and Spearman correlation when other is a Series (GH46174)
- Performance improvement in GroupBy.transform() for some user-defined DataFrame -> Series functions (GH45387)
- Performance improvement in DataFrame.duplicated() when subset consists of only one column (GH45236)
- Performance improvement in GroupBy.diff() (GH16706)
- Performance improvement in GroupBy.transform() when broadcasting values for user-defined functions (GH45708)
- Performance improvement in GroupBy.transform() for user-defined functions when only a single group exists (GH44977)
- Performance improvement in GroupBy.apply() when grouping on a non-unique unsorted index (GH46527)
- Performance improvement in DataFrame.loc() and Series.loc() for tuple-based indexing of a MultiIndex (GH45681, GH46040, GH46330)
- Performance improvement in GroupBy.var() with ddof other than one (GH48152)
- Performance improvement in DataFrame.to_records() when the index is a MultiIndex (GH47263)
- Performance improvement in MultiIndex.values when the MultiIndex contains levels of type DatetimeIndex, TimedeltaIndex or ExtensionDtypes (GH46288)
- Performance improvement in merge() when left and/or right are empty (GH45838)
- Performance improvement in DataFrame.join() when left and/or right are empty (GH46015)
- Performance improvement in DataFrame.reindex() and Series.reindex() when target is a MultiIndex (GH46235)
- Performance improvement when setting values in a pyarrow backed string array (GH46400)
- Performance improvement in factorize() (GH46109)
- Performance improvement in DataFrame and Series constructors for extension dtype scalars (GH45854)
- Performance improvement in read_excel() when the nrows argument is provided (GH32727)
- Performance improvement in Styler.to_excel() when applying repeated CSS formats (GH47371)
- Performance improvement in MultiIndex.is_monotonic_increasing() (GH47458)
- Performance improvement in BusinessHour str and repr (GH44764)
- Performance improvement in datetime arrays string formatting when one of the default strftime formats "%Y-%m-%d %H:%M:%S" or "%Y-%m-%d %H:%M:%S.%f" is used (GH44764)
- Performance improvement in Series.to_sql() and DataFrame.to_sql() (SQLiteTable) when processing time arrays (GH44764)
- Performance improvement to read_sas() (GH47404)
- Performance improvement in argmax and argmin for arrays.SparseArray (GH34197)
Bug fixes#
Categorical#
- Bug in Categorical.view() not accepting integer dtypes (GH25464)
- Bug in CategoricalIndex.union() when the index's categories are integer-dtype and the index contains NaN values incorrectly raising instead of casting to float64 (GH45362)
- Bug in concat() when concatenating two (or more) unordered CategoricalIndex variables, whose categories are permutations, yielding incorrect index values (GH24845)
Datetimelike#
- Bug in DataFrame.quantile() with datetime-like dtypes and no rows incorrectly returning float64 dtype instead of retaining datetime-like dtype (GH41544)
- Bug in to_datetime() with sequences of np.str_ objects incorrectly raising (GH32264)
- Bug in Timestamp construction when passing datetime components as positional arguments and tzinfo as a keyword argument incorrectly raising (GH31929)
- Bug in Index.astype() when casting from object dtype to timedelta64[ns] dtype incorrectly casting np.datetime64("NaT") values to np.timedelta64("NaT") instead of raising (GH45722)
- Bug in SeriesGroupBy.value_counts() index when passing categorical column (GH44324)
- Bug in DatetimeIndex.tz_localize() localizing to UTC failing to make a copy of the underlying data (GH46460)
- Bug in DatetimeIndex.resolution() incorrectly returning "day" instead of "nanosecond" for nanosecond-resolution indexes (GH46903)
- Bug in Timestamp with an integer or float value and unit="Y" or unit="M" giving slightly-wrong results (GH47266)
- Bug in DatetimeArray construction when passed another DatetimeArray and freq=None incorrectly inferring the freq from the given array (GH47296)
- Bug in to_datetime() where OutOfBoundsDatetime would be thrown even if errors=coerce if there were more than 50 rows (GH45319)
- Bug when adding a DateOffset to a Series would not add the nanoseconds field (GH47856)
Timedelta#
- Bug in astype_nansafe() where astype("timedelta64[ns]") failed when np.nan was included (GH45798)
- Bug in constructing a Timedelta with a np.timedelta64 object and a unit sometimes silently overflowing and returning incorrect results instead of raising OutOfBoundsTimedelta (GH46827)
- Bug in constructing a Timedelta from a large integer or float with unit="W" silently overflowing and returning incorrect results instead of raising OutOfBoundsTimedelta (GH47268)
Time Zones#
Numeric#
- Bug in operations with array-likes with dtype="boolean" and NA incorrectly altering the array in-place (GH45421)
- Bug in arithmetic operations with nullable types without NA values not matching the same operation with non-nullable types (GH48223)
- Bug in floordiv when dividing by IntegerDtype 0 would return 0 instead of inf (GH48223)
- Bug in division, pow and mod operations on array-likes with dtype="boolean" not being like their np.bool_ counterparts (GH46063)
- Bug in multiplying a Series with IntegerDtype or FloatingDtype by an array-like with timedelta64[ns] dtype incorrectly raising (GH45622)
- Bug in mean() where the optional dependency bottleneck causes precision loss linear in the length of the array. bottleneck has been disabled for mean(), improving the loss to log-linear, but may result in a performance decrease (GH42878)
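The precision point in the last item can be illustrated without pandas: naive left-to-right float accumulation (bottleneck's strategy) loses precision roughly linearly in the array length, while compensated or pairwise summation keeps the error near machine precision. A minimal stdlib sketch using math.fsum as a stand-in for the more accurate path, not pandas internals:

```python
import math

values = [0.1] * 1_000_000  # 0.1 is not exactly representable in binary

naive = 0.0
for v in values:            # rounding error compounds on every addition
    naive += v

compensated = math.fsum(values)  # correctly-rounded sum of the inputs

naive_err = abs(naive - 100_000.0)
comp_err = abs(compensated - 100_000.0)
# naive_err is several orders of magnitude larger than comp_err
```

The residual comp_err reflects only the representation error of 0.1 itself, which is why disabling bottleneck for mean() was an accuracy win.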
Conversion#
- Bug in DataFrame.astype() not preserving subclasses (GH40810)
- Bug in constructing a Series from a float-containing list or a floating-dtype ndarray-like (e.g. dask.Array) and an integer dtype raising instead of casting like we would with an np.ndarray (GH40110)
- Bug in Float64Index.astype() to unsigned integer dtype incorrectly casting to np.int64 dtype (GH45309)
- Bug in Series.astype() and DataFrame.astype() from floating dtype to unsigned integer dtype failing to raise in the presence of negative values (GH45151)
- Bug in array() with FloatingDtype and values containing float-castable strings incorrectly raising (GH45424)
- Bug when comparing string and datetime64ns objects causing an OverflowError exception (GH45506)
- Bug in metaclass of generic abstract dtypes causing DataFrame.apply() and Series.apply() to raise for the built-in function type (GH46684)
- Bug in DataFrame.to_records() returning inconsistent numpy types if the index was a MultiIndex (GH47263)
- Bug in DataFrame.to_dict() for orient="list" or orient="index" not returning native types (GH46751)
- Bug in DataFrame.apply() that returns a DataFrame instead of a Series when applied to an empty DataFrame and axis=1 (GH39111)
- Bug when inferring the dtype from an iterable that is not a NumPy ndarray consisting of all NumPy unsigned integer scalars did not result in an unsigned integer dtype (GH47294)
- Bug in DataFrame.eval() when pandas objects (e.g. 'Timestamp') were column names (GH44603)
Strings#
- Bug in str.startswith() and str.endswith() when using another Series as the pat parameter; now raises TypeError (GH3485)
- Bug in Series.str.zfill() when strings contain leading signs, padding '0' before the sign character rather than after, as str.zfill from the standard library does (GH20868)
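For reference, the standard-library behavior that Series.str.zfill() now matches places the padding zeros after a leading sign:

```python
# str.zfill from the standard library inserts padding zeros *after*
# a leading '+' or '-'; Series.str.zfill() now mirrors this.
padded_neg = "-1".zfill(5)    # "-0001", not "0000-1"
padded_pos = "+1".zfill(5)    # "+0001"
padded_plain = "1".zfill(5)   # "00001"
```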
Interval#
- Bug in IntervalArray.__setitem__() when setting np.nan into an integer-backed array raising ValueError instead of TypeError (GH45484)
- Bug in IntervalDtype when using datetime64[ns, tz] as a dtype string (GH46999)
Indexing#
- Bug in DataFrame.iloc() where indexing a single row on a DataFrame with a single ExtensionDtype column gave a copy instead of a view on the underlying data (GH45241)
- Bug in DataFrame.__getitem__() returning a copy when the DataFrame has duplicated columns even if a unique column is selected (GH45316, GH41062)
- Bug in Series.align() not creating a MultiIndex with the union of levels when both MultiIndexes' intersections are identical (GH45224)
- Bug in setting a NA value (None or np.nan) into a Series with int-based IntervalDtype incorrectly casting to object dtype instead of a float-based IntervalDtype (GH45568)
- Bug in indexing setting values into an ExtensionDtype column with df.iloc[:, i] = values with values having the same dtype as df.iloc[:, i] incorrectly inserting a new array instead of setting in-place (GH33457)
- Bug in Series.__setitem__() with a non-integer Index when using an integer key to set a value that cannot be set inplace where a ValueError was raised instead of casting to a common dtype (GH45070)
- Bug in DataFrame.loc() not casting None to NA when setting value as a list into DataFrame (GH47987)
- Bug in Series.__setitem__() when setting incompatible values into a PeriodDtype or IntervalDtype Series raising when indexing with a boolean mask but coercing when indexing with otherwise-equivalent indexers; these now consistently coerce, along with Series.mask() and Series.where() (GH45768)
- Bug in DataFrame.where() with multiple columns with datetime-like dtypes failing to downcast results consistent with other dtypes (GH45837)
- Bug in isin() upcasting to float64 with unsigned integer dtype and list-like argument without a dtype (GH46485)
- Bug in Series.loc.__setitem__() and Series.loc.__getitem__() not raising when using multiple keys without using a MultiIndex (GH13831)
- Bug in Index.reindex() raising AssertionError when level was specified but no MultiIndex was given; level is ignored now (GH35132)
- Bug when setting a value too large for a Series dtype failing to coerce to a common type (GH26049, GH32878)
- Bug in loc.__setitem__() treating range keys as positional instead of label-based (GH45479)
- Bug in DataFrame.__setitem__() casting extension array dtypes to object when setting with a scalar key and DataFrame as value (GH46896)
- Bug in Series.__setitem__() when setting a scalar to a nullable pandas dtype would not raise a TypeError if the scalar could not be cast (losslessly) to the nullable type (GH45404)
- Bug in Series.__setitem__() when setting boolean dtype values containing NA incorrectly raising instead of casting to boolean dtype (GH45462)
- Bug in Series.loc() raising with boolean indexer containing NA when Index did not match (GH46551)
- Bug in Series.__setitem__() where setting NA into a numeric-dtype Series would incorrectly upcast to object-dtype rather than treating the value as np.nan (GH44199)
- Bug in DataFrame.loc() when setting values to a column and the right hand side is a dictionary (GH47216)
- Bug in Series.__setitem__() with datetime64[ns] dtype, an all-False boolean mask, and an incompatible value incorrectly casting to object instead of retaining datetime64[ns] dtype (GH45967)
- Bug in Index.__getitem__() raising ValueError when the indexer is of boolean dtype with NA (GH45806)
- Bug in Series.__setitem__() losing precision when enlarging Series with scalar (GH32346)
- Bug in Series.mask() with inplace=True or setting values with a boolean mask with small integer dtypes incorrectly raising (GH45750)
- Bug in DataFrame.mask() with inplace=True and ExtensionDtype columns incorrectly raising (GH45577)
- Bug in getting a column from a DataFrame with an object-dtype row index with datetime-like values: the resulting Series now preserves the exact object-dtype Index from the parent DataFrame (GH42950)
- Bug in DataFrame.__getattribute__() raising AttributeError if columns have "string" dtype (GH46185)
- Bug in DataFrame.compare() returning all NaN column when comparing extension array dtype and numpy dtype (GH44014)
- Bug in DataFrame.where() setting wrong values with "boolean" mask for numpy dtype (GH44014)
- Bug in indexing on a DatetimeIndex with a np.str_ key incorrectly raising (GH45580)
- Bug in CategoricalIndex.get_indexer() when the index contains NaN values, resulting in elements that are in target but not present in the index being mapped to the index of the NaN element, instead of -1 (GH45361)
- Bug in setting large integer values into Series with float32 or float16 dtype incorrectly altering these values instead of coercing to float64 dtype (GH45844)
- Bug in Series.asof() and DataFrame.asof() incorrectly casting bool-dtype results to float64 dtype (GH16063)
- Bug in NDFrame.xs(), DataFrame.iterrows(), DataFrame.loc() and DataFrame.iloc() not always propagating metadata (GH28283)
- Bug in DataFrame.sum() where min_count changes dtype if input contains NaNs (GH46947)
- Bug in IntervalTree that led to an infinite recursion (GH46658)
- Bug in PeriodIndex raising AttributeError when indexing on NA, rather than putting NaT in its place (GH46673)
- Bug in DataFrame.at() would allow the modification of multiple columns (GH48296)
Missing#
- Bug in Series.fillna() and DataFrame.fillna() with the downcast keyword not being respected in some cases where there are no NA values present (GH45423)
- Bug in Series.fillna() and DataFrame.fillna() with IntervalDtype and incompatible value raising instead of casting to a common (usually object) dtype (GH45796)
- Bug in Series.map() not respecting the na_action argument if the mapper is a dict or Series (GH47527)
- Bug in DataFrame.interpolate() with object-dtype column not returning a copy with inplace=False (GH45791)
- Bug in DataFrame.dropna() allowing the incompatible arguments how and thresh to be set together (GH46575)
- Bug in DataFrame.fillna() ignoring axis when the DataFrame is a single block (GH47713)
MultiIndex#
- Bug in DataFrame.loc() returning empty result when slicing a MultiIndex with a negative step size and non-null start/stop values (GH46156)
- Bug in DataFrame.loc() raising when slicing a MultiIndex with a negative step size other than -1 (GH46156)
- Bug in DataFrame.loc() raising when slicing a MultiIndex with a negative step size and slicing a non-int labeled index level (GH46156)
- Bug in Series.to_numpy() where multiindexed Series could not be converted to numpy arrays when an na_value was supplied (GH45774)
- Bug in MultiIndex.equals not commutative when only one side has extension array dtype (GH46026)
- Bug in MultiIndex.from_tuples() not being able to construct an Index of empty tuples (GH45608)
I/O#
- Bug in DataFrame.to_stata() where no error is raised if the DataFrame contains -np.inf (GH45350)
- Bug in read_excel() resulting in an infinite loop with certain skiprows callables (GH45585)
- Bug in DataFrame.info() where a new line at the end of the output is omitted when called on an empty DataFrame (GH45494)
- Bug in read_csv() not recognizing line break for on_bad_lines="warn" for engine="c" (GH41710)
- Bug in DataFrame.to_csv() not respecting float_format for Float64 dtype (GH45991)
- Bug in read_csv() not respecting a specified converter to index columns in all cases (GH40589)
- Bug in read_csv() interpreting the second row as Index names even when index_col=False (GH46569)
- Bug in read_parquet() when engine="pyarrow" which caused partial write to disk when a column of unsupported datatype was passed (GH44914)
- Bug in DataFrame.to_excel() and ExcelWriter would raise when writing an empty DataFrame to a .ods file (GH45793)
- Bug in read_csv() ignoring non-existing header row for engine="python" (GH47400)
- Bug in read_excel() raising uncontrolled IndexError when header references non-existing rows (GH43143)
- Bug in read_html() where elements surrounding <br> were joined without a space between them (GH29528)
- Bug in read_csv() when data is longer than header leading to issues with callables in usecols expecting strings (GH46997)
- Bug in Parquet roundtrip for Interval dtype with datetime64[ns] subtype (GH45881)
- Bug in read_excel() when reading a .ods file with newlines between xml elements (GH45598)
- Bug in read_parquet() when engine="fastparquet" where the file was not closed on error (GH46555)
- to_html() now excludes the border attribute from <table> elements when the border keyword is set to False.
- Bug in read_sas() with certain types of compressed SAS7BDAT files (GH35545)
- Bug in read_excel() not forward filling MultiIndex when no names were given (GH47487)
- Bug in read_sas() returning None rather than an empty DataFrame for SAS7BDAT files with zero rows (GH18198)
- Bug in DataFrame.to_string() using wrong missing value with extension arrays in MultiIndex (GH47986)
- Bug in StataWriter where value labels were always written with default encoding (GH46750)
- Bug in StataWriterUTF8 where some valid characters were removed from variable names (GH47276)
- Bug in DataFrame.to_excel() when writing an empty dataframe with MultiIndex (GH19543)
- Bug in read_sas() with RLE-compressed SAS7BDAT files that contain 0x40 control bytes (GH31243)
- Bug in read_sas() that scrambled column names (GH31243)
- Bug in read_sas() with RLE-compressed SAS7BDAT files that contain 0x00 control bytes (GH47099)
- Bug in read_parquet() with use_nullable_dtypes=True where float64 dtype was returned instead of nullable Float64 dtype (GH45694)
- Bug in DataFrame.to_json() where PeriodDtype would not make the serialization roundtrip when read back with read_json() (GH44720)
- Bug in read_xml() when reading XML files with Chinese character tags, which would raise XMLSyntaxError (GH47902)
Period#
- Bug in subtraction of Period from PeriodArray returning wrong results (GH45999)
- Bug in Period.strftime() and PeriodIndex.strftime(), where directives %l and %u were giving wrong results (GH46252)
- Bug in inferring an incorrect freq when passing a string to Period microseconds that are a multiple of 1000 (GH46811)
- Bug in constructing a Period from a Timestamp or np.datetime64 object with non-zero nanoseconds and freq="ns" incorrectly truncating the nanoseconds (GH46811)
- Bug in adding np.timedelta64("NaT", "ns") to a Period with a timedelta-like freq incorrectly raising IncompatibleFrequency instead of returning NaT (GH47196)
- Bug in adding an array of integers to an array with PeriodDtype giving incorrect results when dtype.freq.n > 1 (GH47209)
- Bug in subtracting a Period from an array with PeriodDtype returning incorrect results instead of raising OverflowError when the operation overflows (GH47538)
Plotting#
- Bug in DataFrame.plot.barh() that prevented labeling the x-axis and xlabel updating the y-axis label (GH45144)
- Bug in DataFrame.plot.box() that prevented labeling the x-axis (GH45463)
- Bug in DataFrame.boxplot() that prevented passing in xlabel and ylabel (GH45463)
- Bug in DataFrame.boxplot() that prevented specifying vert=False (GH36918)
- Bug in DataFrame.plot.scatter() that prevented specifying norm (GH45809)
- Fix showing "None" as ylabel in Series.plot() when not setting ylabel (GH46129)
- Bug in DataFrame.plot() that led to xticks and vertical grids being improperly placed when plotting a quarterly series (GH47602)
- Bug in DataFrame.plot() that prevented setting y-axis label, limits and ticks for a secondary y-axis (GH47753)
Groupby/resample/rolling#
- Bug in DataFrame.resample() ignoring closed="right" on TimedeltaIndex (GH45414)
- Bug in DataFrameGroupBy.transform() failing when func="size" and the input DataFrame has multiple columns (GH27469)
- Bug in DataFrameGroupBy.size() and DataFrameGroupBy.transform() with func="size" producing incorrect results when axis=1 (GH45715)
- Bug in ExponentialMovingWindow.mean() with axis=1 and engine='numba' when the DataFrame has more columns than rows (GH46086)
- Bug when using engine="numba" would return the same jitted function when modifying engine_kwargs (GH46086)
- Bug in DataFrameGroupBy.transform() failing when axis=1 and func is "first" or "last" (GH45986)
- Bug in DataFrameGroupBy.cumsum() with skipna=False giving incorrect results (GH46216)
- Bug in GroupBy.sum(), GroupBy.prod() and GroupBy.cumsum() with integer dtypes losing precision (GH37493)
- Bug in GroupBy.cumsum() with timedelta64[ns] dtype failing to recognize NaT as a null value (GH46216)
- Bug in GroupBy.cumsum() with integer dtypes causing overflows when the sum was bigger than the maximum of the dtype (GH37493)
- Bug in GroupBy.cummin() and GroupBy.cummax() with nullable dtypes incorrectly altering the original data in place (GH46220)
- Bug in DataFrame.groupby() raising error when None is in the first level of MultiIndex (GH47348)
- Bug in GroupBy.cummax() with int64 dtype with leading value being the smallest possible int64 (GH46382)
- Bug in GroupBy.cumprod() where NaN influences the calculation in different columns with skipna=False (GH48064)
- Bug in GroupBy.max() with empty groups and uint64 dtype incorrectly raising RuntimeError (GH46408)
- Bug in GroupBy.apply() would fail when func was a string and args or kwargs were supplied (GH46479)
- Bug in SeriesGroupBy.apply() would incorrectly name its result when there was a unique group (GH46369)
- Bug in Rolling.sum() and Rolling.mean() would give incorrect result with window of same values (GH42064, GH46431)
- Bug in Rolling.var() and Rolling.std() would give non-zero result with window of same values (GH42064)
- Bug in Rolling.skew() and Rolling.kurt() would give NaN with window of same values (GH30993)
- Bug in Rolling.var() would segfault calculating weighted variance when window size was larger than data size (GH46760)
- Bug in Grouper.__repr__() where dropna was not included. Now it is (GH46754)
- Bug in DataFrame.rolling() giving ValueError when center=True, axis=1 and win_type is specified (GH46135)
- Bug in DataFrameGroupBy.describe() and SeriesGroupBy.describe() producing inconsistent results for empty datasets (GH41575)
- Bug in DataFrame.resample() reduction methods when used with on would attempt to aggregate the provided column (GH47079)
- Bug in DataFrame.groupby() and Series.groupby() would not respect dropna=False when the input DataFrame/Series had NaN values in a MultiIndex (GH46783)
- Bug in DataFrameGroupBy.resample() raising KeyError when getting the result from a key list which misses the resample key (GH47362)
- Bug in DataFrame.groupby() would lose index columns when the DataFrame is empty for transforms, like fillna (GH47787)
- Bug in DataFrame.groupby() and Series.groupby() with dropna=False and sort=False would put any null groups at the end instead of in the order in which they are encountered (GH46584)
Reshaping#
- Bug in concat() between a Series with integer dtype and another with CategoricalDtype with integer categories and containing NaN values casting to object dtype instead of float64 (GH45359)
- Bug in get_dummies() that selected object and categorical dtypes but not string (GH44965)
- Bug in DataFrame.align() when aligning a MultiIndex to a Series with another MultiIndex (GH46001)
- Bug in concatenation with IntegerDtype or FloatingDtype arrays where the resulting dtype did not mirror the behavior of the non-nullable dtypes (GH46379)
- Bug in concat() losing dtype of columns when join="outer" and sort=True (GH47329)
- Bug in concat() not sorting the column names when None is included (GH47331)
- Bug in concat() with identical key leading to error when indexing MultiIndex (GH46519)
- Bug in pivot_table() raising TypeError when dropna=True and the aggregation column has extension array dtype (GH47477)
- Bug in merge() raising error for how="cross" when using FIPS mode in the ssl library (GH48024)
- Bug in DataFrame.join() with a list when using suffixes to join DataFrames with duplicate column names (GH46396)
- Bug in DataFrame.pivot_table() with sort=False resulting in sorted index (GH17041)
- Bug in concat() when axis=1 and sort=False where the resulting Index was an Int64Index instead of a RangeIndex (GH46675)
- Bug in wide_to_long() raising when stubnames is missing in columns and i contains a string dtype column (GH46044)
- Bug in DataFrame.join() with categorical index resulting in unexpected reordering (GH47812)
Sparse#
- Bug in Series.where() and DataFrame.where() with SparseDtype failing to retain the array's fill_value (GH45691)
- Bug in SparseArray.unique() failing to keep the original elements' order (GH47809)
ExtensionArray#
- Bug in IntegerArray.searchsorted() and FloatingArray.searchsorted() returning inconsistent results when acting on np.nan (GH45255)
Styler#
- Bug when attempting to apply styling functions to an empty DataFrame subset (GH45313)
- Bug in CSSToExcelConverter leading to TypeError when border color provided without border style for xlsxwriter engine (GH42276)
- Bug in Styler.set_sticky() leading to white text on white background in dark mode (GH46984)
- Bug in Styler.to_latex() causing UnboundLocalError when clines="all;data" and the DataFrame has no rows (GH47203)
- Bug in Styler.to_excel() when using vertical-align: middle; with xlsxwriter engine (GH30107)
- Bug when applying styles to a DataFrame with boolean column labels (GH47838)
Metadata#
- Fixed metadata propagation in DataFrame.melt() (GH28283)
- Fixed metadata propagation in DataFrame.explode() (GH28283)
Other#
- Bug in assert_index_equal() with names=True and check_order=False not checking names (GH47328)
Contributors#
A total of 271 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.
Aadharsh Acharya +
Aadharsh-Acharya +
Aadhi Manivannan +
Adam Bowden
Aditya Agarwal +
Ahmed Ibrahim +
Alastair Porter +
Alex Povel +
Alex-Blade
Alexandra Sciocchetti +
AlonMenczer +
Andras Deak +
Andrew Hawyrluk
Andy Grigg +
Aneta Kahleová +
Anthony Givans +
Anton Shevtsov +
B. J. Potter +
BarkotBeyene +
Ben Beasley +
Ben Wozniak +
Bernhard Wagner +
Boris Rumyantsev
Brian Gollop +
CCXXXI +
Chandrasekaran Anirudh Bhardwaj +
Charles Blackmon-Luca +
Chris Moradi +
ChrisAlbertsen +
Compro Prasad +
DaPy15
Damian Barabonkov +
Daniel I +
Daniel Isaac +
Daniel Schmidt
Danil Iashchenko +
Dare Adewumi
Dennis Chukwunta +
Dennis J. Gray +
Derek Sharp +
Dhruv Samdani +
Dimitra Karadima +
Dmitry Savostyanov +
Dmytro Litvinov +
Do Young Kim +
Dries Schaumont +
Edward Huang +
Eirik +
Ekaterina +
Eli Dourado +
Ezra Brauner +
Fabian Gabel +
FactorizeD +
Fangchen Li
Francesco Romandini +
Greg Gandenberger +
Guo Ci +
Hiroaki Ogasawara
Hood Chatham +
Ian Alexander Joiner +
Irv Lustig
Ivan Ng +
JHM Darbyshire
JHM Darbyshire (MBP)
JHM Darbyshire (iMac)
JMBurley
Jack Goldsmith +
James Freeman +
James Lamb
James Moro +
Janosh Riebesell
Jarrod Millman
Jason Jia +
Jeff Reback
Jeremy Tuloup +
Johannes Mueller
John Bencina +
John Mantios +
John Zangwill
Jon Bramley +
Jonas Haag
Jordan Hicks
Joris Van den Bossche
Jose Ortiz +
JosephParampathu +
José Duarte
Julian Steger +
Kai Priester +
Kapil E. Iyer +
Karthik Velayutham +
Kashif Khan
Kazuki Igeta +
Kevin Jan Anker +
Kevin Sheppard
Khor Chean Wei
Kian Eliasi
Kian S +
Kim, KwonHyun +
Kinza-Raza +
Konjeti Maruthi +
Leonardus Chen
Linxiao Francis Cong +
Loïc Estève
LucasG0 +
Lucy Jiménez +
Luis Pinto
Luke Manley
Marc Garcia
Marco Edward Gorelli
Marco Gorelli
MarcoGorelli
Margarete Dippel +
Mariam-ke +
Martin Fleischmann
Marvin John Walter +
Marvin Walter +
Mateusz
Matilda M +
Matthew Roeschke
Matthias Bussonnier
MeeseeksMachine
Mehgarg +
Melissa Weber Mendonça +
Michael Milton +
Michael Wang
Mike McCarty +
Miloni Atal +
Mitlasóczki Bence +
Moritz Schreiber +
Morten Canth Hels +
Nick Crews +
NickFillot +
Nicolas Hug +
Nima Sarang
Noa Tamir +
Pandas Development Team
Parfait Gasana
Parthi +
Partho +
Patrick Hoefler
Peter
Peter Hawkins +
Philipp A
Philipp Schaefer +
Pierrot +
Pratik Patel +
Prithvijit
Purna Chandra Mansingh +
Radoslaw Lemiec +
RaphSku +
Reinert Huseby Karlsen +
Richard Shadrach
Richard Shadrach +
Robbie Palmer
Robert de Vries
Roger +
Roger Murray +
Ruizhe Deng +
SELEE +
Sachin Yadav +
Saiwing Yeung +
Sam Rao +
Sandro Casagrande +
Sebastiaan Vermeulen +
Shaghayegh +
Shantanu +
Shashank Shet +
Shawn Zhong +
Shuangchi He +
Simon Hawkins
Simon Knott +
Solomon Song +
Somtochi Umeh +
Stefan Krawczyk +
Stefanie Molin
Steffen Rehberg
Steven Bamford +
Steven Rotondo +
Steven Schaerer
Sylvain MARIE +
Sylvain Marié
Tarun Raghunandan Kaushik +
Taylor Packard +
Terji Petersen
Thierry Moisan
Thomas Grainger
Thomas Hunter +
Thomas Li
Tim McFarland +
Tim Swast
Tim Yang +
Tobias Pitters
Tom Aarsen +
Tom Augspurger
Torsten Wörtwein
TraverseTowner +
Tyler Reddy
Valentin Iovene
Varun Sharma +
Vasily Litvinov
Venaturum
Vinicius Akira Imaizumi +
Vladimir Fokow +
Wenjun Si
Will Lachance +
William Andrea
Wolfgang F. Riedl +
Xingrong Chen
Yago González
Yikun Jiang +
Yuanhao Geng
Yuval +
Zero
Zhengfei Wang +
abmyii
alexondor +
alm
andjhall +
anilbey +
arnaudlegout +
asv-bot +
ateki +
auderson +
bherwerth +
bicarlsen +
carbonleakage +
charles +
charlogazzo +
code-review-doctor +
dataxerik +
deponovo
dimitra-karadima +
dospix +
ehallam +
ehsan shirvanian +
ember91 +
eshirvana
fractionalhare +
gaotian98 +
gesoos
github-actions[bot]
gunghub +
hasan-yaman
iansheng +
iasoon +
jbrockmendel
joshuabello2550 +
jyuv +
kouya takahashi +
mariana-LJ +
matt +
mattB1989 +
nealxm +
partev
poloso +
realead
roib20 +
rtpsw
ryangilmour +
shourya5 +
srotondo +
stanleycai95 +
staticdev +
tehunter +
theidexisted +
tobias.pitters +
uncjackg +
vernetya
wany-oh +
wfr +
z3c0 +