What’s new in 0.24.0 (January 25, 2019)#
Warning
The 0.24.x series of releases will be the last to support Python 2. Future feature releases will support Python 3 only. See Dropping Python 2.7 for more details.
This is a major release from 0.23.4 and includes a number of API changes, new features, enhancements, and performance improvements along with a large number of bug fixes.
Highlights include:
- Optional integer NA support
- Accessing the values in a Series or Index
- pandas.array: a new top-level method for creating arrays
- Storing Interval and Period data in Series and DataFrame
- Joining with two multi-indexes
Check the API Changes and deprecations before updating.
These are the changes in pandas 0.24.0. See Release notes for a full changelog including other versions of pandas.
Enhancements#
Optional integer NA support#
pandas has gained the ability to hold integer dtypes with missing values. This long requested feature is enabled through the use of extension types.
Note
IntegerArray is currently experimental. Its API or implementation may change without warning.
We can construct a Series with the specified dtype. The dtype string Int64 is a pandas ExtensionDtype. Specifying a list or array using the traditional missing value
marker of np.nan will infer to integer dtype. The display of the Series will also use a dedicated missing value indicator in string outputs. (GH 20700, GH 20747, GH 22441, GH 21789, GH 22346)
In [1]: s = pd.Series([1, 2, np.nan], dtype='Int64')
In [2]: s
Out[2]: 
0       1
1       2
2    <NA>
Length: 3, dtype: Int64
Operations on these dtypes will propagate missing values as in other pandas operations.
# arithmetic
In [3]: s + 1
Out[3]: 
0       2
1       3
2    <NA>
Length: 3, dtype: Int64
# comparison
In [4]: s == 1
Out[4]: 
0     True
1    False
2     <NA>
Length: 3, dtype: boolean
# indexing
In [5]: s.iloc[1:3]
Out[5]: 
1       2
2    <NA>
Length: 2, dtype: Int64
# operate with other dtypes
In [6]: s + s.iloc[1:3].astype('Int8')
Out[6]: 
0    <NA>
1       4
2    <NA>
Length: 3, dtype: Int64
# coerce when needed
In [7]: s + 0.01
Out[7]: 
0    1.01
1    2.01
2    <NA>
Length: 3, dtype: Float64
These dtypes can operate as part of a DataFrame.
In [8]: df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})
In [9]: df
Out[9]: 
      A  B  C
0     1  1  a
1     2  1  a
2  <NA>  3  b
[3 rows x 3 columns]
In [10]: df.dtypes
Out[10]: 
A     Int64
B     int64
C    object
Length: 3, dtype: object
These dtypes can be merged, reshaped, and casted.
In [11]: pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
Out[11]: 
A     Int64
B     int64
C    object
Length: 3, dtype: object
In [12]: df['A'].astype(float)
Out[12]: 
0    1.0
1    2.0
2    NaN
Name: A, Length: 3, dtype: float64
Reduction and groupby operations such as sum work.
In [13]: df.sum()
Out[13]: 
A      3
B      5
C    aab
Length: 3, dtype: object
In [14]: df.groupby('B').A.sum()
Out[14]: 
B
1    3
3    0
Name: A, Length: 2, dtype: Int64
Warning
The Integer NA support currently uses the capitalized dtype version, e.g. Int8 as compared to the traditional int8. This may be changed at a future date.
See Nullable integer data type for more.
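Existing float data that uses np.nan as the missing value marker can be converted to the nullable dtype with astype. A minimal sketch (the values here are illustrative):

```python
import numpy as np
import pandas as pd

# A float Series where np.nan marks the missing entry
raw = pd.Series([1.0, 2.0, np.nan])

# Cast to the nullable integer dtype; the NaN becomes a proper missing value
s = raw.astype('Int64')

# Reductions skip the missing value, as with other pandas dtypes
total = s.sum()              # 3
n_missing = s.isna().sum()   # 1
```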
Accessing the values in a Series or Index#
Series.array and Index.array have been added for extracting the array backing a
Series or Index. (GH 19954, GH 23623)
In [15]: idx = pd.period_range('2000', periods=4)
In [16]: idx.array
Out[16]: 
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]
In [17]: pd.Series(idx).array
Out[17]: 
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]
Historically, this would have been done with series.values, but with
.values it was unclear whether the returned value would be the actual array,
some transformation of it, or one of pandas’ custom arrays (like
Categorical). For example, with PeriodIndex, .values generates
a new ndarray of period objects each time.
In [18]: idx.values
Out[18]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
In [19]: id(idx.values)
Out[19]: 140262501106384
In [20]: id(idx.values)
Out[20]: 140262443469776
If you need an actual NumPy array, use Series.to_numpy() or Index.to_numpy().
In [21]: idx.to_numpy()
Out[21]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
In [22]: pd.Series(idx).to_numpy()
Out[22]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
For Series and Indexes backed by normal NumPy arrays, Series.array will return a
new arrays.PandasArray, which is a thin (no-copy) wrapper around a
numpy.ndarray. PandasArray isn’t especially useful on its own,
but it does provide the same interface as any extension array defined in pandas or by
a third-party library.
In [23]: ser = pd.Series([1, 2, 3])
In [24]: ser.array
Out[24]: 
<NumpyExtensionArray>
[1, 2, 3]
Length: 3, dtype: int64
In [25]: ser.to_numpy()
Out[25]: array([1, 2, 3])
We haven’t removed or deprecated Series.values or DataFrame.values, but we
highly recommend using .array or .to_numpy() instead.
See Dtypes and Attributes and Underlying Data for more.
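The difference between .array and .to_numpy() is easiest to see with extension-backed data such as timezone-aware datetimes: .array returns the backing extension array itself, while .to_numpy() materializes an actual NumPy array (object-dtype Timestamps in this case). A small sketch:

```python
import numpy as np
import pandas as pd

ser = pd.Series(pd.date_range('2000', periods=2, tz='UTC'))

arr = ser.array          # the extension array backing the Series (not an ndarray)
np_arr = ser.to_numpy()  # a real NumPy array; tz-aware data becomes object dtype
```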
pandas.array: a new top-level method for creating arrays#
A new top-level method array() has been added for creating 1-dimensional arrays (GH 22860).
This can be used to create any extension array, including
extension arrays registered by 3rd party libraries.
See the dtypes docs for more on extension arrays.
In [26]: pd.array([1, 2, np.nan], dtype='Int64')
Out[26]: 
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
In [27]: pd.array(['a', 'b', 'c'], dtype='category')
Out[27]: 
['a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
Passing data for which there isn’t a dedicated extension type (e.g. float, integer, etc.)
will return a new arrays.PandasArray, which is just a thin (no-copy)
wrapper around a numpy.ndarray that satisfies the pandas extension array interface.
In [28]: pd.array([1, 2, 3])
Out[28]: 
<IntegerArray>
[1, 2, 3]
Length: 3, dtype: Int64
On its own, a PandasArray isn’t a very useful object.
But if you need to write low-level code that works generically for any
ExtensionArray, PandasArray
satisfies that need.
Notice that by default, if no dtype is specified, the dtype of the returned
array is inferred from the data. In particular, note that the first example of
[1, 2, np.nan] would have returned a floating-point array, since NaN
is a float.
In [29]: pd.array([1, 2, np.nan])
Out[29]: 
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
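Because inference decides the dtype when none is given, passing dtype explicitly is the reliable way to get a specific extension array. A sketch:

```python
import pandas as pd

# Explicit dtype: the result does not depend on type inference
arr = pd.array([1, 2, None], dtype='Int64')

# The None becomes a proper missing value in an IntegerArray
```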
Storing Interval and Period data in Series and DataFrame#
Interval and Period data may now be stored in a Series or DataFrame, in addition to an
IntervalIndex and PeriodIndex like previously (GH 19453, GH 22862).
In [30]: ser = pd.Series(pd.interval_range(0, 5))
In [31]: ser
Out[31]: 
0    (0, 1]
1    (1, 2]
2    (2, 3]
3    (3, 4]
4    (4, 5]
Length: 5, dtype: interval
In [32]: ser.dtype
Out[32]: interval[int64, right]
For periods:
In [33]: pser = pd.Series(pd.period_range("2000", freq="D", periods=5))
In [34]: pser
Out[34]: 
0    2000-01-01
1    2000-01-02
2    2000-01-03
3    2000-01-04
4    2000-01-05
Length: 5, dtype: period[D]
In [35]: pser.dtype
Out[35]: period[D]
Previously, these would be cast to a NumPy array with object dtype. In general,
this should result in better performance when storing an array of intervals or periods
in a Series or column of a DataFrame.
Use Series.array to extract the underlying array of intervals or periods
from the Series:
In [36]: ser.array
Out[36]: 
<IntervalArray>
[(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]]
Length: 5, dtype: interval[int64, right]
In [37]: pser.array
Out[37]: 
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04', '2000-01-05']
Length: 5, dtype: period[D]
These return an instance of arrays.IntervalArray or arrays.PeriodArray,
the new extension arrays that back interval and period data.
Warning
For backwards compatibility, Series.values continues to return
a NumPy array of objects for Interval and Period data. We recommend
using Series.array when you need the array of data stored in the
Series, and Series.to_numpy() when you know you need a NumPy array.
See Dtypes and Attributes and Underlying Data for more.
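With period data stored directly in a Series, period-aware operations are available through the .dt accessor rather than via an object-dtype detour. A minimal sketch:

```python
import pandas as pd

pser = pd.Series(pd.period_range('2000', freq='D', periods=3))

# .dt works on the period-dtype column directly
days = pser.dt.day    # 1, 2, 3

# Adding an integer steps each period by its frequency (here, days)
shifted = pser + 1
```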
Joining with two multi-indexes#
DataFrame.merge() and DataFrame.join() can now be used to join multi-indexed DataFrame instances on the overlapping index levels (GH 6360).
See the Merge, join, and concatenate documentation section.
In [38]: index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
   ....:                                        ('K1', 'X2')],
   ....:                                        names=['key', 'X'])
   ....: 
In [39]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
   ....:                      'B': ['B0', 'B1', 'B2']}, index=index_left)
   ....: 
In [40]: index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
   ....:                                         ('K2', 'Y2'), ('K2', 'Y3')],
   ....:                                         names=['key', 'Y'])
   ....: 
In [41]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)
   ....: 
In [42]: left.join(right)
Out[42]: 
            A   B   C   D
key X  Y                 
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1
[3 rows x 4 columns]
For earlier versions, this can be done using the following:
In [43]: pd.merge(left.reset_index(), right.reset_index(),
   ....:          on=['key'], how='inner').set_index(['key', 'X', 'Y'])
   ....: 
Out[43]: 
            A   B   C   D
key X  Y                 
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1
[3 rows x 4 columns]
Function read_html enhancements#
read_html() previously ignored colspan and rowspan attributes.
Now it understands them, treating them as sequences of cells with the same
value. (GH 17054)
In [44]: result = pd.read_html(StringIO("""
   ....:   <table>
   ....:     <thead>
   ....:       <tr>
   ....:         <th>A</th><th>B</th><th>C</th>
   ....:       </tr>
   ....:     </thead>
   ....:     <tbody>
   ....:       <tr>
   ....:         <td colspan="2">1</td><td>2</td>
   ....:       </tr>
   ....:     </tbody>
   ....:   </table>"""))
   ....: 
Previous behavior:
In [13]: result
Out [13]:
[   A  B   C
 0  1  2 NaN]
New behavior:
In [45]: result
Out[45]: 
[   A  B  C
 0  1  1  2
 
 [1 rows x 3 columns]]
New Styler.pipe() method#
The Styler class has gained a
pipe() method.  This provides a
convenient way to apply users’ predefined styling functions, and can help reduce
“boilerplate” when using DataFrame styling functionality repeatedly within a notebook. (GH 23229)
In [46]: df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})
In [47]: def format_and_align(styler):
   ....:     return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
   ....:                   .set_properties(**{'text-align': 'right'}))
   ....: 
In [48]: df.style.pipe(format_and_align).set_caption('Summary of results.')
Out[48]: <pandas.io.formats.style.Styler at 0x7f91684b06a0>
Similar methods already exist for other classes in pandas, including DataFrame.pipe(),
GroupBy.pipe(), and Resampler.pipe().
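Styler.pipe() follows the same pattern as the existing DataFrame.pipe(). Since rendering a Styler requires Jinja2, here is the pattern sketched with DataFrame.pipe() instead; a predefined styling function like format_and_align above plays the same role for a Styler:

```python
import pandas as pd

def add_total(df):
    """A reusable, predefined transformation, analogous to a styling function."""
    return df.assign(total=df.sum(axis=1))

df = pd.DataFrame({'N': [1, 2], 'X': [3, 4]})

# .pipe applies the function and returns its result, enabling method chaining
result = df.pipe(add_total)
```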
Renaming names in a MultiIndex#
DataFrame.rename_axis() now supports index and columns arguments
and Series.rename_axis() supports index argument (GH 19978).
This change allows a dictionary to be passed so that some of the names
of a MultiIndex can be changed.
Example:
In [49]: mi = pd.MultiIndex.from_product([list('AB'), list('CD'), list('EF')],
   ....:                                 names=['AB', 'CD', 'EF'])
   ....: 
In [50]: df = pd.DataFrame(list(range(len(mi))), index=mi, columns=['N'])
In [51]: df
Out[51]: 
          N
AB CD EF   
A  C  E   0
      F   1
   D  E   2
      F   3
B  C  E   4
      F   5
   D  E   6
      F   7
[8 rows x 1 columns]
In [52]: df.rename_axis(index={'CD': 'New'})
Out[52]: 
           N
AB New EF   
A  C   E   0
       F   1
   D   E   2
       F   3
B  C   E   4
       F   5
   D   E   6
       F   7
[8 rows x 1 columns]
See the Advanced documentation on renaming for more details.
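The same dictionary form works for Series.rename_axis(index=...). A sketch:

```python
import pandas as pd

mi = pd.MultiIndex.from_product([list('AB'), list('CD')],
                                names=['outer', 'inner'])
s = pd.Series(range(4), index=mi)

# Rename only one of the index level names; the other is left untouched
renamed = s.rename_axis(index={'inner': 'new'})
```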
Other enhancements#
- merge() now directly allows merge between objects of type DataFrame and named Series, without the need to convert the Series object into a DataFrame beforehand (GH 21220)
- ExcelWriter now accepts mode as a keyword argument, enabling append to existing workbooks when using the openpyxl engine (GH 3441)
- FrozenList has gained the .union() and .difference() methods. This functionality greatly simplifies groupby operations that rely on explicitly excluding certain columns. See Splitting an object into groups for more information (GH 15475, GH 15506).
- DataFrame.to_parquet() now accepts index as an argument, allowing the user to override the engine’s default behavior to include or omit the dataframe’s indexes from the resulting Parquet file. (GH 20768)
- read_feather() now accepts columns as an argument, allowing the user to specify which columns should be read. (GH 24025)
- DataFrame.corr() and Series.corr() now accept a callable for generic calculation methods of correlation, e.g. histogram intersection (GH 22684)
- DataFrame.to_string() now accepts decimal as an argument, allowing the user to specify which decimal separator should be used in the output. (GH 23614)
- DataFrame.to_html() now accepts render_links as an argument, allowing the user to generate HTML with links to any URLs that appear in the DataFrame. See the section on writing HTML in the IO docs for example usage. (GH 2679)
- pandas.read_csv() now supports pandas extension types as an argument to dtype, allowing the user to use pandas extension types when reading CSVs. (GH 23228)
- The shift() method now accepts fill_value as an argument, allowing the user to specify a value which will be used instead of NA/NaT in the empty periods. (GH 15486)
- to_datetime() now supports the %Z and %z directives when passed into format (GH 13486)
- Series.mode() and DataFrame.mode() now support the dropna parameter which can be used to specify whether NaN/NaT values should be considered (GH 17534)
- DataFrame.to_csv() and Series.to_csv() now support the compression keyword when a file handle is passed. (GH 21227)
- Index.droplevel() is now implemented also for flat indexes, for compatibility with MultiIndex (GH 21115)
- Series.droplevel() and DataFrame.droplevel() are now implemented (GH 20342)
- Added support for reading from/writing to Google Cloud Storage via the gcsfs library (GH 19454, GH 23094)
- DataFrame.to_gbq() and read_gbq() signature and documentation updated to reflect changes from the pandas-gbq library version 0.8.0. Adds a credentials argument, which enables the use of any kind of google-auth credentials. (GH 21627, GH 22557, GH 23662)
- New method HDFStore.walk() will recursively walk the group hierarchy of an HDF5 file (GH 10932)
- read_html() copies cell data across colspan and rowspan, and it treats all-th table rows as headers if the header kwarg is not given and there is no thead (GH 17054)
- Series.nlargest(), Series.nsmallest(), DataFrame.nlargest(), and DataFrame.nsmallest() now accept the value "all" for the keep argument. This keeps all ties for the nth largest/smallest value (GH 16818)
- IntervalIndex has gained the set_closed() method to change the existing closed value (GH 21670)
- DataFrame.to_csv(), Series.to_csv(), DataFrame.to_json(), and Series.to_json() now support compression='infer' to infer compression based on filename extension (GH 15008). The default compression for the to_csv, to_json, and to_pickle methods has been updated to 'infer' (GH 22004).
- DataFrame.to_sql() now supports writing TIMESTAMP WITH TIME ZONE types for supported databases. For databases that don’t support timezones, datetime data will be stored as timezone unaware local timestamps. See the Datetime data types for implications (GH 9086).
- to_timedelta() now supports ISO-formatted timedelta strings (GH 21877)
- Series and DataFrame now support Iterable objects in the constructor (GH 2193)
- DatetimeIndex has gained the DatetimeIndex.timetz attribute. This returns the local time with timezone information. (GH 21358)
- round(), ceil(), and floor() for DatetimeIndex and Timestamp now support an ambiguous argument for handling datetimes that are rounded to ambiguous times (GH 18946) and a nonexistent argument for handling datetimes that are rounded to nonexistent times. See Nonexistent times when localizing (GH 22647)
- The result of resample() is now iterable similar to groupby() (GH 15314).
- Series.resample() and DataFrame.resample() have gained the pandas.core.resample.Resampler.quantile() method (GH 15023).
- DataFrame.resample() and Series.resample() with a PeriodIndex will now respect the base argument in the same fashion as with a DatetimeIndex. (GH 23882)
- pandas.api.types.is_list_like() has gained a keyword allow_sets which is True by default; if False, all instances of set will not be considered “list-like” anymore (GH 23061)
- Index.to_frame() now supports overriding column name(s) (GH 22580).
- Categorical.from_codes() can now take a dtype parameter as an alternative to passing categories and ordered (GH 24398).
- New attribute __git_version__ will return the git commit sha of the current build (GH 21295).
- Compatibility with Matplotlib 3.0 (GH 22790).
- Added Interval.overlaps(), arrays.IntervalArray.overlaps(), and IntervalIndex.overlaps() for determining overlaps between interval-like objects (GH 21998)
- read_fwf() now accepts the keyword infer_nrows (GH 15138).
- to_parquet() now supports writing a DataFrame as a directory of parquet files partitioned by a subset of the columns when engine = 'pyarrow' (GH 23283)
- Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have gained the nonexistent argument for alternative handling of nonexistent times. See Nonexistent times when localizing (GH 8917, GH 24466)
- Index.difference(), Index.intersection(), Index.union(), and Index.symmetric_difference() now have an optional sort parameter to control whether the results should be sorted if possible (GH 17839, GH 24471)
- read_excel() now accepts usecols as a list of column names or a callable (GH 18273)
- MultiIndex.to_flat_index() has been added to flatten multiple levels into a single-level Index object.
- DataFrame.to_stata() and pandas.io.stata.StataWriter117 can write mixed string columns to the Stata strl format (GH 23633)
- DataFrame.between_time() and DataFrame.at_time() have gained the axis parameter (GH 8839)
- DataFrame.to_records() now accepts index_dtypes and column_dtypes parameters to allow different data types in stored column and index records (GH 18146)
- IntervalIndex has gained the is_overlapping attribute to indicate if the IntervalIndex contains any overlapping intervals (GH 23309)
- pandas.DataFrame.to_sql() has gained the method argument to control the SQL insertion clause. See the insertion method section in the documentation. (GH 8953)
- DataFrame.corrwith() now supports Spearman’s rank correlation and Kendall’s tau, as well as callable correlation methods. (GH 21925)
- DataFrame.to_json(), DataFrame.to_csv(), DataFrame.to_pickle(), and other export methods now support a tilde (~) in the path argument. (GH 23473)
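Two of the enhancements above are easy to demonstrate: shift() with fill_value, and keep='all' on nlargest(). A sketch:

```python
import pandas as pd

# shift() can now fill the vacated slots with a value instead of NaN,
# so the dtype is preserved
s = pd.Series([1, 2, 3])
shifted = s.shift(1, fill_value=0)   # [0, 1, 2], still int64

# keep='all' retains every tie for the nth largest value
ties = pd.Series([3, 3, 2])
top = ties.nlargest(1, keep='all')   # both 3s are kept
```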
Backwards incompatible API changes#
pandas 0.24.0 includes a number of API breaking changes.
Increased minimum versions for dependencies#
We have updated our minimum supported versions of dependencies (GH 21242, GH 18742, GH 23774, GH 24767). If installed, we now require:
| Package | Minimum Version | Required | 
|---|---|---|
| numpy | 1.12.0 | X | 
| bottleneck | 1.2.0 | |
| fastparquet | 0.2.1 | |
| matplotlib | 2.0.0 | |
| numexpr | 2.6.1 | |
| pandas-gbq | 0.8.0 | |
| pyarrow | 0.9.0 | |
| pytables | 3.4.2 | |
| scipy | 0.18.1 | |
| xlrd | 1.0.0 | |
| pytest (dev) | 3.6 | |
Additionally we no longer depend on feather-format for feather based storage
and replaced it with references to pyarrow (GH 21639 and GH 23053).
os.linesep is used for line_terminator of DataFrame.to_csv#
DataFrame.to_csv() now uses os.linesep rather than '\n'
for the default line terminator (GH 20353).
This change only matters when running on Windows, where '\r\n' was used for the line terminator
even when '\n' was passed in line_terminator.
Previous behavior on Windows:
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})
In [2]: # When passing file PATH to to_csv,
   ...: # line_terminator does not work, and csv is saved with '\r\n'.
   ...: # Also, this converts all '\n's in the data to '\r\n'.
   ...: data.to_csv("test.csv", index=False, line_terminator='\n')
In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'
In [4]: # When passing file OBJECT with newline option to
   ...: # to_csv, line_terminator works.
   ...: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False, line_terminator='\n')
In [5]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
New behavior on Windows:
Passing line_terminator explicitly sets the line terminator to that character.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})
In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')
In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
On Windows, the value of os.linesep is '\r\n', so if line_terminator is not
set, '\r\n' is used for line terminator.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})
In [2]: data.to_csv("test.csv", index=False)
In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
For file objects, specifying newline is not sufficient to set the line terminator.
You must pass in the line_terminator explicitly, even in this case.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})
In [2]: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False)
In [3]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
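The rule above can be checked portably: with no line_terminator argument, the bytes on disk end in os.linesep. A sketch (on POSIX os.linesep is '\n'; on Windows it is '\r\n'):

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({'a': [1], 'b': [2]})

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'test.csv')
    df.to_csv(path, index=False)       # no line_terminator -> os.linesep
    with open(path, 'rb') as f:
        content = f.read()

# content is b'a,b' + os.linesep + b'1,2' + os.linesep, encoded as bytes
```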
Proper handling of np.nan in a string data-typed column with the Python engine#
There was bug in read_excel() and read_csv() with the Python
engine, where missing values turned to 'nan' with dtype=str and
na_filter=True. Now, these missing values are converted to the string
missing indicator, np.nan. (GH 20377)
Previous behavior:
In [5]: data = 'a,b,c\n1,,3\n4,5,6'
In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
In [7]: df.loc[0, 'b']
Out[7]:
'nan'
New behavior:
In [53]: data = 'a,b,c\n1,,3\n4,5,6'
In [54]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
In [55]: df.loc[0, 'b']
Out[55]: nan
Notice how we now output np.nan itself instead of a stringified form of it.
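A sketch of the new behavior, checking that the missing cell really is np.nan (a float that is not equal to itself) rather than the string 'nan':

```python
from io import StringIO
import pandas as pd

data = 'a,b,c\n1,,3\n4,5,6'
df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)

val = df.loc[0, 'b']
is_real_nan = isinstance(val, float) and val != val   # np.nan, not 'nan'
```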
Parsing datetime strings with timezone offsets#
Previously, parsing datetime strings with UTC offsets with to_datetime()
or DatetimeIndex would automatically convert the datetime to UTC
without timezone localization. This was inconsistent with parsing the same
datetime string with Timestamp, which would preserve the UTC
offset in the tz attribute. Now, to_datetime() preserves the UTC
offset in the tz attribute when all the datetime strings have the same
UTC offset (GH 17697, GH 11736, GH 22457).
Previous behavior:
In [2]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[2]: Timestamp('2015-11-18 10:00:00')
In [3]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[3]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')
# Different UTC offsets would automatically convert the datetimes to UTC (without a UTC timezone)
In [4]: pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
Out[4]: DatetimeIndex(['2015-11-18 10:00:00', '2015-11-18 10:00:00'], dtype='datetime64[ns]', freq=None)
New behavior:
In [56]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[56]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
In [57]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[57]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
Parsing datetime strings with the same UTC offset will preserve the UTC offset in the tz attribute:
In [58]: pd.to_datetime(["2015-11-18 15:30:00+05:30"] * 2)
Out[58]: DatetimeIndex(['2015-11-18 15:30:00+05:30', '2015-11-18 15:30:00+05:30'], dtype='datetime64[ns, UTC+05:30]', freq=None)
Parsing datetime strings with different UTC offsets will now create an Index of
datetime.datetime objects with different UTC offsets:
In [59]: idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
                               "2015-11-18 16:30:00+06:30"])
In[60]: idx
Out[60]: Index([2015-11-18 15:30:00+05:30, 2015-11-18 16:30:00+06:30], dtype='object')
In[61]: idx[0]
Out[61]: Timestamp('2015-11-18 15:30:00+0530', tz='UTC+05:30')
In[62]: idx[1]
Out[62]: Timestamp('2015-11-18 16:30:00+0630', tz='UTC+06:30')
Passing utc=True will mimic the previous behavior but will correctly indicate
that the dates have been converted to UTC:
In [59]: pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                 "2015-11-18 16:30:00+06:30"], utc=True)
   ....: 
Out[59]: DatetimeIndex(['2015-11-18 10:00:00+00:00', '2015-11-18 10:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None)
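A sketch of the utc=True escape hatch for mixed offsets; the result is tz-aware and explicitly in UTC:

```python
import pandas as pd

idx = pd.to_datetime(['2015-11-18 15:30:00+05:30',
                      '2015-11-18 16:30:00+06:30'], utc=True)

# Both strings denote the same instant: 10:00 UTC
first = str(idx[0])
```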
Parsing mixed-timezones with read_csv()#
read_csv() no longer silently converts mixed-timezone columns to UTC (GH 24987).
Previous behavior
>>> import io
>>> content = """\
... a
... 2000-01-01T00:00:00+05:00
... 2000-01-01T00:00:00+06:00"""
>>> df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
>>> df.a
0   1999-12-31 19:00:00
1   1999-12-31 18:00:00
Name: a, dtype: datetime64[ns]
New behavior
In[64]: import io
In[65]: content = """\
   ...: a
   ...: 2000-01-01T00:00:00+05:00
   ...: 2000-01-01T00:00:00+06:00"""
In[66]: df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
In[67]: df.a
Out[67]:
0   2000-01-01 00:00:00+05:00
1   2000-01-01 00:00:00+06:00
Name: a, Length: 2, dtype: object
As can be seen, the dtype is object; each value in the column is a string.
To convert the strings to an array of datetimes, use the date_parser argument:
In [3]: df = pd.read_csv(
   ...:     io.StringIO(content),
   ...:     parse_dates=['a'],
   ...:     date_parser=lambda col: pd.to_datetime(col, utc=True),
   ...: )
In [4]: df.a
Out[4]:
0   1999-12-31 19:00:00+00:00
1   1999-12-31 18:00:00+00:00
Name: a, dtype: datetime64[ns, UTC]
See Parsing datetime strings with timezone offsets for more.
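An alternative that avoids date_parser entirely is to read the column as plain strings and convert afterwards; this is a sketch, not the documented pattern above:

```python
from io import StringIO
import pandas as pd

content = 'a\n2000-01-01T00:00:00+05:00\n2000-01-01T00:00:00+06:00'

# Read without parse_dates: column 'a' stays as strings
df = pd.read_csv(StringIO(content))

# Convert after the fact, collapsing the mixed offsets to UTC
out = pd.to_datetime(df['a'], utc=True)
```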
Time values in dt.end_time and to_timestamp(how='end')#
The time values in Period and PeriodIndex objects are now set
to ‘23:59:59.999999999’ when calling Series.dt.end_time, Period.end_time,
PeriodIndex.end_time, Period.to_timestamp() with how='end',
or PeriodIndex.to_timestamp() with how='end' (GH 17157)
Previous behavior:
In [2]: p = pd.Period('2017-01-01', 'D')
In [3]: pi = pd.PeriodIndex([p])
In [4]: pd.Series(pi).dt.end_time[0]
Out[4]: Timestamp('2017-01-01 00:00:00')
In [5]: p.end_time
Out[5]: Timestamp('2017-01-01 23:59:59.999999999')
New behavior:
Calling Series.dt.end_time will now result in a time of ‘23:59:59.999999999’ as
is the case with Period.end_time, for example
In [60]: p = pd.Period('2017-01-01', 'D')
In [61]: pi = pd.PeriodIndex([p])
In [62]: pd.Series(pi).dt.end_time[0]
Out[62]: Timestamp('2017-01-01 23:59:59.999999999')
In [63]: p.end_time
Out[63]: Timestamp('2017-01-01 23:59:59.999999999')
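The same applies to to_timestamp(how='end'); a sketch with a monthly period:

```python
import pandas as pd

p = pd.Period('2017-06', 'M')

# The end-of-period timestamp now lands on the last instant of the period
end = p.to_timestamp(how='end')
```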
Series.unique for timezone-aware data#
The return type of Series.unique() for datetime with timezone values has changed
from a numpy.ndarray of Timestamp objects to an arrays.DatetimeArray (GH 24024).
In [64]: ser = pd.Series([pd.Timestamp('2000', tz='UTC'),
   ....:                  pd.Timestamp('2000', tz='UTC')])
   ....: 
Previous behavior:
In [3]: ser.unique()
Out[3]: array([Timestamp('2000-01-01 00:00:00+0000', tz='UTC')], dtype=object)
New behavior:
In [65]: ser.unique()
Out[65]: 
<DatetimeArray>
['2000-01-01 00:00:00+00:00']
Length: 1, dtype: datetime64[ns, UTC]
Sparse data structure refactor#
SparseArray, the array backing SparseSeries and the columns in a SparseDataFrame,
is now an extension array (GH 21978, GH 19056, GH 22835).
To conform to this interface and for consistency with the rest of pandas, some API breaking
changes were made:
- SparseArray is no longer a subclass of numpy.ndarray. To convert a SparseArray to a NumPy array, use numpy.asarray().
- SparseArray.dtype and SparseSeries.dtype are now instances of SparseDtype, rather than np.dtype. Access the underlying dtype with SparseDtype.subtype.
- numpy.asarray(sparse_array) now returns a dense array with all the values, not just the non-fill-value values (GH 14167)
- SparseArray.take now matches the API of pandas.api.extensions.ExtensionArray.take() (GH 19506):
  - The default value of allow_fill has changed from False to True.
  - The out and mode parameters are no longer accepted (previously, this raised if they were specified).
  - Passing a scalar for indices is no longer allowed.
- The result of concat() with a mix of sparse and dense Series is a Series with sparse values, rather than a SparseSeries.
- SparseDataFrame.combine and DataFrame.combine_first no longer support combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
- Setting SparseArray.fill_value to a fill value with a different dtype is now allowed.
- DataFrame[column] is now a Series with sparse values, rather than a SparseSeries, when slicing a single column with sparse values (GH 23559).
- The result of Series.where() is now a Series with sparse values, like with other extension arrays (GH 24077)
Some new warnings are issued for operations that require or are likely to materialize a large dense array:
- An errors.PerformanceWarning is issued when using fillna with a method, as a dense array is constructed to create the filled array. Filling with a value is the efficient way to fill a sparse array.
- An errors.PerformanceWarning is now issued when concatenating sparse Series with differing fill values. The fill value from the first sparse array continues to be used.
In addition to these API breaking changes, many Performance Improvements and Bug Fixes have been made.
Finally, a Series.sparse accessor was added to provide sparse-specific methods like Series.sparse.from_coo().
In [66]: s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')
In [67]: s.sparse.density
Out[67]: 0.6
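The densification rule mentioned above (numpy.asarray returns all the values, not just the stored ones) can be sketched directly on a SparseArray:

```python
import numpy as np
import pandas as pd

sparse = pd.arrays.SparseArray([0, 0, 1, 1])

dense = np.asarray(sparse)   # fill values included, not just the stored 1s
density = sparse.density     # fraction of non-fill values: 0.5
```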
get_dummies() always returns a DataFrame#
Previously, when sparse=True was passed to get_dummies(), the return value could be either
a DataFrame or a SparseDataFrame, depending on whether all or just a subset
of the columns were dummy-encoded. Now, a DataFrame is always returned (GH 24284).
Previous behavior
The first get_dummies() returned a DataFrame because the column A
was not dummy-encoded. When just ["B", "C"] were passed to get_dummies(),
all the columns were dummy-encoded, and a SparseDataFrame was returned.
In [2]: df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})
In [3]: type(pd.get_dummies(df, sparse=True))
Out[3]: pandas.core.frame.DataFrame
In [4]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[4]: pandas.core.sparse.frame.SparseDataFrame
New behavior
Now, the return type is consistently a DataFrame.
In [68]: type(pd.get_dummies(df, sparse=True))
Out[68]: pandas.core.frame.DataFrame
In [69]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[69]: pandas.core.frame.DataFrame
Note
There’s no difference in memory usage between a SparseDataFrame
and a DataFrame with sparse values. The memory usage will
be the same as in the previous version of pandas.
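A minimal check of the new contract: both calls below now return a plain DataFrame, whose dummy columns hold sparse values.

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": ["a", "b"], "C": ["a", "a"]})

# Whether or not all columns are dummy-encoded, a plain DataFrame comes back.
full = pd.get_dummies(df, sparse=True)
subset = pd.get_dummies(df[["B", "C"]], sparse=True)
print(type(full).__name__, type(subset).__name__)  # DataFrame DataFrame
```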
Raise ValueError in DataFrame.to_dict(orient='index')#
DataFrame.to_dict() now raises a ValueError when used with
orient='index' and a non-unique index, instead of silently losing data (GH 22801)
In [70]: df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])
In [71]: df
Out[71]: 
   a     b
A  1  0.50
A  2  0.75
[2 rows x 2 columns]
In [72]: df.to_dict(orient='index')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[72], line 1
----> 1 df.to_dict(orient='index')
ValueError: DataFrame index must be unique for orient='index'.
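If the index is genuinely non-unique, one workaround (a sketch, not the only option) is orient='records', which ignores the index entirely and so keeps every row:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])

# orient='index' raises ValueError here; orient='records' keeps every row.
records = df.to_dict(orient='records')
print(records)  # [{'a': 1, 'b': 0.5}, {'a': 2, 'b': 0.75}]
```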
Tick DateOffset normalize restrictions#
Creating a Tick object (Day, Hour, Minute,
Second, Milli, Micro, Nano) with
normalize=True is no longer supported.  This prevents unexpected behavior
where addition could fail to be monotone or associative.  (GH 21427)
Previous behavior:
In [2]: ts = pd.Timestamp('2018-06-11 18:01:14')
In [3]: ts
Out[3]: Timestamp('2018-06-11 18:01:14')
In [4]: tic = pd.offsets.Hour(n=2, normalize=True)
   ...:
In [5]: tic
Out[5]: <2 * Hours>
In [6]: ts + tic
Out[6]: Timestamp('2018-06-11 00:00:00')
In [7]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[7]: False
New behavior:
In [73]: ts = pd.Timestamp('2018-06-11 18:01:14')
In [74]: tic = pd.offsets.Hour(n=2)
In [75]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[75]: True
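If the old rounding behavior is actually wanted, the same result can be obtained explicitly, keeping addition itself well behaved, by normalizing after the offset is applied:

```python
import pandas as pd

ts = pd.Timestamp('2018-06-11 18:01:14')
tic = pd.offsets.Hour(n=2)

# Apply the offset first, then round down to midnight explicitly.
result = (ts + tic).normalize()
print(result)  # 2018-06-11 00:00:00
```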
Period subtraction#
Subtraction of a Period from another Period will give a DateOffset
instead of an integer (GH 21314)
Previous behavior:
In [2]: june = pd.Period('June 2018')
In [3]: april = pd.Period('April 2018')
In [4]: june - april
Out[4]: 2
New behavior:
In [76]: june = pd.Period('June 2018')
In [77]: april = pd.Period('April 2018')
In [78]: june - april
Out[78]: <2 * MonthEnds>
Similarly, subtraction of a Period from a PeriodIndex will now return
an Index of DateOffset objects instead of an Int64Index
Previous behavior:
In [2]: pi = pd.period_range('June 2018', freq='M', periods=3)
In [3]: pi - pi[0]
Out[3]: Int64Index([0, 1, 2], dtype='int64')
New behavior:
In [79]: pi = pd.period_range('June 2018', freq='M', periods=3)
In [80]: pi - pi[0]
Out[80]: Index([<0 * MonthEnds>, <MonthEnd>, <2 * MonthEnds>], dtype='object')
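The integer count is still recoverable from the returned offset via its n attribute, and the offset round-trips with Period arithmetic:

```python
import pandas as pd

june = pd.Period('June 2018')
april = pd.Period('April 2018')

offset = june - april          # a DateOffset such as <2 * MonthEnds>
print(offset.n)                # 2
print(april + offset == june)  # True
```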
Addition/subtraction of NaN from DataFrame#
Adding or subtracting NaN from a DataFrame column with
timedelta64[ns] dtype will now raise a TypeError instead of returning
all-NaT.  This is for compatibility with TimedeltaIndex and
Series behavior (GH 22163)
In [81]: df = pd.DataFrame([pd.Timedelta(days=1)])
In [82]: df
Out[82]: 
       0
0 1 days
[1 rows x 1 columns]
Previous behavior:
In [4]: df = pd.DataFrame([pd.Timedelta(days=1)])
In [5]: df - np.nan
Out[5]:
    0
0 NaT
New behavior:
In [2]: df - np.nan
...
TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'
DataFrame comparison operations broadcasting changes#
Previously, the broadcasting behavior of DataFrame comparison
operations (==, !=, …) was inconsistent with the behavior of
arithmetic operations (+, -, …).  The behavior of the comparison
operations has been changed to match the arithmetic operations in these cases.
(GH 22880)
The affected cases are:
- operating against a 2-dimensional np.ndarray with either 1 row or 1 column will now broadcast the same way a np.ndarray would (GH 23000).
- a list or tuple with length matching the number of rows in the DataFrame will now raise ValueError instead of operating column-by-column (GH 22880).
- a list or tuple with length matching the number of columns in the DataFrame will now operate row-by-row instead of raising ValueError (GH 22880).
In [83]: arr = np.arange(6).reshape(3, 2)
In [84]: df = pd.DataFrame(arr)
In [85]: df
Out[85]: 
   0  1
0  0  1
1  2  3
2  4  5
[3 rows x 2 columns]
Previous behavior:
In [5]: df == arr[[0], :]
    ...: # comparison previously broadcast where arithmetic would raise
Out[5]:
       0      1
0   True   True
1  False  False
2  False  False
In [6]: df + arr[[0], :]
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
In [7]: df == (1, 2)
    ...: # length matches number of columns;
    ...: # comparison previously raised where arithmetic would broadcast
...
ValueError: Invalid broadcasting comparison [(1, 2)] with block values
In [8]: df + (1, 2)
Out[8]:
   0  1
0  1  3
1  3  5
2  5  7
In [9]: df == (1, 2, 3)
    ...:  # length matches number of rows
    ...:  # comparison previously broadcast where arithmetic would raise
Out[9]:
       0      1
0  False   True
1   True  False
2  False  False
In [10]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3
New behavior:
# Comparison operations and arithmetic operations both broadcast.
In [86]: df == arr[[0], :]
Out[86]: 
       0      1
0   True   True
1  False  False
2  False  False
[3 rows x 2 columns]
In [87]: df + arr[[0], :]
Out[87]: 
   0  1
0  0  2
1  2  4
2  4  6
[3 rows x 2 columns]
# Comparison operations and arithmetic operations both broadcast.
In [88]: df == (1, 2)
Out[88]: 
       0      1
0  False  False
1  False  False
2  False  False
[3 rows x 2 columns]
In [89]: df + (1, 2)
Out[89]: 
   0  1
0  1  3
1  3  5
2  5  7
[3 rows x 2 columns]
# Comparison operations and arithmetic operations both raise ValueError.
In [6]: df == (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3
In [7]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3
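To deliberately compare a row-length sequence column-by-column (the behavior a plain list used to trigger), align it explicitly with a Series and axis=0:

```python
import numpy as np
import pandas as pd

arr = np.arange(6).reshape(3, 2)
df = pd.DataFrame(arr)

# Explicit column-by-column comparison against a row-length sequence.
result = df.eq(pd.Series([1, 2, 3]), axis=0)
print(result.values.tolist())  # [[False, True], [True, False], [False, False]]
```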
DataFrame arithmetic operations broadcasting changes#
DataFrame arithmetic operations when operating with 2-dimensional
np.ndarray objects now broadcast in the same way as np.ndarray
broadcast.  (GH 23000)
In [90]: arr = np.arange(6).reshape(3, 2)
In [91]: df = pd.DataFrame(arr)
In [92]: df
Out[92]: 
   0  1
0  0  1
1  2  3
2  4  5
[3 rows x 2 columns]
Previous behavior:
In [5]: df + arr[[0], :]   # 1 row, 2 columns
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
In [6]: df + arr[:, [1]]   # 1 column, 3 rows
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (3, 1)
New behavior:
In [93]: df + arr[[0], :]   # 1 row, 2 columns
Out[93]: 
   0  1
0  0  2
1  2  4
2  4  6
[3 rows x 2 columns]
In [94]: df + arr[:, [1]]   # 1 column, 3 rows
Out[94]: 
   0   1
0  1   2
1  5   6
2  9  10
[3 rows x 2 columns]
Series and Index data-dtype incompatibilities#
Series and Index constructors now raise when the
data is incompatible with a passed dtype= (GH 15832)
Previous behavior:
In [4]: pd.Series([-1], dtype="uint64")
Out[4]:
0    18446744073709551615
dtype: uint64
New behavior:
In [4]: pd.Series([-1], dtype="uint64")
Out[4]:
...
OverflowError: Trying to coerce negative values to unsigned integers
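A small sanity check of the new constructor behavior; values that genuinely fit the unsigned range still construct as before:

```python
import pandas as pd

# Negative values no longer wrap around silently for unsigned dtypes.
try:
    pd.Series([-1], dtype="uint64")
except (OverflowError, ValueError) as err:  # OverflowError in 0.24
    print(type(err).__name__)

# In-range values construct as before.
print(pd.Series([2**64 - 1], dtype="uint64").iloc[0])  # 18446744073709551615
```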
Concatenation changes#
Calling pandas.concat() on a Categorical of ints with NA values now
causes them to be processed as objects when concatenating with anything
other than another Categorical of ints (GH 19214)
In [95]: s = pd.Series([0, 1, np.nan])
In [96]: c = pd.Series([0, 1, np.nan], dtype="category")
Previous behavior
In [3]: pd.concat([s, c])
Out[3]:
0    0.0
1    1.0
2    NaN
0    0.0
1    1.0
2    NaN
dtype: float64
New behavior
In [97]: pd.concat([s, c])
Out[97]: 
0    0.0
1    1.0
2    NaN
0    0.0
1    1.0
2    NaN
Length: 6, dtype: float64
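By contrast, concatenating two Categoricals with the same categories still preserves the categorical dtype; only the mixed case falls back to a common dtype:

```python
import numpy as np
import pandas as pd

s = pd.Series([0, 1, np.nan])
c = pd.Series([0, 1, np.nan], dtype="category")

mixed = pd.concat([s, c])   # falls back to float64
both = pd.concat([c, c])    # stays categorical
print(mixed.dtype, both.dtype)
```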
Datetimelike API changes#
- For DatetimeIndex and TimedeltaIndex with a non-None freq attribute, addition or subtraction of an integer-dtyped array or Index will return an object of the same class (GH 19959)
- DateOffset objects are now immutable. Attempting to alter one of these will now raise AttributeError (GH 21341)
- PeriodIndex subtraction of another PeriodIndex will now return an object-dtype Index of DateOffset objects instead of raising a TypeError (GH 20049)
- cut() and qcut() now return DatetimeIndex or TimedeltaIndex bins when the input is datetime or timedelta dtype respectively and retbins=True (GH 19891)
- DatetimeIndex.to_period() and Timestamp.to_period() will issue a warning when timezone information will be lost (GH 21333)
- PeriodIndex.tz_convert() and PeriodIndex.tz_localize() have been removed (GH 21781)
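As one concrete illustration, the cut() change means datetime bins now come back with their datetime type intact:

```python
import pandas as pd

dates = pd.Series(pd.date_range('2018-01-01', periods=4))

# With retbins=True and datetime input, the returned bins are a
# DatetimeIndex rather than a plain array of integers.
result, bins = pd.cut(dates, bins=2, retbins=True)
print(type(bins).__name__)  # DatetimeIndex
```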
Other API changes#
- A newly constructed empty DataFrame with integer as the dtype will now only be cast to float64 if index is specified (GH 22858)
- Series.str.cat() will now raise if others is a set (GH 23009)
- Passing scalar values to DatetimeIndex or TimedeltaIndex will now raise TypeError instead of ValueError (GH 23539)
- max_rows and max_cols parameters removed from HTMLFormatter since truncation is handled by DataFrameFormatter (GH 23818)
- read_csv() will now raise a ValueError if a column with missing values is declared as having dtype bool (GH 20591)
- The column order of the resultant DataFrame from MultiIndex.to_frame() is now guaranteed to match the MultiIndex.names order. (GH 22420)
- Incorrectly passing a DatetimeIndex to MultiIndex.from_tuples(), rather than a sequence of tuples, now raises a TypeError rather than a ValueError (GH 24024)
- pd.offsets.generate_range() argument time_rule has been removed; use offset instead (GH 24157)
- In 0.23.x, pandas would raise a ValueError on a merge of a numeric column (e.g. int dtyped column) and an object dtyped column (GH 9780). We have re-enabled the ability to merge object and other dtypes; pandas will still raise on a merge between a numeric and an object dtyped column that is composed only of strings (GH 21681)
- Accessing a level of a MultiIndex with a duplicate name (e.g. in get_level_values()) now raises a ValueError instead of a KeyError (GH 21678).
- Invalid construction of IntervalDtype will now always raise a TypeError rather than a ValueError if the subdtype is invalid (GH 21185)
- Trying to reindex a DataFrame with a non-unique MultiIndex now raises a ValueError instead of an Exception (GH 21770)
- Index subtraction will attempt to operate element-wise instead of raising TypeError (GH 19369)
- pandas.io.formats.style.Styler supports a number-format property when using to_excel() (GH 22015)
- DataFrame.corr() and Series.corr() now raise a ValueError along with a helpful error message instead of a KeyError when supplied with an invalid method (GH 22298)
- shift() will now always return a copy, instead of the previous behaviour of returning self when shifting by 0 (GH 22397)
- DataFrame.set_index() now gives a better (and less frequent) KeyError, raises a ValueError for incorrect types, and will not fail on duplicate column names with drop=True. (GH 22484)
- Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH 22784)
- DateOffset attribute _cacheable and method _should_cache have been removed (GH 23118)
- Series.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH 23801).
- Categorical.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH 23466).
- Categorical.searchsorted() now raises a KeyError rather than a ValueError if a searched-for key is not found in its categories (GH 23466).
- Index.hasnans() and Series.hasnans() now always return a python boolean. Previously, a python or a numpy boolean could be returned, depending on circumstances (GH 23294).
- The order of the arguments of DataFrame.to_html() and DataFrame.to_string() is rearranged to be consistent with each other. (GH 23614)
- CategoricalIndex.reindex() now raises a ValueError if the target index is non-unique and not equal to the current index. It previously only raised if the target index was not of a categorical dtype (GH 23963).
- Series.to_list() and Index.to_list() are now aliases of Series.tolist and Index.tolist respectively (GH 8826)
- The result of SparseSeries.unstack is now a DataFrame with sparse values, rather than a SparseDataFrame (GH 24372).
- DatetimeIndex and TimedeltaIndex no longer ignore the dtype precision. Passing a non-nanosecond resolution dtype will raise a ValueError (GH 24753)
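As one concrete illustration from the list above, searchsorted() with a scalar now yields a scalar position rather than a length-1 array:

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3])

pos = s.searchsorted(2)
print(int(pos), np.ndim(pos))  # a scalar (ndim 0), not a 1-element array
```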
Extension type changes#
Equality and hashability
pandas now requires that extension dtypes be hashable (i.e. the respective
ExtensionDtype objects; hashability is not a requirement for the values
of the corresponding ExtensionArray). The base class implements
a default __eq__ and __hash__. If you have a parametrized dtype, you should
update the ExtensionDtype._metadata tuple to match the signature of your
__init__ method. See pandas.api.extensions.ExtensionDtype for more (GH 22476).
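A minimal sketch of such a parametrized dtype (UnitDtype is hypothetical, not a pandas class): listing the __init__ parameter in _metadata is what lets the base class supply working __eq__ and __hash__.

```python
from pandas.api.extensions import ExtensionDtype

class UnitDtype(ExtensionDtype):
    """Hypothetical parametrized dtype, for illustration only."""
    name = "unit"
    type = object
    # The __init__ parameter appears in _metadata, so the inherited
    # __eq__/__hash__ compare and hash instances by their unit.
    _metadata = ("unit",)

    def __init__(self, unit):
        self.unit = unit

print(UnitDtype("m") == UnitDtype("m"))  # True
print(UnitDtype("m") == UnitDtype("s"))  # False
```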
New and changed methods
- dropna() has been added (GH 21185)
- repeat() has been added (GH 24349)
- The ExtensionArray constructor _from_sequence now takes the keyword arg copy=False (GH 21185)
- pandas.api.extensions.ExtensionArray.shift() added as part of the basic ExtensionArray interface (GH 22387).
- searchsorted() has been added (GH 24350)
- Support for reduction operations such as sum, mean via opt-in base class method override (GH 22762)
- ExtensionArray.isna() is allowed to return an ExtensionArray (GH 22325).
Dtype changes
- ExtensionDtype has gained the ability to instantiate from string dtypes, e.g. decimal would instantiate a registered DecimalDtype; furthermore the ExtensionDtype has gained the method construct_array_type (GH 21185)
- Added ExtensionDtype._is_numeric for controlling whether an extension dtype is considered numeric (GH 22290).
- Added pandas.api.types.register_extension_dtype() to register an extension type with pandas (GH 22664)
- Updated the .type attribute for PeriodDtype, DatetimeTZDtype, and IntervalDtype to be instances of the dtype (Period, Timestamp, and Interval respectively) (GH 22938)
Operator support
A Series based on an ExtensionArray now supports arithmetic and comparison
operators (GH 19577). There are two approaches for providing operator support for an ExtensionArray:
- Define each of the operators on your ExtensionArray subclass.
- Use an operator implementation from pandas that depends on operators that are already defined on the underlying elements (scalars) of the ExtensionArray.
See the ExtensionArray Operator Support documentation section for details on both ways of adding operator support.
Other changes
- A default repr for pandas.api.extensions.ExtensionArray is now provided (GH 23601).
- ExtensionArray._formatting_values() is deprecated. Use ExtensionArray._formatter instead. (GH 23601)
- An ExtensionArray with a boolean dtype now works correctly as a boolean indexer. pandas.api.types.is_bool_dtype() now properly considers them boolean (GH 22326)
Bug fixes
- Bug in Series.get() for Series using ExtensionArray and integer index (GH 21257)
- Series.combine() works correctly with ExtensionArray inside of Series (GH 20825)
- Series.combine() with scalar argument now works for any function type (GH 21248)
- Series.astype() and DataFrame.astype() now dispatch to ExtensionArray.astype() (GH 21185).
- Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH 22784)
- Bug when concatenating multiple Series with different extension dtypes not casting to object dtype (GH 22994)
- Series backed by an ExtensionArray now work with util.hash_pandas_object() (GH 23066)
- DataFrame.stack() no longer converts to object dtype for DataFrames where each column has the same extension dtype. The output Series will have the same dtype as the columns (GH 23077).
- Series.unstack() and DataFrame.unstack() no longer convert extension arrays to object-dtype ndarrays. Each column in the output DataFrame will now have the same dtype as the input (GH 23077).
- Bug in DataFrame.groupby() when aggregating on an ExtensionArray: it was not returning the actual ExtensionArray dtype (GH 23227).
- Bug in pandas.merge() when merging on an extension array-backed column (GH 23020).
Deprecations#
- MultiIndex.labels has been deprecated and replaced by MultiIndex.codes. The functionality is unchanged. The new name better reflects the nature of these codes and makes the MultiIndex API more similar to the API for CategoricalIndex (GH 13443). As a consequence, other uses of the name labels in MultiIndex have also been deprecated and replaced with codes:
  - You should initialize a MultiIndex instance using a parameter named codes rather than labels.
  - MultiIndex.set_labels has been deprecated in favor of MultiIndex.set_codes().
  - For method MultiIndex.copy(), the labels parameter has been deprecated and replaced by a codes parameter.
- DataFrame.to_stata(), read_stata(), StataReader and StataWriter have deprecated the encoding argument. The encoding of a Stata dta file is determined by the file type and cannot be changed (GH 21244)
- MultiIndex.to_hierarchical() is deprecated and will be removed in a future version (GH 21613)
- Series.ptp() is deprecated. Use numpy.ptp instead (GH 21614)
- Series.compress() is deprecated. Use Series[condition] instead (GH 18262)
- The signature of Series.to_csv() has been uniformed to that of DataFrame.to_csv(): the name of the first argument is now path_or_buf, the order of subsequent arguments has changed, and the header argument now defaults to True. (GH 19715)
- Categorical.from_codes() has deprecated providing float values for the codes argument. (GH 21767)
- pandas.read_table() is deprecated. Instead, use read_csv() passing sep='\t' if necessary. This deprecation has been removed in 0.25.0. (GH 21948)
- Series.str.cat() has deprecated using arbitrary list-likes within list-likes. A list-like container may still contain many Series, Index or 1-dimensional np.ndarray, or alternatively, only scalar values. (GH 21950)
- FrozenNDArray.searchsorted() has deprecated the v parameter in favor of value (GH 14645)
- DatetimeIndex.shift() and PeriodIndex.shift() now accept a periods argument instead of n for consistency with Index.shift() and Series.shift(). Using n throws a deprecation warning (GH 22458, GH 22912)
- The fastpath keyword of the different Index constructors is deprecated (GH 23110).
- Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have deprecated the errors argument in favor of the nonexistent argument (GH 8917)
- The class FrozenNDArray has been deprecated. When unpickling, FrozenNDArray will be unpickled to np.ndarray once this class is removed (GH 9031)
- The methods DataFrame.update() and Panel.update() have deprecated the raise_conflict=False|True keyword in favor of errors='ignore'|'raise' (GH 23585)
- The methods Series.str.partition() and Series.str.rpartition() have deprecated the pat keyword in favor of sep (GH 22676)
- Deprecated the nthreads keyword of pandas.read_feather() in favor of use_threads to reflect the changes in pyarrow>=0.11.0. (GH 23053)
- pandas.read_excel() has deprecated accepting usecols as an integer. Please pass in a list of ints from 0 to usecols inclusive instead (GH 23527)
- Constructing a TimedeltaIndex from data with datetime64-dtyped data is deprecated, and will raise TypeError in a future version (GH 23539)
- Constructing a DatetimeIndex from data with timedelta64-dtyped data is deprecated, and will raise TypeError in a future version (GH 23675)
- The keep_tz=False option (the default) of the keep_tz keyword of DatetimeIndex.to_series() is deprecated (GH 17832).
- Timezone converting a tz-aware datetime.datetime or Timestamp with Timestamp and the tz argument is now deprecated. Instead, use Timestamp.tz_convert() (GH 23579)
- pandas.api.types.is_period() is deprecated in favor of pandas.api.types.is_period_dtype (GH 23917)
- pandas.api.types.is_datetimetz() is deprecated in favor of pandas.api.types.is_datetime64tz (GH 23917)
- Creating a TimedeltaIndex, DatetimeIndex, or PeriodIndex by passing range arguments start, end, and periods is deprecated in favor of timedelta_range(), date_range(), or period_range() (GH 23919)
- Passing a string alias like 'datetime64[ns, UTC]' as the unit parameter to DatetimeTZDtype is deprecated. Use DatetimeTZDtype.construct_from_string instead (GH 23990).
- The skipna parameter of infer_dtype() will switch to True by default in a future version of pandas (GH 17066, GH 24050)
- In Series.where() with Categorical data, providing an other that is not present in the categories is deprecated. Convert the categorical to a different dtype or add the other to the categories first (GH 24077).
- Series.clip_lower(), Series.clip_upper(), DataFrame.clip_lower() and DataFrame.clip_upper() are deprecated and will be removed in a future version. Use Series.clip(lower=threshold), Series.clip(upper=threshold) and the equivalent DataFrame methods (GH 24203)
- Series.nonzero() is deprecated and will be removed in a future version (GH 18262)
- Passing an integer to Series.fillna() and DataFrame.fillna() with timedelta64[ns] dtypes is deprecated, and will raise TypeError in a future version. Use obj.fillna(pd.Timedelta(...)) instead (GH 24694)
- Series.cat.categorical, Series.cat.name and Series.cat.index have been deprecated. Use the attributes on Series.cat or Series directly. (GH 24751).
- Passing a dtype without a precision like np.dtype('datetime64') or timedelta64 to Index, DatetimeIndex and TimedeltaIndex is now deprecated. Use the nanosecond-precision dtype instead (GH 24753).
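For the clip deprecation above, the replacement is a direct one-liner:

```python
import pandas as pd

s = pd.Series([-2, 0, 5])

# Series.clip with a single bound replaces clip_lower/clip_upper.
print(s.clip(lower=0).tolist())  # [0, 0, 5]
print(s.clip(upper=1).tolist())  # [-2, 0, 1]
```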
Integer addition/subtraction with datetimes and timedeltas is deprecated#
In the past, users could—in some cases—add or subtract integers or integer-dtype
arrays from Timestamp, DatetimeIndex and TimedeltaIndex.
This usage is now deprecated.  Instead add or subtract integer multiples of
the object’s freq attribute (GH 21939, GH 23878).
Previous behavior:
In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
In [6]: ts + 2
Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')
In [7]: tdi = pd.timedelta_range('1D', periods=2)
In [8]: tdi - np.array([2, 1])
Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)
In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')
In [10]: dti + pd.Index([1, 2])
Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)
New behavior:
In [98]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
In [99]: ts + 2 * ts.freq
Out[99]: Timestamp('1994-05-06 14:15:16', freq='H')
In [100]: tdi = pd.timedelta_range('1D', periods=2)
In [101]: tdi - np.array([2 * tdi.freq, 1 * tdi.freq])
Out[101]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)
In [102]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')
In [103]: dti + pd.Index([1 * dti.freq, 2 * dti.freq])
Out[103]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)
Passing integer data and a timezone to DatetimeIndex#
The behavior of DatetimeIndex when passed integer data and
a timezone is changing in a future version of pandas. Previously, these
were interpreted as wall times in the desired timezone. In the future,
these will be interpreted as wall times in UTC, which are then converted
to the desired timezone (GH 24559).
The default behavior remains the same, but issues a warning:
In [3]: pd.DatetimeIndex([946684800000000000], tz="US/Central")
/bin/ipython:1: FutureWarning:
    Passing integer-dtype data and a timezone to DatetimeIndex. Integer values
    will be interpreted differently in a future version of pandas. Previously,
    these were viewed as datetime64[ns] values representing the wall time
    *in the specified timezone*. In the future, these will be viewed as
    datetime64[ns] values representing the wall time *in UTC*. This is similar
    to a nanosecond-precision UNIX epoch. To accept the future behavior, use
        pd.to_datetime(integer_data, utc=True).tz_convert(tz)
    To keep the previous behavior, use
        pd.to_datetime(integer_data).tz_localize(tz)
 Out[3]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
As the warning message explains, opt in to the future behavior by specifying that the integer values are UTC, and then converting to the final timezone:
In [104]: pd.to_datetime([946684800000000000], utc=True).tz_convert('US/Central')
Out[104]: DatetimeIndex(['1999-12-31 18:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
The old behavior can be retained by localizing directly to the final timezone:
In [105]: pd.to_datetime([946684800000000000]).tz_localize('US/Central')
Out[105]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
Converting timezone-aware Series and Index to NumPy arrays#
The conversion from a Series or Index with timezone-aware
datetime data will change to preserve timezones by default (GH 23569).
NumPy doesn’t have a dedicated dtype for timezone-aware datetimes.
In the past, converting a Series or DatetimeIndex with
timezone-aware datetimes would convert to a NumPy array by
- converting the tz-aware data to UTC 
- dropping the timezone-info 
- returning a - numpy.ndarraywith- datetime64[ns]dtype
Future versions of pandas will preserve the timezone information by returning an
object-dtype NumPy array where each value is a Timestamp with the correct
timezone attached.
In [106]: ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))
In [107]: ser
Out[107]: 
0   2000-01-01 00:00:00+01:00
1   2000-01-02 00:00:00+01:00
Length: 2, dtype: datetime64[ns, CET]
The default behavior remains the same, but issues a warning
In [8]: np.asarray(ser)
/bin/ipython:1: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive
      ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray
      with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'.
        To accept the future behavior, pass 'dtype=object'.
        To keep the old behavior, pass 'dtype="datetime64[ns]"'.
Out[8]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')
The previous or future behavior can be obtained, without any warnings, by specifying
the dtype
Previous behavior
In [108]: np.asarray(ser, dtype='datetime64[ns]')
Out[108]: 
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')
Future behavior
# New behavior
In [109]: np.asarray(ser, dtype=object)
Out[109]: 
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object)
Or by using Series.to_numpy()
In [110]: ser.to_numpy()
Out[110]: 
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object)
In [111]: ser.to_numpy(dtype="datetime64[ns]")
Out[111]: 
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')
All the above applies to a DatetimeIndex with tz-aware values as well.
Removal of prior version deprecations/changes#
- The LongPanel and WidePanel classes have been removed (GH 10892)
- Series.repeat() has renamed the reps argument to repeats (GH 14645)
- Several private functions were removed from the (non-public) module pandas.core.common (GH 22001)
- Removal of the previously deprecated module pandas.core.datetools (GH 14105, GH 14094)
- Strings passed into DataFrame.groupby() that refer to both column and index levels will raise a ValueError (GH 14432)
- Index.repeat() and MultiIndex.repeat() have renamed the n argument to repeats (GH 14645)
- The Series constructor and .astype method will now raise a ValueError if timestamp dtypes are passed in without a unit (e.g. np.datetime64) for the dtype parameter (GH 15987)
- Removal of the previously deprecated as_indexer keyword completely from str.match() (GH 22356, GH 6581)
- The modules pandas.types, pandas.computation, and pandas.util.decorators have been removed (GH 16157, GH 16250)
- Removed the pandas.formats.style shim for pandas.io.formats.style.Styler (GH 16059)
- pandas.pnow, pandas.match, pandas.groupby, pd.get_store, pd.Expr, and pd.Term have been removed (GH 15538, GH 15940)
- Categorical.searchsorted() and Series.searchsorted() have renamed the v argument to value (GH 14645)
- pandas.parser, pandas.lib, and pandas.tslib have been removed (GH 15537)
- Index.searchsorted() has renamed the key argument to value (GH 14645)
- DataFrame.consolidate and Series.consolidate have been removed (GH 15501)
- Removal of the previously deprecated module pandas.json (GH 19944)
- The module pandas.tools has been removed (GH 15358, GH 16005)
- SparseArray.get_values() and SparseArray.to_dense() have dropped the fill parameter (GH 14686)
- DataFrame.sortlevel and Series.sortlevel have been removed (GH 15099)
- SparseSeries.to_dense() has dropped the sparse_only parameter (GH 14686)
- DataFrame.astype() and Series.astype() have renamed the raise_on_error argument to errors (GH 14967)
- is_sequence, is_any_int_dtype, and is_floating_dtype have been removed from pandas.api.types (GH 16163, GH 16189)
Performance improvements#
- Slicing Series and DataFrames with a monotonically increasing CategoricalIndex is now very fast and has speed comparable to slicing with an Int64Index. The speed increase applies both when indexing by label (using .loc) and by position (.iloc) (GH 20395). Slicing a monotonically increasing CategoricalIndex itself (i.e. ci[1000:2000]) shows similar speed improvements (GH 21659)
- Improved performance of CategoricalIndex.equals() when comparing to another CategoricalIndex (GH 24023)
- Improved performance of Series.describe() in case of numeric dtypes (GH 21274)
- Improved performance of pandas.core.groupby.GroupBy.rank() when dealing with tied rankings (GH 21237)
- Improved performance of DataFrame.set_index() with columns consisting of Period objects (GH 21582, GH 21606)
- Improved performance of Series.at() and Index.get_value() for ExtensionArray values (e.g. Categorical) (GH 24204)
- Improved performance of membership checks in Categorical and CategoricalIndex (i.e. x in cat-style checks are much faster). CategoricalIndex.contains() is likewise much faster (GH 21369, GH 21508)
- Improved performance of HDFStore.groups() and dependent functions like HDFStore.keys() (i.e. x in store checks are much faster) (GH 21372)
- Improved the performance of pandas.get_dummies() with sparse=True (GH 21997)
- Improved performance of IndexEngine.get_indexer_non_unique() for sorted, non-unique indexes (GH 9466)
- Improved performance of PeriodIndex.unique() (GH 23083)
- Improved performance of concat() for Series objects (GH 23404)
- Improved performance of DatetimeIndex.normalize() and Timestamp.normalize() for timezone-naive or UTC datetimes (GH 23634)
- Improved performance of DatetimeIndex.tz_localize() and various DatetimeIndex attributes with dateutil UTC timezone (GH 23772)
- Fixed a performance regression on Windows with Python 3.7 of read_csv() (GH 23516)
- Improved performance of the Categorical constructor for Series objects (GH 23814)
- Improved performance of where() for Categorical data (GH 24077)
- Improved performance of iterating over a Series. Using DataFrame.itertuples() now creates iterators without internally allocating lists of all elements (GH 20783)
- Improved performance of the Period constructor, additionally benefitting PeriodArray and PeriodIndex creation (GH 24084, GH 24118)
- Improved performance of tz-aware DatetimeArray binary operations (GH 24491)
Bug fixes#
Categorical#
- Bug in Categorical.from_codes() where NaN values in codes were silently converted to 0 (GH 21767). In the future this will raise a ValueError. Also changes the behavior of .from_codes([1.1, 2.0]).
- Bug in Categorical.sort_values() where NaN values were always positioned in front regardless of the na_position value (GH 22556).
- Bug when indexing with a boolean-valued Categorical. Now a boolean-valued Categorical is treated as a boolean mask (GH 22665)
- Constructing a CategoricalIndex with empty values and boolean categories was raising a ValueError after a change to dtype coercion (GH 22702).
- Bug in Categorical.take() with a user-provided fill_value not encoding the fill_value, which could result in a ValueError, incorrect results, or a segmentation fault (GH 23296).
- In Series.unstack(), specifying a fill_value not present in the categories now raises a TypeError rather than ignoring the fill_value (GH 23284)
- Bug when resampling with DataFrame.resample() and aggregating on categorical data, where the categorical dtype was getting lost (GH 23227)
- Bug in many methods of the .str-accessor, which always failed on calling the CategoricalIndex.str constructor (GH 23555, GH 23556)
- Bug in Series.where() losing the categorical dtype for categorical data (GH 24077)
- Bug in Categorical.apply() where NaN values could be handled unpredictably. They now remain unchanged (GH 24241)
- Bug in Categorical comparison methods incorrectly raising ValueError when operating against a DataFrame (GH 24630)
- Bug in Categorical.set_categories() where setting fewer new categories with rename=True caused a segmentation fault (GH 24675)
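A minimal sketch of the from_codes contract touched on above, assuming a current pandas install: integer codes must be valid positions into the categories list, and missing values can no longer slip in silently as code 0.

```python
import pandas as pd

# Integer codes index into the categories list; NaN codes are invalid input.
cat = pd.Categorical.from_codes([0, 1, 0], categories=["a", "b"])
print(list(cat))  # ['a', 'b', 'a']
```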
Datetimelike#
- Fixed bug where two DateOffset objects with different normalize attributes could evaluate as equal (GH 21404)
- Fixed bug where Timestamp.resolution() incorrectly returned a 1-microsecond timedelta instead of a 1-nanosecond Timedelta (GH 21336, GH 21365)
- Bug in to_datetime() that did not consistently return an Index when box=True was specified (GH 21864)
- Bug in DatetimeIndex comparisons where string comparisons incorrectly raised TypeError (GH 22074)
- Bug in DatetimeIndex comparisons when comparing against timedelta64[ns]-dtyped arrays; in some cases TypeError was incorrectly raised, in others it incorrectly failed to raise (GH 22074)
- Bug in DatetimeIndex comparisons when comparing against object-dtyped arrays (GH 22074)
- Bug in DataFrame with datetime64[ns] dtype addition and subtraction with Timedelta-like objects (GH 22005, GH 22163)
- Bug in DataFrame with datetime64[ns] dtype addition and subtraction with DateOffset objects returning an object dtype instead of datetime64[ns] dtype (GH 21610, GH 22163)
- Bug in DataFrame with datetime64[ns] dtype comparing against NaT incorrectly (GH 22242, GH 22163)
- Bug in DataFrame with datetime64[ns] dtype subtracting Timestamp-like objects incorrectly returning datetime64[ns] dtype instead of timedelta64[ns] dtype (GH 8554, GH 22163)
- Bug in DataFrame with datetime64[ns] dtype subtracting np.datetime64 objects with non-nanosecond unit failing to convert to nanoseconds (GH 18874, GH 22163)
- Bug in DataFrame comparisons against Timestamp-like objects failing to raise TypeError for inequality checks with mismatched types (GH 8932, GH 22163)
- Bug in DataFrame with mixed dtypes including datetime64[ns] incorrectly raising TypeError on equality comparisons (GH 13128, GH 22163)
- Bug in DataFrame.values returning a DatetimeIndex for a single-column DataFrame with tz-aware datetime values. Now a 2-D numpy.ndarray of Timestamp objects is returned (GH 24024)
- Bug in DataFrame.eq() comparison against NaT incorrectly returning True or NaN (GH 15697, GH 22163)
- Bug in DatetimeIndex subtraction that incorrectly failed to raise OverflowError (GH 22492, GH 22508)
- Bug in DatetimeIndex incorrectly allowing indexing with Timedelta objects (GH 20464)
- Bug in DatetimeIndex where the frequency was being set if the original frequency was None (GH 22150)
- Bug in the rounding methods of DatetimeIndex (round(), ceil(), floor()) and Timestamp (round(), ceil(), floor()) that could give rise to loss of precision (GH 22591)
- Bug in to_datetime() with an Index argument that would drop the name from the result (GH 21697)
- Bug in PeriodIndex where adding or subtracting a timedelta or Tick object produced incorrect results (GH 22988)
- Bug in the Series repr with period-dtype data missing a space before the data (GH 23601)
- Bug in date_range() when decrementing a start date to a past end date by a negative frequency (GH 23270)
- Bug in Series.min() which would return NaN instead of NaT when called on a series of NaT (GH 23282)
- Bug in Series.combine_first() not properly aligning categoricals, so that missing values in self were not filled by valid values from other (GH 24147)
- Bug in DataFrame.combine() with datetimelike values raising a TypeError (GH 23079)
- Bug in date_range() with frequency of Day or higher where dates sufficiently far in the future could wrap around to the past instead of raising OutOfBoundsDatetime (GH 14187)
- Bug in period_range() ignoring the frequency of start and end when those are provided as Period objects (GH 20535).
- Bug in PeriodIndex with attribute freq.n greater than 1 where adding a DateOffset object would return incorrect results (GH 23215)
- Bug in Series that interpreted string indices as lists of characters when setting datetimelike values (GH 23451)
- Bug in DataFrame when creating a new column from an ndarray of Timestamp objects with timezones creating an object-dtype column, rather than datetime with timezone (GH 23932)
- Bug in the Timestamp constructor which would drop the frequency of an input Timestamp (GH 22311)
- Bug in DatetimeIndex where calling np.array(dtindex, dtype=object) would incorrectly return an array of long objects (GH 23524)
- Bug in Index where passing a timezone-aware DatetimeIndex and dtype=object would incorrectly raise a ValueError (GH 23524)
- Bug in Index where calling np.array(dtindex, dtype=object) on a timezone-naive DatetimeIndex would return an array of datetime objects instead of Timestamp objects, potentially losing nanosecond portions of the timestamps (GH 23524)
- Bug in Categorical.__setitem__ not allowing setting with another Categorical when both are unordered and have the same categories, but in a different order (GH 24142)
- Bug in date_range() where using dates with millisecond resolution or higher could return incorrect values or the wrong number of values in the index (GH 24110)
- Bug in DatetimeIndex where constructing a DatetimeIndex from a Categorical or CategoricalIndex would incorrectly drop timezone information (GH 18664)
- Bug in DatetimeIndex and TimedeltaIndex where indexing with Ellipsis would incorrectly lose the index's freq attribute (GH 21282)
- Clarified the error message produced when passing an incorrect freq argument to DatetimeIndex with NaT as the first entry in the passed data (GH 11587)
- Bug in to_datetime() where the box and utc arguments were ignored when passing a DataFrame or dict of unit mappings (GH 23760)
- Bug in Series.dt where the cache would not update properly after an in-place operation (GH 24408)
- Bug in PeriodIndex where comparisons against an array-like object with length 1 failed to raise ValueError (GH 23078)
- Bug in DatetimeIndex.astype(), PeriodIndex.astype() and TimedeltaIndex.astype() ignoring the sign of the dtype for unsigned integer dtypes (GH 24405).
- Fixed bug in Series.max() with datetime64[ns] dtype failing to return NaT when nulls are present and skipna=False is passed (GH 24265)
- Bug in to_datetime() where arrays of datetime objects containing both timezone-aware and timezone-naive datetimes would fail to raise ValueError (GH 24569)
- Bug in to_datetime() where an invalid datetime format did not coerce the input to NaT even when errors='coerce' was specified (GH 24763)
Timedelta#
- Bug in DataFrame with timedelta64[ns] dtype division by a Timedelta-like scalar incorrectly returning timedelta64[ns] dtype instead of float64 dtype (GH 20088, GH 22163)
- Bug in adding an Index with object dtype to a Series with timedelta64[ns] dtype incorrectly raising (GH 22390)
- Bug in multiplying a Series with numeric dtype against a timedelta object (GH 22390)
- Bug in Series with numeric dtype when adding or subtracting an array or Series with timedelta64 dtype (GH 22390)
- Bug in Index with numeric dtype when multiplying or dividing an array with dtype timedelta64 (GH 22390)
- Bug in TimedeltaIndex incorrectly allowing indexing with Timestamp objects (GH 20464)
- Fixed bug where subtracting a Timedelta from an object-dtyped array would raise TypeError (GH 21980)
- Fixed bug in adding a DataFrame with all-timedelta64[ns] dtypes to a DataFrame with all-integer dtypes returning incorrect results instead of raising TypeError (GH 22696)
- Bug in TimedeltaIndex where adding a timezone-aware datetime scalar incorrectly returned a timezone-naive DatetimeIndex (GH 23215)
- Bug in TimedeltaIndex where adding np.timedelta64('NaT') incorrectly returned an all-NaT DatetimeIndex instead of an all-NaT TimedeltaIndex (GH 23215)
- Bug where Timedelta and to_timedelta() had inconsistencies in their supported unit strings (GH 21762)
- Bug in TimedeltaIndex division where dividing by another TimedeltaIndex raised TypeError instead of returning a Float64Index (GH 23829, GH 22631)
- Bug in TimedeltaIndex comparison operations where comparing against non-Timedelta-like objects would raise TypeError instead of returning all-False for __eq__ and all-True for __ne__ (GH 24056)
- Bug in Timedelta comparisons when comparing with a Tick object incorrectly raising TypeError (GH 24710)
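The TimedeltaIndex division fix noted above (GH 23829) can be sketched as follows, assuming a current pandas install: dividing one TimedeltaIndex by another yields a float index rather than raising.

```python
import pandas as pd

num = pd.TimedeltaIndex(["2 hours", "30 minutes"])
den = pd.TimedeltaIndex(["1 hour", "1 hour"])
ratio = num / den  # elementwise division yields floats
print(list(ratio))  # [2.0, 0.5]
```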
Timezones#
- Bug in Index.shift() where an AssertionError would be raised when shifting across DST (GH 8616)
- Bug in the Timestamp constructor where passing an invalid timezone offset designator (Z) would not raise a ValueError (GH 8910)
- Bug in Timestamp.replace() where replacing at a DST boundary would retain an incorrect offset (GH 7825)
- Bug in Series.replace() with datetime64[ns, tz] data when replacing NaT (GH 11792)
- Bug in Timestamp where passing different string date formats with a timezone offset would produce different timezone offsets (GH 12064)
- Bug when comparing a tz-naive Timestamp to a tz-aware DatetimeIndex which would coerce the DatetimeIndex to tz-naive (GH 12601)
- Bug in Series.truncate() with a tz-aware DatetimeIndex which would cause a core dump (GH 9243)
- Bug in the Series constructor which would coerce tz-aware and tz-naive Timestamps to tz-aware (GH 13051)
- Bug in Index with datetime64[ns, tz] dtype that did not localize integer data correctly (GH 20964)
- Bug in DatetimeIndex where constructing with an integer and tz would not localize correctly (GH 12619)
- Fixed bug where DataFrame.describe() and Series.describe() on tz-aware datetimes did not show the first and last result (GH 21328)
- Bug in DatetimeIndex comparisons failing to raise TypeError when comparing a timezone-aware DatetimeIndex against np.datetime64 (GH 22074)
- Bug in DataFrame assignment with a timezone-aware scalar (GH 19843)
- Bug in DataFrame.asof() that raised a TypeError when attempting to compare tz-naive and tz-aware timestamps (GH 21194)
- Bug when constructing a DatetimeIndex with a Timestamp constructed with the replace method across DST (GH 18785)
- Bug when setting a new value with DataFrame.loc() with a DatetimeIndex with a DST transition (GH 18308, GH 20724)
- Bug in Index.unique() that did not re-localize tz-aware dates correctly (GH 21737)
- Bug in DataFrame.resample() and Series.resample() where an AmbiguousTimeError or NonExistentTimeError would be raised if a timezone-aware timeseries ended on a DST transition (GH 19375, GH 10117)
- Bug in DataFrame.drop() and Series.drop() when specifying a tz-aware Timestamp key to drop from a DatetimeIndex with a DST transition (GH 21761)
- Bug in the DatetimeIndex constructor where NaT and dateutil.tz.tzlocal would raise an OutOfBoundsDatetime error (GH 23807)
- Bug in DatetimeIndex.tz_localize() and Timestamp.tz_localize() with dateutil.tz.tzlocal near a DST transition that would return an incorrectly localized datetime (GH 23807)
- Bug in the Timestamp constructor where a dateutil.tz.tzutc timezone passed with a datetime.datetime argument would be converted to a pytz.UTC timezone (GH 23807)
- Bug in to_datetime() where utc=True was not respected when specifying a unit and errors='ignore' (GH 23758)
- Bug in to_datetime() where utc=True was not respected when passing a Timestamp (GH 24415)
- Bug in DataFrame.any() returning the wrong value when axis=1 and the data is of datetimelike type (GH 23070)
- Bug in DatetimeIndex.to_period() where a timezone-aware index was converted to UTC first before creating the PeriodIndex (GH 22905)
- Bug in DataFrame.tz_localize(), DataFrame.tz_convert(), Series.tz_localize(), and Series.tz_convert() where copy=False would mutate the original argument inplace (GH 6326)
- Bug in DataFrame.max() and DataFrame.min() with axis=1 where a Series with NaN would be returned when all columns contained the same timezone (GH 10390)
Offsets#
- Bug in FY5253 where date offsets could incorrectly raise an AssertionError in arithmetic operations (GH 14774)
- Bug in DateOffset where the keyword arguments week and milliseconds were accepted and ignored. Passing these will now raise ValueError (GH 19398)
- Bug in adding a DateOffset with a DataFrame or PeriodIndex incorrectly raising TypeError (GH 23215)
- Bug in comparing DateOffset objects with non-DateOffset objects, particularly strings, raising ValueError instead of returning False for equality checks and True for not-equal checks (GH 23524)
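The last item above (GH 23524) amounts to the following, shown here as a minimal sketch assuming a current pandas install:

```python
import pandas as pd

off = pd.DateOffset(days=1)
# Comparing against a non-offset object returns False/True instead of raising
print(off == "not-an-offset")  # False
print(off != "not-an-offset")  # True
```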
Numeric#
- Bug in Series.__rmatmul__ not supporting matrix-vector multiplication (GH 21530)
- Bug in factorize() failing with a read-only array (GH 12813)
- Fixed bug where unique() handled signed zeros inconsistently: for some inputs 0.0 and -0.0 were treated as equal and for some inputs as different. Now they are treated as equal for all inputs (GH 21866)
- Bug in DataFrame.agg(), DataFrame.transform() and DataFrame.apply() where, when supplied with a list of functions and axis=1 (e.g. df.apply(['sum', 'mean'], axis=1)), a TypeError was wrongly raised. For all three methods such calculations are now done correctly (GH 16679).
- Bug in Series comparison against datetime-like scalars and arrays (GH 22074)
- Bug in DataFrame multiplication between boolean dtype and integer returning object dtype instead of integer dtype (GH 22047, GH 22163)
- Bug in DataFrame.apply() where, when supplied with a string argument and additional positional or keyword arguments (e.g. df.apply('sum', min_count=1)), a TypeError was wrongly raised (GH 22376)
- Bug in DataFrame.astype() to an extension dtype that could raise AttributeError (GH 22578)
- Bug in DataFrame with timedelta64[ns] dtype arithmetic operations with an ndarray with integer dtype incorrectly treating the ndarray as timedelta64[ns] dtype (GH 23114)
- Bug in Series.rpow() with object dtype returning NaN for 1 ** NA instead of 1 (GH 22922).
- Series.agg() can now handle numpy NaN-aware methods like numpy.nansum() (GH 19629)
- Bug in Series.rank() and DataFrame.rank() when pct=True and more than 2**24 rows are present resulting in percentages greater than 1.0 (GH 18271)
- Calls such as DataFrame.round() with a non-unique CategoricalIndex now return expected data. Previously, data would be improperly duplicated (GH 21809).
- Added log10, floor and ceil to the list of supported functions in DataFrame.eval() (GH 24139, GH 24353)
- Logical operations &, |, ^ between Series and Index will no longer raise ValueError (GH 22092)
- is_scalar() now returns True for PEP 3141 numbers (GH 22903)
- Reduction methods like Series.sum() now accept the default value of keepdims=False when called from a NumPy ufunc, rather than raising a TypeError. Full support for keepdims has not been implemented (GH 24356).
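Several of the numeric fixes above are easy to see interactively. A minimal sketch, assuming a current pandas install (the rpow note concerns object dtype; float dtype is used here for a simpler illustration of the same 1 ** NA rule):

```python
import numpy as np
import pandas as pd

# unique() treats signed zeros as equal for all inputs
print(len(pd.unique(np.array([0.0, -0.0]))))  # 1

# 1 ** NaN evaluates to 1 elementwise
print((1 ** pd.Series([np.nan, 2.0])).tolist())  # [1.0, 1.0]

# NumPy ufunc reductions no longer raise over the keepdims default
print(np.sum(pd.Series([1, 2, 3])))  # 6
```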
Conversion#
- Bug in DataFrame.combine_first() in which column types were unexpectedly converted to float (GH 20699)
- Bug in DataFrame.clip() in which column types were not preserved and were cast to float (GH 24162)
- Bug in DataFrame.clip() where, when the order of the columns of the dataframes did not match, the numeric values in the result were wrong (GH 20911)
- Bug in DataFrame.astype() where converting to an extension dtype when duplicate column names are present caused a RecursionError (GH 24704)
Strings#
- Bug in Index.str.partition() not being nan-safe (GH 23558).
- Bug in Index.str.split() not being nan-safe (GH 23677).
- Bug in Series.str.contains() not respecting the na argument for a Categorical-dtype Series (GH 22158)
- Bug in Index.str.cat() when the result contained only NaN (GH 24044)
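The nan-safety fixes above mean missing values simply propagate through the string methods. A minimal sketch with Series.str.partition, assuming a current pandas install:

```python
import numpy as np
import pandas as pd

s = pd.Series(["a_b", np.nan])
parts = s.str.partition("_")  # missing values propagate instead of breaking
print(parts.shape)  # (2, 3)
print(parts.iloc[0].tolist())  # ['a', '_', 'b']
```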
Interval#
- Bug in the IntervalIndex constructor where the closed parameter did not always override the inferred closed (GH 19370)
- Bug in the IntervalIndex repr where a trailing comma was missing after the list of intervals (GH 20611)
- Bug in Interval where scalar arithmetic operations did not retain the closed value (GH 22313)
- Bug in IntervalIndex where indexing with datetime-like values raised a KeyError (GH 20636)
- Bug in IntervalTree where data containing NaN triggered a warning and resulted in incorrect indexing queries with IntervalIndex (GH 23352)
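The Interval arithmetic fix (GH 22313) can be sketched as follows, assuming a current pandas install:

```python
import pandas as pd

iv = pd.Interval(0, 1, closed="left")
shifted = iv + 1  # scalar arithmetic keeps the closed side
print(shifted.left, shifted.right, shifted.closed)  # 1 2 left
```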
Indexing#
- Bug in DataFrame.ne() failing if the columns contained a column named "dtype" (GH 22383)
- The traceback from a KeyError when asking .loc for a single missing label is now shorter and clearer (GH 21557)
- PeriodIndex now emits a KeyError when a malformed string is looked up, which is consistent with the behavior of DatetimeIndex (GH 22803)
- When .ix is asked for a missing integer label in a MultiIndex with a first level of integer type, it now raises a KeyError, consistently with the case of a flat Int64Index, rather than falling back to positional indexing (GH 21593)
- Bug in Index.reindex() when reindexing a tz-naive and tz-aware DatetimeIndex (GH 8306)
- Bug in Series.reindex() when reindexing an empty series with a datetime64[ns, tz] dtype (GH 20869)
- Bug in DataFrame when setting values with .loc and a timezone-aware DatetimeIndex (GH 11365)
- DataFrame.__getitem__ now accepts dictionaries and dictionary keys as list-likes of labels, consistently with Series.__getitem__ (GH 21294)
- Fixed DataFrame[np.nan] when columns are non-unique (GH 21428)
- Bug when indexing DatetimeIndex with nanosecond resolution dates and timezones (GH 11679)
- Bug where indexing with a NumPy array containing negative values would mutate the indexer (GH 21867)
- Bug where mixed indexes wouldn't allow integers for .at (GH 19860)
- Float64Index.get_loc now raises KeyError when a boolean key is passed (GH 19087)
- Bug in DataFrame.loc() when indexing with an IntervalIndex (GH 19977)
- Index no longer mangles None, NaN and NaT, i.e. they are treated as three different keys. However, for numeric Index all three are still coerced to a NaN (GH 22332)
- Bug in scalar in Index if the scalar is a float while the Index is of integer dtype (GH 22085)
- Bug in MultiIndex.set_levels() when the levels value is not subscriptable (GH 23273)
- Bug where setting a timedelta column by Index caused it to be cast to double, and therefore lose precision (GH 23511)
- Bug in Index.union() and Index.intersection() where the name of the Index of the result was not computed correctly for certain cases (GH 9943, GH 9862)
- Bug in Index slicing with a boolean Index that could raise TypeError (GH 22533)
- Bug in PeriodArray.__setitem__ when accepting a slice and list-like value (GH 23978)
- Bug in DatetimeIndex and TimedeltaIndex where indexing with Ellipsis would lose their freq attribute (GH 21282)
- Bug in iat where using it to assign an incompatible value would create a new column (GH 23236)
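The DataFrame.__getitem__ change above (GH 21294) means a dict's keys view works directly as a column selector. A minimal sketch, assuming a current pandas install:

```python
import pandas as pd

df = pd.DataFrame({"A": [1], "B": [2], "C": [3]})
wanted = {"A": None, "C": None}
# A dict's keys view is accepted as a list-like of column labels
print(list(df[wanted.keys()].columns))  # ['A', 'C']
```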
Missing#
- Bug in DataFrame.fillna() where a ValueError would be raised when one column contained a datetime64[ns, tz] dtype (GH 15522)
- Bug in Series.hasnans() that could be incorrectly cached and return incorrect answers if null elements are introduced after an initial call (GH 19700)
- Series.isin() now treats all NaN-floats as equal also for np.object_ dtype. This behavior is consistent with the behavior for float64 (GH 22119)
- unique() no longer mangles NaN-floats and the NaT object for np.object_ dtype, i.e. NaT is no longer coerced to a NaN value and is treated as a different entity (GH 22295)
- DataFrame and Series now properly handle numpy masked arrays with hardened masks. Previously, constructing a DataFrame or Series from a masked array with a hard mask would create a pandas object containing the underlying value, rather than the expected NaN (GH 24574)
- Bug in the DataFrame constructor where the dtype argument was not honored when handling numpy masked record arrays (GH 24874)
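The Series.isin() note above (GH 22119) can be sketched as follows, assuming a current pandas install: NaN matches NaN even for object dtype.

```python
import numpy as np
import pandas as pd

s = pd.Series(["x", np.nan], dtype=object)
print(s.isin([np.nan]).tolist())  # [False, True]
```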
MultiIndex#
- Bug in io.formats.style.Styler.applymap() where subset= with a MultiIndex slice would reduce to a Series (GH 19861)
- Removed compatibility for MultiIndex pickles prior to version 0.8.0; compatibility with MultiIndex pickles from version 0.13 forward is maintained (GH 21654)
- MultiIndex.get_loc_level() (and as a consequence, .loc on a Series or DataFrame with a MultiIndex index) will now raise a KeyError, rather than returning an empty slice, if asked for a label which is present in the levels but is unused (GH 22221)
- MultiIndex has gained the MultiIndex.from_frame() method, which allows constructing a MultiIndex object from a DataFrame (GH 22420)
- Fixed TypeError in Python 3 when creating a MultiIndex in which some levels have mixed types, e.g. when some labels are tuples (GH 15457)
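The new MultiIndex.from_frame() constructor mentioned above can be sketched as follows, assuming a current pandas install:

```python
import pandas as pd

frame = pd.DataFrame({"letter": ["a", "a"], "number": [1, 2]})
mi = pd.MultiIndex.from_frame(frame)  # columns become levels, names come along
print(list(mi.names))  # ['letter', 'number']
```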
IO#
- Bug in read_csv() in which a column specified with a CategoricalDtype of boolean categories was not being correctly coerced from string values to booleans (GH 20498)
- Bug in read_csv() in which unicode column names were not being properly recognized with Python 2.x (GH 13253)
- Bug in DataFrame.to_sql() where writing timezone-aware data (datetime64[ns, tz] dtype) would raise a TypeError (GH 9086)
- Bug in DataFrame.to_sql() where a naive DatetimeIndex would be written as TIMESTAMP WITH TIMEZONE type in supported databases, e.g. PostgreSQL (GH 23510)
- Bug in read_excel() when parse_cols is specified with an empty dataset (GH 9208)
- read_html() no longer ignores all-whitespace <tr> within <thead> when considering the skiprows and header arguments. Previously, users had to decrease their header and skiprows values on such tables to work around the issue (GH 21641)
- read_excel() will correctly show the deprecation warning for the previously deprecated sheetname (GH 17994)
- read_csv() and read_table() will throw UnicodeError and not coredump on badly encoded strings (GH 22748)
- read_csv() will correctly parse timezone-aware datetimes (GH 22256)
- Bug in read_csv() in which memory management was prematurely optimized for the C engine when the data was being read in chunks (GH 23509)
- Bug in read_csv() in which unnamed columns were being improperly identified when extracting a multi-index (GH 23687)
- read_sas() will correctly parse numbers in sas7bdat files that have width less than 8 bytes (GH 21616)
- read_sas() will correctly parse sas7bdat files with many columns (GH 22628)
- read_sas() will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (GH 16615)
- Bug in read_sas() in which an incorrect error was raised on an invalid file format (GH 24548)
- Bug in detect_client_encoding() where a potential IOError went unhandled when importing in a mod_wsgi process due to restricted access to stdout (GH 21552)
- Bug in DataFrame.to_html() with index=False missing truncation indicators (…) on a truncated DataFrame (GH 15019, GH 22783)
- Bug in DataFrame.to_html() with index=False when both columns and row index are MultiIndex (GH 22579)
- Bug in DataFrame.to_html() with index_names=False displaying the index name (GH 22747)
- Bug in DataFrame.to_html() with header=False not displaying row index names (GH 23788)
- Bug in DataFrame.to_html() with sparsify=False that caused it to raise TypeError (GH 22887)
- Bug in DataFrame.to_string() that broke column alignment when index=False and the width of the first column's values is greater than the width of the first column's header (GH 16839, GH 13032)
- Bug in DataFrame.to_string() that caused representations of a DataFrame to not take up the whole window (GH 22984)
- Bug in DataFrame.to_csv() where a single-level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (GH 19589).
- HDFStore will raise ValueError when the format kwarg is passed to the constructor (GH 13291)
- Bug in HDFStore.append() when appending a DataFrame with an empty string column and min_itemsize < 8 (GH 12242)
- Bug in read_csv() in which memory leaks occurred in the C engine when parsing NaN values due to insufficient cleanup on completion or error (GH 21353)
- Bug in read_csv() in which incorrect error messages were being raised when skipfooter was passed in along with nrows, iterator, or chunksize (GH 23711)
- Bug in read_csv() in which MultiIndex index names were being improperly handled in the cases when they were not provided (GH 23484)
- Bug in read_csv() in which unnecessary warnings were being raised when the dialect's values conflicted with the default arguments (GH 23761)
- Bug in read_html() in which the error message was not displaying the valid flavors when an invalid one was provided (GH 23549)
- Bug in read_excel() in which extraneous header names were extracted, even though none were specified (GH 11733)
- Bug in read_excel() in which column names were not being properly converted to string sometimes in Python 2.x (GH 23874)
- Bug in read_excel() in which index_col=None was not being respected and index columns were being parsed anyway (GH 18792, GH 20480)
- Bug in read_excel() in which usecols was not being validated for proper column names when passed in as a string (GH 20480)
- Bug in DataFrame.to_dict() when the resulting dict contains non-Python scalars in the case of numeric data (GH 23753)
- DataFrame.to_string(), DataFrame.to_html(), DataFrame.to_latex() will correctly format output when a string is passed as the float_format argument (GH 21625, GH 22270)
- Bug in read_csv() that caused it to raise OverflowError when trying to use 'inf' as na_value with an integer index column (GH 17128)
- Bug in read_csv() that caused the C engine on Python 3.6+ on Windows to improperly read CSV filenames with accented or special characters (GH 15086)
- Bug in read_fwf() in which the compression type of a file was not being properly inferred (GH 22199)
- Bug in pandas.io.json.json_normalize() that caused it to raise TypeError when two consecutive elements of record_path are dicts (GH 22706)
- Bug in DataFrame.to_stata(), pandas.io.stata.StataWriter and pandas.io.stata.StataWriter117 where an exception would leave a partially written and invalid dta file (GH 23573)
- Bug in DataFrame.to_stata() and pandas.io.stata.StataWriter117 that produced invalid files when using strLs with non-ASCII characters (GH 23573)
- Bug in HDFStore that caused it to raise ValueError when reading a DataFrame in Python 3 from fixed format written in Python 2 (GH 24510)
- Bug in DataFrame.to_string() and more generally in the floating repr formatter. Zeros were not trimmed if inf was present in a column while they were with NA values. Zeros are now trimmed as in the presence of NA (GH 24861).
- Bug in the repr when truncating the number of columns and having a wide last column (GH 24849).
Plotting#
- Bug in DataFrame.plot.scatter() and DataFrame.plot.hexbin() that caused the x-axis label and ticklabels to disappear when the colorbar was on in the IPython inline backend (GH 10611, GH 10678, and GH 20455)
- Bug in plotting a Series with datetimes using matplotlib.axes.Axes.scatter() (GH 22039)
- Bug in DataFrame.plot.bar() that caused bars to use multiple colors instead of a single one (GH 20585)
- Bug in validating the color parameter that caused an extra color to be appended to the given color array. This happened to multiple plotting functions using matplotlib (GH 20726)
GroupBy/resample/rolling#
- Bug in pandas.core.window.Rolling.min() and pandas.core.window.Rolling.max() with closed='left', a datetime-like index and only one entry in the series leading to a segfault (GH 24718)
- Bug in pandas.core.groupby.GroupBy.first() and pandas.core.groupby.GroupBy.last() with as_index=False leading to the loss of timezone information (GH 15884)
- Bug in DataFrame.resample() when downsampling across a DST boundary (GH 8531)
- Bug in date anchoring for DataFrame.resample() with offset Day when n > 1 (GH 24127)
- Bug where ValueError was wrongly raised when calling the count() method of a SeriesGroupBy when the grouping variable only contains NaNs and numpy version < 1.13 (GH 21956).
- Multiple bugs in pandas.core.window.Rolling.min() with closed='left' and a datetime-like index leading to incorrect results and also a segfault (GH 21704)
- Bug in pandas.core.resample.Resampler.apply() when passing positional arguments to the applied func (GH 14615).
- Bug in Series.resample() when passing numpy.timedelta64 to the loffset kwarg (GH 7687).
- Bug in pandas.core.resample.Resampler.asfreq() when the frequency of a TimedeltaIndex is a subperiod of a new frequency (GH 13022).
- Bug in pandas.core.groupby.SeriesGroupBy.mean() when values were integral but could not fit inside of int64, overflowing instead (GH 22487)
- pandas.core.groupby.RollingGroupby.agg() and pandas.core.groupby.ExpandingGroupby.agg() now support multiple aggregation functions as parameters (GH 15072)
- Bug in DataFrame.resample() and Series.resample() when resampling by a weekly offset ('W') across a DST transition (GH 9119, GH 21459)
- Bug in DataFrame.expanding() in which the axis argument was not being respected during aggregations (GH 23372)
- Bug in pandas.core.groupby.GroupBy.transform() which caused missing values when the input function can accept a DataFrame but renames it (GH 23455).
- Bug in pandas.core.groupby.GroupBy.nth() where the column order was not always preserved (GH 20760)
- Bug in pandas.core.groupby.GroupBy.rank() with method='dense' and pct=True where a group with only one member would raise a ZeroDivisionError (GH 23666).
- Calling pandas.core.groupby.GroupBy.rank() with empty groups and pct=True was raising a ZeroDivisionError (GH 22519)
- Bug in DataFrame.resample() when resampling NaT in a TimedeltaIndex (GH 13223).
- Bug in DataFrame.groupby() not respecting the observed argument when selecting a column and instead always using observed=False (GH 23970)
- Bug where pandas.core.groupby.SeriesGroupBy.pct_change() and pandas.core.groupby.DataFrameGroupBy.pct_change() would work across groups when calculating the percent change; they now correctly work per group (GH 21200, GH 21235).
- Bug preventing hash table creation with a very large number (2^32) of rows (GH 22805)
- Bug in groupby when grouping on a categorical causing a ValueError and incorrect grouping if observed=True and nan is present in the categorical column (GH 24740, GH 21151).
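The ZeroDivisionError fixes for grouped rank (GH 23666, GH 22519) can be sketched as follows, assuming a current pandas install: single-member groups now get a percentile rank of 1.0.

```python
import pandas as pd

s = pd.Series([10.0, 20.0, 30.0])
# Group 0 has two members, group 1 has a single member
ranked = s.groupby([0, 0, 1]).rank(method="dense", pct=True)
print(ranked.tolist())  # [0.5, 1.0, 1.0]
```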
Reshaping#
- Bug in pandas.concat() when joining resampled DataFrames with a timezone-aware index (GH 13783)
- Bug in pandas.concat() when joining only Series; the names argument of concat is no longer ignored (GH 23490)
- Bug in Series.combine_first() with datetime64[ns, tz] dtype which would return a tz-naive result (GH 21469)
- Bug in Series.where() and DataFrame.where() with datetime64[ns, tz] dtype (GH 21546)
- Bug in DataFrame.where() with an empty DataFrame and empty cond having non-bool dtype (GH 21947)
- Bug in Series.mask() and DataFrame.mask() with list conditionals (GH 21891)
- Bug in DataFrame.replace() raising a RecursionError when converting an out-of-bounds datetime64[ns, tz] (GH 20380)
- pandas.core.groupby.GroupBy.rank() now raises a ValueError when an invalid value is passed for the argument na_option (GH 22124)
- Bug in get_dummies() with Unicode attributes in Python 2 (GH 22084)
- Bug in DataFrame.replace() raising a RecursionError when replacing empty lists (GH 22083)
- Bug in Series.replace() and DataFrame.replace() when a dict is used as the to_replace value and one key in the dict is another key's value; the results were inconsistent between using an integer key and using a string key (GH 20656)
- Bug in DataFrame.drop_duplicates() for an empty DataFrame which incorrectly raised an error (GH 20516)
- Bug in pandas.wide_to_long() when a string is passed to the stubnames argument and a column name is a substring of that stubname (GH 22468)
- Bug in merge() when merging datetime64[ns, tz] data that contained a DST transition (GH 18885)
- Bug in merge_asof() when merging on float values within a defined tolerance (GH 22981)
- Bug in pandas.concat() when concatenating a multicolumn DataFrame with tz-aware data against a DataFrame with a different number of columns (GH 22796)
- Bug in merge_asof() where a confusing error message was raised when attempting to merge with missing values (GH 23189)
- Bug in DataFrame.nsmallest() and DataFrame.nlargest() for dataframes that have a MultiIndex for columns (GH 23033).
- Bug in pandas.melt() when passing column names that are not present in the DataFrame (GH 23575)
- Bug in DataFrame.append() with a Series with a dateutil timezone would raise a TypeError (GH 23682)
- Bug in Series construction when passing no data and dtype=str (GH 22477)
- Bug in cut() with bins as an overlapping IntervalIndex where multiple bins were returned per item instead of raising a ValueError (GH 23980)
- Bug in pandas.concat() when joining a datetimetz Series with a category Series would lose the timezone (GH 23816)
- Bug in DataFrame.join() when joining on a partial MultiIndex would drop names (GH 20452).
- DataFrame.nlargest() and DataFrame.nsmallest() now return the correct n values when keep != 'all', also when tied on the first columns (GH 22752)
- Constructing a DataFrame with an index argument that wasn't already an instance of Index was broken (GH 22227).
- Bug in DataFrame that prevented list subclasses from being used in construction (GH 21226)
- Bug in DataFrame.unstack() and DataFrame.pivot_table() returning a misleading error message when the resulting DataFrame has more elements than int32 can handle. Now, the error message is improved, pointing towards the actual problem (GH 20601)
- Bug in DataFrame.unstack() where a ValueError was raised when unstacking timezone-aware values (GH 18338)
- Bug in DataFrame.stack() where timezone-aware values were converted to timezone-naive values (GH 19420)
- Bug in merge_asof() where a TypeError was raised when the by column contained timezone-aware values (GH 21184)
- Bug showing an incorrect shape when throwing an error during DataFrame construction (GH 20742).
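As a quick sketch of the drop_duplicates() fix listed above (the column names are illustrative):

```python
import pandas as pd

# An empty DataFrame with columns but no rows; this previously raised an error.
empty = pd.DataFrame(columns=["a", "b"])
deduped = empty.drop_duplicates()
# The result is another empty DataFrame with the same columns.
```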
Sparse#
- Updating a boolean, datetime, or timedelta column to be Sparse now works (GH 22367)
- Bug in Series.to_sparse() with a Series already holding sparse data not constructing properly (GH 22389)
- Providing a sparse_index to the SparseArray constructor no longer defaults the na-value to np.nan for all dtypes. The correct na_value for data.dtype is now used.
- Bug in SparseArray.nbytes under-reporting its memory usage by not including the size of its sparse index.
- Improved performance of Series.shift() for non-NA fill_value, as values are no longer converted to a dense array.
- Bug in DataFrame.groupby not including fill_value in the groups for non-NA fill_value when grouping by a sparse column (GH 5078)
- Bug in the unary inversion operator (~) on a SparseSeries with boolean values. The performance of this has also been improved (GH 22835)
- Bug in SparseArray.unique() not returning the unique values (GH 19595)
- Bug in SparseArray.nonzero() and SparseDataFrame.dropna() returning shifted/incorrect results (GH 21172)
- Bug in DataFrame.apply() where dtypes would lose sparseness (GH 23744)
- Bug in concat() when concatenating a list of Series with all-sparse values changing the fill_value and converting to a dense Series (GH 24371)
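A minimal sketch of the SparseArray.unique() behavior noted above (the data is illustrative):

```python
import pandas as pd

# A sparse array whose fill value (0) also appears in the data; unique()
# should return all distinct values, including the fill value.
arr = pd.arrays.SparseArray([0, 0, 1, 2, 1])
uniques = arr.unique()
```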
Style#
- background_gradient() now takes a text_color_threshold parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (GH 21258)
- background_gradient() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None (GH 15204)
- bar() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None and setting the clipping range with vmin and vmax (GH 21548 and GH 21526). NaN values are also handled properly.
Build changes#
- Building pandas for development now requires cython >= 0.28.2 (GH 21688)
- Testing pandas now requires hypothesis>=3.58. You can find the Hypothesis docs here, and a pandas-specific introduction in the contributing guide. (GH 22280)
- Building pandas on macOS now targets minimum macOS 10.9 if run on macOS 10.9 or above (GH 23424)
Other#
- Bug where C variables were declared with external linkage, causing import errors if certain other C libraries were imported before pandas (GH 24113).
Contributors#
A total of 337 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.
- AJ Dyka + 
- AJ Pryor, Ph.D + 
- Aaron Critchley 
- Adam Hooper 
- Adam J. Stewart 
- Adam Kim 
- Adam Klimont + 
- Addison Lynch + 
- Alan Hogue + 
- Alex Radu + 
- Alex Rychyk 
- Alex Strick van Linschoten + 
- Alex Volkov + 
- Alexander Buchkovsky 
- Alexander Hess + 
- Alexander Ponomaroff + 
- Allison Browne + 
- Aly Sivji 
- Andrew 
- Andrew Gross + 
- Andrew Spott + 
- Andy + 
- Aniket uttam + 
- Anjali2019 + 
- Anjana S + 
- Antti Kaihola + 
- Anudeep Tubati + 
- Arjun Sharma + 
- Armin Varshokar 
- Artem Bogachev 
- ArtinSarraf + 
- Barry Fitzgerald + 
- Bart Aelterman + 
- Ben James + 
- Ben Nelson + 
- Benjamin Grove + 
- Benjamin Rowell + 
- Benoit Paquet + 
- Boris Lau + 
- Brett Naul 
- Brian Choi + 
- C.A.M. Gerlach + 
- Carl Johan + 
- Chalmer Lowe 
- Chang She 
- Charles David + 
- Cheuk Ting Ho 
- Chris 
- Chris Roberts + 
- Christopher Whelan 
- Chu Qing Hao + 
- Da Cheezy Mobsta + 
- Damini Satya 
- Daniel Himmelstein 
- Daniel Saxton + 
- Darcy Meyer + 
- DataOmbudsman 
- David Arcos 
- David Krych 
- Dean Langsam + 
- Diego Argueta + 
- Diego Torres + 
- Dobatymo + 
- Doug Latornell + 
- Dr. Irv 
- Dylan Dmitri Gray + 
- Eric Boxer + 
- Eric Chea 
- Erik + 
- Erik Nilsson + 
- Fabian Haase + 
- Fabian Retkowski 
- Fabien Aulaire + 
- Fakabbir Amin + 
- Fei Phoon + 
- Fernando Margueirat + 
- Florian Müller + 
- Fábio Rosado + 
- Gabe Fernando 
- Gabriel Reid + 
- Giftlin Rajaiah 
- Gioia Ballin + 
- Gjelt 
- Gosuke Shibahara + 
- Graham Inggs 
- Guillaume Gay 
- Guillaume Lemaitre + 
- Hannah Ferchland 
- Haochen Wu 
- Hubert + 
- HubertKl + 
- HyunTruth + 
- Iain Barr 
- Ignacio Vergara Kausel + 
- Irv Lustig + 
- IsvenC + 
- Jacopo Rota 
- Jakob Jarmar + 
- James Bourbeau + 
- James Myatt + 
- James Winegar + 
- Jan Rudolph 
- Jared Groves + 
- Jason Kiley + 
- Javad Noorbakhsh + 
- Jay Offerdahl + 
- Jeff Reback 
- Jeongmin Yu + 
- Jeremy Schendel 
- Jerod Estapa + 
- Jesper Dramsch + 
- Jim Jeon + 
- Joe Jevnik 
- Joel Nothman 
- Joel Ostblom + 
- Jordi Contestí 
- Jorge López Fueyo + 
- Joris Van den Bossche 
- Jose Quinones + 
- Jose Rivera-Rubio + 
- Josh 
- Jun + 
- Justin Zheng + 
- Kaiqi Dong + 
- Kalyan Gokhale 
- Kang Yoosam + 
- Karl Dunkle Werner + 
- Karmanya Aggarwal + 
- Kevin Markham + 
- Kevin Sheppard 
- Kimi Li + 
- Koustav Samaddar + 
- Krishna + 
- Kristian Holsheimer + 
- Ksenia Gueletina + 
- Kyle Prestel + 
- LJ + 
- LeakedMemory + 
- Li Jin + 
- Licht Takeuchi 
- Luca Donini + 
- Luciano Viola + 
- Mak Sze Chun + 
- Marc Garcia 
- Marius Potgieter + 
- Mark Sikora + 
- Markus Meier + 
- Marlene Silva Marchena + 
- Martin Babka + 
- MatanCohe + 
- Mateusz Woś + 
- Mathew Topper + 
- Matt Boggess + 
- Matt Cooper + 
- Matt Williams + 
- Matthew Gilbert 
- Matthew Roeschke 
- Max Kanter 
- Michael Odintsov 
- Michael Silverstein + 
- Michael-J-Ward + 
- Mickaël Schoentgen + 
- Miguel Sánchez de León Peque + 
- Ming Li 
- Mitar 
- Mitch Negus 
- Monson Shao + 
- Moonsoo Kim + 
- Mortada Mehyar 
- Myles Braithwaite 
- Nehil Jain + 
- Nicholas Musolino + 
- Nicolas Dickreuter + 
- Nikhil Kumar Mengani + 
- Nikoleta Glynatsi + 
- Ondrej Kokes 
- Pablo Ambrosio + 
- Pamela Wu + 
- Parfait G + 
- Patrick Park + 
- Paul 
- Paul Ganssle 
- Paul Reidy 
- Paul van Mulbregt + 
- Phillip Cloud 
- Pietro Battiston 
- Piyush Aggarwal + 
- Prabakaran Kumaresshan + 
- Pulkit Maloo 
- Pyry Kovanen 
- Rajib Mitra + 
- Redonnet Louis + 
- Rhys Parry + 
- Rick + 
- Robin 
- Roei.r + 
- RomainSa + 
- Roman Imankulov + 
- Roman Yurchak + 
- Ruijing Li + 
- Ryan + 
- Ryan Nazareth + 
- Rüdiger Busche + 
- SEUNG HOON, SHIN + 
- Sandrine Pataut + 
- Sangwoong Yoon 
- Santosh Kumar + 
- Saurav Chakravorty + 
- Scott McAllister + 
- Sean Chan + 
- Shadi Akiki + 
- Shengpu Tang + 
- Shirish Kadam + 
- Simon Hawkins + 
- Simon Riddell + 
- Simone Basso 
- Sinhrks 
- Soyoun(Rose) Kim + 
- Srinivas Reddy Thatiparthy (శ్రీనివాస్ రెడ్డి తాటిపర్తి) + 
- Stefaan Lippens + 
- Stefano Cianciulli 
- Stefano Miccoli + 
- Stephen Childs 
- Stephen Pascoe 
- Steve Baker + 
- Steve Cook + 
- Steve Dower + 
- Stéphan Taljaard + 
- Sumin Byeon + 
- Sören + 
- Tamas Nagy + 
- Tanya Jain + 
- Tarbo Fukazawa 
- Thein Oo + 
- Thiago Cordeiro da Fonseca + 
- Thierry Moisan 
- Thiviyan Thanapalasingam + 
- Thomas Lentali + 
- Tim D. Smith + 
- Tim Swast 
- Tom Augspurger 
- Tomasz Kluczkowski + 
- Tony Tao + 
- Triple0 + 
- Troels Nielsen + 
- Tuhin Mahmud + 
- Tyler Reddy + 
- Uddeshya Singh 
- Uwe L. Korn + 
- Vadym Barda + 
- Varad Gunjal + 
- Victor Maryama + 
- Victor Villas 
- Vincent La 
- Vitória Helena + 
- Vu Le 
- Vyom Jain + 
- Weiwen Gu + 
- Wenhuan 
- Wes Turner 
- Wil Tan + 
- William Ayd 
- Yeojin Kim + 
- Yitzhak Andrade + 
- Yuecheng Wu + 
- Yuliya Dovzhenko + 
- Yury Bayda + 
- Zac Hatfield-Dodds + 
- aberres + 
- aeltanawy + 
- ailchau + 
- alimcmaster1 
- alphaCTzo7G + 
- amphy + 
- araraonline + 
- azure-pipelines[bot] + 
- benarthur91 + 
- bk521234 + 
- cgangwar11 + 
- chris-b1 
- cxl923cc + 
- dahlbaek + 
- dannyhyunkim + 
- darke-spirits + 
- david-liu-brattle-1 
- davidmvalente + 
- deflatSOCO 
- doosik_bae + 
- dylanchase + 
- eduardo naufel schettino + 
- euri10 + 
- evangelineliu + 
- fengyqf + 
- fjdiod 
- fl4p + 
- fleimgruber + 
- gfyoung 
- h-vetinari 
- harisbal + 
- henriqueribeiro + 
- himanshu awasthi 
- hongshaoyang + 
- igorfassen + 
- jalazbe + 
- jbrockmendel 
- jh-wu + 
- justinchan23 + 
- louispotok 
- marcosrullan + 
- miker985 
- nicolab100 + 
- nprad 
- nsuresh + 
- ottiP 
- pajachiet + 
- raguiar2 + 
- ratijas + 
- realead + 
- robbuckley + 
- saurav2608 + 
- sideeye + 
- ssikdar1 
- svenharris + 
- syutbai + 
- testvinder + 
- thatneat 
- tmnhat2001 
- tomascassidy + 
- tomneep 
- topper-123 
- vkk800 + 
- winlu + 
- ym-pett + 
- yrhooke + 
- ywpark1 + 
- zertrin 
- zhezherun +