Warning
The 0.24.x series of releases will be the last to support Python 2. Future feature releases will support Python 3 only. See Dropping Python 2.7 for more details.
This is a major release from 0.23.4 and includes a number of API changes, new features, enhancements, and performance improvements along with a large number of bug fixes.
Highlights include:
Optional Integer NA Support
New APIs for accessing the array backing a Series or Index
A new top-level method for creating arrays
Store Interval and Period data in a Series or DataFrame
Support for joining on two MultiIndexes
Check the API Changes and deprecations before updating.
These are the changes in pandas 0.24.0. See Release notes for a full changelog including other versions of pandas.
pandas has gained the ability to hold integer dtypes with missing values. This long-requested feature is enabled through the use of extension types.
Note
IntegerArray is currently experimental. Its API or implementation may change without warning.
We can construct a Series with the specified dtype. The dtype string Int64 is a pandas ExtensionDtype. Specifying a list or array using the traditional missing value marker of np.nan will infer to integer dtype. The display of the Series will also use NaN to indicate missing values in string outputs. (GH20700, GH20747, GH22441, GH21789, GH22346)
In [1]: s = pd.Series([1, 2, np.nan], dtype='Int64')

In [2]: s
Out[2]:
0       1
1       2
2    <NA>
Length: 3, dtype: Int64
Operations on these dtypes will propagate NaN, as in other pandas operations.
# arithmetic
In [3]: s + 1
Out[3]:
0       2
1       3
2    <NA>
Length: 3, dtype: Int64

# comparison
In [4]: s == 1
Out[4]:
0     True
1    False
2     <NA>
Length: 3, dtype: boolean

# indexing
In [5]: s.iloc[1:3]
Out[5]:
1       2
2    <NA>
Length: 2, dtype: Int64

# operate with other dtypes
In [6]: s + s.iloc[1:3].astype('Int8')
Out[6]:
0    <NA>
1       4
2    <NA>
Length: 3, dtype: Int64

# coerce when needed
In [7]: s + 0.01
Out[7]:
0    1.01
1    2.01
2    <NA>
Length: 3, dtype: Float64
These dtypes can operate as part of a DataFrame.
In [8]: df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})

In [9]: df
Out[9]:
      A  B  C
0     1  1  a
1     2  1  a
2  <NA>  3  b

[3 rows x 3 columns]

In [10]: df.dtypes
Out[10]:
A     Int64
B     int64
C    object
Length: 3, dtype: object
These dtypes can be merged, reshaped, and cast.
In [11]: pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
Out[11]:
A     Int64
B     int64
C    object
Length: 3, dtype: object

In [12]: df['A'].astype(float)
Out[12]:
0    1.0
1    2.0
2    NaN
Name: A, Length: 3, dtype: float64
Reduction and groupby operations such as sum work.
In [13]: df.sum()
Out[13]:
A      3
B      5
C    aab
Length: 3, dtype: object

In [14]: df.groupby('B').A.sum()
Out[14]:
B
1    3
3    0
Name: A, Length: 2, dtype: Int64
The Integer NA support currently uses the capitalized dtype version, e.g. Int8 as compared to the traditional int8. This may be changed at a future date.
See Nullable integer data type for more.
Series.array and Index.array have been added for extracting the array backing a Series or Index. (GH19954, GH23623)
In [15]: idx = pd.period_range('2000', periods=4)

In [16]: idx.array
Out[16]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]

In [17]: pd.Series(idx).array
Out[17]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]
Historically, this would have been done with series.values, but with .values it was unclear whether the returned value would be the actual array, some transformation of it, or one of pandas' custom arrays (like Categorical). For example, with PeriodIndex, .values generates a new ndarray of period objects each time.
In [18]: idx.values
Out[18]:
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

In [19]: id(idx.values)
Out[19]: 140364264707216

In [20]: id(idx.values)
Out[20]: 140364930649744
If you need an actual NumPy array, use Series.to_numpy() or Index.to_numpy().
In [21]: idx.to_numpy()
Out[21]:
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

In [22]: pd.Series(idx).to_numpy()
Out[22]:
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)
For Series and Indexes backed by normal NumPy arrays, Series.array will return a new arrays.PandasArray, which is a thin (no-copy) wrapper around a numpy.ndarray. PandasArray isn’t especially useful on its own, but it does provide the same interface as any extension array defined in pandas or by a third-party library.
In [23]: ser = pd.Series([1, 2, 3])

In [24]: ser.array
Out[24]:
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64

In [25]: ser.to_numpy()
Out[25]: array([1, 2, 3])
We haven’t removed or deprecated Series.values or DataFrame.values, but we highly recommend using .array or .to_numpy() instead.
See Dtypes and Attributes and Underlying Data for more.
pandas.array: a new top-level method for creating arrays
A new top-level method array() has been added for creating 1-dimensional arrays (GH22860). This can be used to create any extension array, including extension arrays registered by 3rd party libraries. See the dtypes docs for more on extension arrays.
In [26]: pd.array([1, 2, np.nan], dtype='Int64')
Out[26]:
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64

In [27]: pd.array(['a', 'b', 'c'], dtype='category')
Out[27]:
['a', 'b', 'c']
Categories (3, object): ['a', 'b', 'c']
Passing data for which there isn’t a dedicated extension type (e.g. float, integer, etc.) will return a new arrays.PandasArray, which is just a thin (no-copy) wrapper around a numpy.ndarray that satisfies the pandas extension array interface.
In [28]: pd.array([1, 2, 3])
Out[28]:
<IntegerArray>
[1, 2, 3]
Length: 3, dtype: Int64
On its own, a PandasArray isn’t a very useful object. But if you need to write low-level code that works generically for any ExtensionArray, PandasArray satisfies that need.
Notice that by default, if no dtype is specified, the dtype of the returned array is inferred from the data. In particular, the earlier example of [1, 2, np.nan] is inferred to the nullable Int64 dtype, with np.nan converted to the missing value <NA>.
In [29]: pd.array([1, 2, np.nan])
Out[29]:
<IntegerArray>
[1, 2, <NA>]
Length: 3, dtype: Int64
Interval and Period data may now be stored in a Series or DataFrame, in addition to an IntervalIndex and PeriodIndex as before (GH19453, GH22862).
In [30]: ser = pd.Series(pd.interval_range(0, 5))

In [31]: ser
Out[31]:
0    (0, 1]
1    (1, 2]
2    (2, 3]
3    (3, 4]
4    (4, 5]
Length: 5, dtype: interval

In [32]: ser.dtype
Out[32]: interval[int64]
For periods:
In [33]: pser = pd.Series(pd.period_range("2000", freq="D", periods=5))

In [34]: pser
Out[34]:
0    2000-01-01
1    2000-01-02
2    2000-01-03
3    2000-01-04
4    2000-01-05
Length: 5, dtype: period[D]

In [35]: pser.dtype
Out[35]: period[D]
Previously, these would be cast to a NumPy array with object dtype. In general, this should result in better performance when storing an array of intervals or periods in a Series or column of a DataFrame.
Use Series.array to extract the underlying array of intervals or periods from the Series:
In [36]: ser.array
Out[36]:
<IntervalArray>
[(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]]
Length: 5, closed: right, dtype: interval[int64]

In [37]: pser.array
Out[37]:
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04', '2000-01-05']
Length: 5, dtype: period[D]
These return an instance of arrays.IntervalArray or arrays.PeriodArray, the new extension arrays that back interval and period data.
For backwards compatibility, Series.values continues to return a NumPy array of objects for Interval and Period data. We recommend using Series.array when you need the array of data stored in the Series, and Series.to_numpy() when you know you need a NumPy array.
DataFrame.merge() and DataFrame.join() can now be used to join multi-indexed DataFrame instances on the overlapping index levels (GH6360).
See the Merge, join, and concatenate documentation section.
In [38]: index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
   ....:                                         ('K1', 'X2')],
   ....:                                        names=['key', 'X'])

In [39]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
   ....:                      'B': ['B0', 'B1', 'B2']}, index=index_left)

In [40]: index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
   ....:                                          ('K2', 'Y2'), ('K2', 'Y3')],
   ....:                                         names=['key', 'Y'])

In [41]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)

In [42]: left.join(right)
Out[42]:
            A   B   C   D
key X  Y
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1

[3 rows x 4 columns]
For earlier versions this can be done using the following.
In [43]: pd.merge(left.reset_index(), right.reset_index(),
   ....:          on=['key'], how='inner').set_index(['key', 'X', 'Y'])
Out[43]:
            A   B   C   D
key X  Y
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1

[3 rows x 4 columns]
read_html Enhancements
read_html() previously ignored colspan and rowspan attributes. Now it understands them, treating them as sequences of cells with the same value. (GH17054)
In [44]: result = pd.read_html("""
   ....: <table>
   ....:   <thead>
   ....:     <tr>
   ....:       <th>A</th><th>B</th><th>C</th>
   ....:     </tr>
   ....:   </thead>
   ....:   <tbody>
   ....:     <tr>
   ....:       <td colspan="2">1</td><td>2</td>
   ....:     </tr>
   ....:   </tbody>
   ....: </table>""")
Previous behavior:
In [13]: result
Out[13]:
[   A  B   C
 0  1  2 NaN]
New behavior:
In [45]: result
Out[45]:
[   A  B  C
 0  1  1  2

 [1 rows x 3 columns]]
Styler.pipe()
The Styler class has gained a pipe() method. This provides a convenient way to apply users’ predefined styling functions, and can help reduce “boilerplate” when using DataFrame styling functionality repeatedly within a notebook. (GH23229)
In [46]: df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})

In [47]: def format_and_align(styler):
   ....:     return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
   ....:             .set_properties(**{'text-align': 'right'}))
   ....:

In [48]: df.style.pipe(format_and_align).set_caption('Summary of results.')
Out[48]: <pandas.io.formats.style.Styler at 0x7fa8953118b0>
Similar methods already exist for other classes in pandas, including DataFrame.pipe(), GroupBy.pipe(), and Resampler.pipe().
DataFrame.rename_axis() now supports index and columns arguments and Series.rename_axis() supports index argument (GH19978).
This change allows a dictionary to be passed so that some of the names of a MultiIndex can be changed.
Example:
In [49]: mi = pd.MultiIndex.from_product([list('AB'), list('CD'), list('EF')],
   ....:                                 names=['AB', 'CD', 'EF'])

In [50]: df = pd.DataFrame(list(range(len(mi))), index=mi, columns=['N'])

In [51]: df
Out[51]:
          N
AB CD EF
A  C  E   0
      F   1
   D  E   2
      F   3
B  C  E   4
      F   5
   D  E   6
      F   7

[8 rows x 1 columns]

In [52]: df.rename_axis(index={'CD': 'New'})
Out[52]:
           N
AB New EF
A  C   E   0
       F   1
   D   E   2
       F   3
B  C   E   4
       F   5
   D   E   6
       F   7

[8 rows x 1 columns]
See the Advanced documentation on renaming for more details.
merge() now directly allows merging a DataFrame with a named Series, without the need to convert the Series object into a DataFrame beforehand (GH21220)
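A minimal sketch of merging a named Series directly; the frame, key values, and the Series name used here are invented for illustration:

```python
import pandas as pd

# A DataFrame and a *named* Series sharing key values
df = pd.DataFrame({"key": ["a", "b", "c"], "v": [1, 2, 3]})
ser = pd.Series([10, 20], index=["a", "b"], name="w")

# The named Series can be passed to merge() directly;
# its name becomes the resulting column name.
out = pd.merge(df, ser, left_on="key", right_index=True)
```

Without a name, merge() has no column label to assign, which is why only named Series are accepted.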
ExcelWriter now accepts mode as a keyword argument, enabling append to existing workbooks when using the openpyxl engine (GH3441)
FrozenList has gained the .union() and .difference() methods. This functionality greatly simplifies groupby operations that rely on explicitly excluding certain columns. See Splitting an object into groups for more information (GH15475, GH15506).
DataFrame.to_parquet() now accepts index as an argument, allowing the user to override the engine’s default behavior to include or omit the DataFrame’s indexes from the resulting Parquet file. (GH20768)
read_feather() now accepts columns as an argument, allowing the user to specify which columns should be read. (GH24025)
DataFrame.corr() and Series.corr() now accept a callable for generic calculation methods of correlation, e.g. histogram intersection (GH22684)
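A minimal sketch of the callable form, assuming invented data; for simplicity the callable here just recomputes Pearson correlation via NumPy rather than a custom measure like histogram intersection:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [2.0, 4.0, 6.1]})

# Any callable taking two 1-D arrays and returning a scalar works;
# the diagonal is always set to 1 by pandas.
corr = df.corr(method=lambda x, y: np.corrcoef(x, y)[0, 1])
```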
DataFrame.to_string() now accepts decimal as an argument, allowing the user to specify which decimal separator should be used in the output. (GH23614)
DataFrame.to_html() now accepts render_links as an argument, allowing the user to generate HTML with links to any URLs that appear in the DataFrame. See the section on writing HTML in the IO docs for example usage. (GH2679)
pandas.read_csv() now supports pandas extension types as an argument to dtype, allowing the user to use pandas extension types when reading CSVs. (GH23228)
The shift() method now accepts fill_value as an argument, allowing the user to specify a value which will be used instead of NA/NaT in the empty periods. (GH15486)
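A short sketch of fill_value with invented data; note that supplying an integer fill keeps the integer dtype, whereas the default NaN fill would upcast to float:

```python
import pandas as pd

s = pd.Series([1, 2, 3])

# The vacated first slot is filled with 0 instead of NaN,
# so the int64 dtype is preserved.
shifted = s.shift(1, fill_value=0)
```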
to_datetime() now supports the %Z and %z directives when passed into format (GH13486)
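A small sketch of the %z directive with an invented timestamp string:

```python
import datetime
import pandas as pd

# Parse a string carrying a numeric UTC offset via %z;
# the offset is preserved on the resulting Timestamp.
ts = pd.to_datetime("2018-01-01 09:00:00+0530",
                    format="%Y-%m-%d %H:%M:%S%z")
```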
Series.mode() and DataFrame.mode() now support the dropna parameter which can be used to specify whether NaN/NaT values should be considered (GH17534)
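A quick sketch of the dropna parameter with invented data, where NaN occurs more often than any real value:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 1.0, np.nan, np.nan, np.nan])

without_na = s.mode()             # default: NaN is ignored
with_na = s.mode(dropna=False)    # NaN occurs most often, so it is the mode
```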
DataFrame.to_csv() and Series.to_csv() now support the compression keyword when a file handle is passed. (GH21227)
Index.droplevel() is now implemented also for flat indexes, for compatibility with MultiIndex (GH21115)
Series.droplevel() and DataFrame.droplevel() are now implemented (GH20342)
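A minimal sketch of Series.droplevel() using an invented two-level index:

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([("a", 1), ("b", 2)], names=["x", "y"])
s = pd.Series([10, 20], index=mi)

# Drop the outer level, leaving a flat index named 'y'
flat = s.droplevel("x")
```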
Added support for reading from/writing to Google Cloud Storage via the gcsfs library (GH19454, GH23094)
DataFrame.to_gbq() and read_gbq() signature and documentation updated to reflect changes from the pandas-gbq library version 0.8.0. Adds a credentials argument, which enables the use of any kind of google-auth credentials. (GH21627, GH22557, GH23662)
New method HDFStore.walk() will recursively walk the group hierarchy of an HDF5 file (GH10932)
read_html() copies cell data across colspan and rowspan, and it treats rows consisting entirely of <th> elements as headers when the header keyword is not given and there is no <thead> (GH17054)
Series.nlargest(), Series.nsmallest(), DataFrame.nlargest(), and DataFrame.nsmallest() now accept the value "all" for the keep argument. This keeps all ties for the nth largest/smallest value (GH16818)
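A short sketch of keep="all" with invented data containing a tie for the largest value:

```python
import pandas as pd

s = pd.Series([3, 3, 2, 1])

# keep='all' retains every row tied for the last position,
# so asking for the single largest value returns both 3s.
top = s.nlargest(1, keep="all")
```

By contrast, the default keep="first" would return only the first of the tied rows.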
IntervalIndex has gained the set_closed() method to change the existing closed value (GH21670)
DataFrame.to_csv(), Series.to_csv(), DataFrame.to_json(), and Series.to_json() now support compression='infer' to infer compression based on filename extension (GH15008). The default compression for to_csv, to_json, and to_pickle methods has been updated to 'infer' (GH22004).
DataFrame.to_sql() now supports writing TIMESTAMP WITH TIME ZONE types for supported databases. For databases that don’t support timezones, datetime data will be stored as timezone unaware local timestamps. See the Datetime data types for implications (GH9086).
to_timedelta() now supports ISO 8601-formatted timedelta strings (GH21877)
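A one-line sketch with an invented ISO 8601 duration string:

```python
import pandas as pd

# "P1DT2H3M4S" reads as 1 day, 2 hours, 3 minutes, 4 seconds
td = pd.to_timedelta("P1DT2H3M4S")
```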
Series and DataFrame now support Iterable objects in the constructor (GH2193)
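A minimal sketch using a generator expression (one kind of non-list Iterable) as constructor input:

```python
import pandas as pd

# The generator is consumed eagerly by the Series constructor.
s = pd.Series(x * x for x in range(4))
```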
DatetimeIndex has gained the DatetimeIndex.timetz attribute. This returns the local time with timezone information. (GH21358)
round(), ceil(), and floor() for DatetimeIndex and Timestamp now support an ambiguous argument for handling datetimes that are rounded to ambiguous times (GH18946) and a nonexistent argument for handling datetimes that are rounded to nonexistent times. See Nonexistent times when localizing (GH22647)
The result of resample() is now iterable similar to groupby() (GH15314).
Series.resample() and DataFrame.resample() have gained the pandas.core.resample.Resampler.quantile() method (GH15023).
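A brief sketch of quantile() on a resampler, with an invented daily series binned into 2-day groups:

```python
import pandas as pd

s = pd.Series(range(4), index=pd.date_range("2000-01-01", periods=4, freq="D"))

# Median of each 2-day bin: (0, 1) -> 0.5 and (2, 3) -> 2.5
q = s.resample("2D").quantile(0.5)
```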
DataFrame.resample() and Series.resample() with a PeriodIndex will now respect the base argument in the same fashion as with a DatetimeIndex. (GH23882)
pandas.api.types.is_list_like() has gained a keyword allow_sets which is True by default; if False, all instances of set will not be considered “list-like” anymore (GH23061)
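A quick sketch of the allow_sets keyword:

```python
from pandas.api.types import is_list_like

assert is_list_like({1, 2})                     # default: sets are list-like
assert not is_list_like({1, 2}, allow_sets=False)
assert is_list_like([1, 2], allow_sets=False)   # lists are unaffected
```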
Index.to_frame() now supports overriding column name(s) (GH22580).
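A short sketch of overriding the column name, with invented index data and names:

```python
import pandas as pd

idx = pd.Index(["a", "b"], name="letters")

# Use 'chars' as the column name instead of the index name 'letters'.
df = idx.to_frame(index=False, name="chars")
```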
Categorical.from_codes() now can take a dtype parameter as an alternative to passing categories and ordered (GH24398).
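A minimal sketch of the dtype parameter, with invented categories:

```python
import pandas as pd

dtype = pd.CategoricalDtype(["low", "high"], ordered=True)

# Pass the dtype instead of separate categories/ordered arguments.
cat = pd.Categorical.from_codes([0, 1, 0], dtype=dtype)
```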
New attribute __git_version__ will return git commit sha of current build (GH21295).
Compatibility with Matplotlib 3.0 (GH22790).
Added Interval.overlaps(), arrays.IntervalArray.overlaps(), and IntervalIndex.overlaps() for determining overlaps between interval-like objects (GH21998)
read_fwf() now accepts keyword infer_nrows (GH15138).
to_parquet() now supports writing a DataFrame as a directory of parquet files partitioned by a subset of the columns when engine = 'pyarrow' (GH23283)
Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have gained the nonexistent argument for alternative handling of nonexistent times. See Nonexistent times when localizing (GH8917, GH24466)
Index.difference(), Index.intersection(), Index.union(), and Index.symmetric_difference() now have an optional sort parameter to control whether the results should be sorted if possible (GH17839, GH24471)
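A small sketch of the sort parameter on Index.union(), with invented values; with sort=False the left index's order is kept and new values are appended:

```python
import pandas as pd

a = pd.Index([3, 1])
b = pd.Index([2, 1])

ordered = a.union(b)               # sort=None (default): sorted when possible
unsorted = a.union(b, sort=False)  # keep left order, then new values
```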
read_excel() now accepts usecols as a list of column names or callable (GH18273)
MultiIndex.to_flat_index() has been added to flatten multiple levels into a single-level Index object.
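A minimal sketch of flattening, using an invented two-level product index:

```python
import pandas as pd

mi = pd.MultiIndex.from_product([["a", "b"], [1, 2]])

# Flatten the two levels into a single Index of tuples.
flat = mi.to_flat_index()
```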
DataFrame.to_stata() and pandas.io.stata.StataWriter117 can write mixed string columns to Stata strl format (GH23633)
DataFrame.between_time() and DataFrame.at_time() have gained the axis parameter (GH8839)
DataFrame.to_records() now accepts index_dtypes and column_dtypes parameters to allow different data types in stored column and index records (GH18146)
IntervalIndex has gained the is_overlapping attribute to indicate if the IntervalIndex contains any overlapping intervals (GH23309)
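A brief sketch of is_overlapping with invented intervals; note that intervals sharing only a closed/open endpoint pair, like (0, 1] and (1, 2], do not count as overlapping:

```python
import pandas as pd

no_overlap = pd.interval_range(0, 3)  # (0, 1], (1, 2], (2, 3]
overlap = pd.IntervalIndex.from_tuples([(0, 2), (1, 3)])
```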
pandas.DataFrame.to_sql() has gained the method argument to control SQL insertion clause. See the insertion method section in the documentation. (GH8953)
DataFrame.corrwith() now supports Spearman’s rank correlation and Kendall’s tau, as well as callable correlation methods. (GH21925)
DataFrame.to_json(), DataFrame.to_csv(), DataFrame.to_pickle(), and other export methods now support a tilde (~) in the path argument. (GH23473)
pandas 0.24.0 includes a number of API breaking changes.
We have updated our minimum supported versions of dependencies (GH21242, GH18742, GH23774, GH24767). If installed, we now require:
Package         Minimum Version   Required
numpy           1.12.0            X
bottleneck      1.2.0
fastparquet     0.2.1
matplotlib      2.0.0
numexpr         2.6.1
pandas-gbq      0.8.0
pyarrow         0.9.0
pytables        3.4.2
scipy           0.18.1
xlrd            1.0.0
pytest (dev)    3.6
Additionally we no longer depend on feather-format for feather based storage and replaced it with references to pyarrow (GH21639 and GH23053).
os.linesep is used for line_terminator of DataFrame.to_csv
DataFrame.to_csv() now uses os.linesep rather than '\n' for the default line terminator (GH20353). This change only affects behavior when running on Windows, where '\r\n' was used as the line terminator even when '\n' was passed in line_terminator.
Previous behavior on Windows:
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: # When passing file PATH to to_csv,
   ...: # line_terminator does not work, and csv is saved with '\r\n'.
   ...: # Also, this converts all '\n's in the data to '\r\n'.
   ...: data.to_csv("test.csv", index=False, line_terminator='\n')

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'

In [4]: # When passing file OBJECT with newline option to
   ...: # to_csv, line_terminator works.
   ...: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False, line_terminator='\n')

In [5]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
New behavior on Windows:
Passing line_terminator explicitly sets the line terminator to that character.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
On Windows, the value of os.linesep is '\r\n', so if line_terminator is not set, '\r\n' is used as the line terminator.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: data.to_csv("test.csv", index=False)

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
For file objects, specifying newline is not sufficient to set the line terminator. You must pass line_terminator explicitly, even in this case.
In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False)

In [3]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
Proper handling of np.NaN in a string data-typed column with the Python engine
There was a bug in read_excel() and read_csv() with the Python engine, where missing values turned into 'nan' with dtype=str and na_filter=True. Now, these missing values are converted to the missing value indicator, np.nan. (GH20377)
Previous behavior:

In [5]: data = 'a,b,c\n1,,3\n4,5,6'

In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)

In [7]: df.loc[0, 'b']
Out[7]: 'nan'
New behavior:

In [53]: data = 'a,b,c\n1,,3\n4,5,6'

In [54]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)

In [55]: df.loc[0, 'b']
Out[55]: nan
Notice that we now output np.nan itself rather than a stringified form of it.
Previously, parsing datetime strings with UTC offsets with to_datetime() or DatetimeIndex would automatically convert the datetime to UTC without timezone localization. This is inconsistent with parsing the same datetime string with Timestamp, which would preserve the UTC offset in the tz attribute. Now, to_datetime() preserves the UTC offset in the tz attribute when all the datetime strings have the same UTC offset (GH17697, GH11736, GH22457)
In [2]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[2]: Timestamp('2015-11-18 10:00:00')

In [3]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[3]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

# Different UTC offsets would automatically convert the datetimes to UTC (without a UTC timezone)
In [4]: pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
Out[4]: DatetimeIndex(['2015-11-18 10:00:00', '2015-11-18 10:00:00'], dtype='datetime64[ns]', freq=None)
In [56]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[56]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

In [57]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[57]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')
Parsing datetime strings with the same UTC offset will preserve the UTC offset in the tz attribute:
In [58]: pd.to_datetime(["2015-11-18 15:30:00+05:30"] * 2)
Out[58]: DatetimeIndex(['2015-11-18 15:30:00+05:30', '2015-11-18 15:30:00+05:30'], dtype='datetime64[ns, pytz.FixedOffset(330)]', freq=None)
Parsing datetime strings with different UTC offsets will now create an Index of datetime.datetime objects with different UTC offsets:
In [59]: idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                       "2015-11-18 16:30:00+06:30"])

In [60]: idx
Out[60]: Index([2015-11-18 15:30:00+05:30, 2015-11-18 16:30:00+06:30], dtype='object')

In [61]: idx[0]
Out[61]: datetime.datetime(2015, 11, 18, 15, 30, tzinfo=tzoffset(None, 19800))

In [62]: idx[1]
Out[62]: datetime.datetime(2015, 11, 18, 16, 30, tzinfo=tzoffset(None, 23400))
Passing utc=True will mimic the previous behavior but will correctly indicate that the dates have been converted to UTC:
In [63]: pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                 "2015-11-18 16:30:00+06:30"], utc=True)
Out[63]: DatetimeIndex(['2015-11-18 10:00:00+00:00', '2015-11-18 10:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None)
read_csv() no longer silently converts mixed-timezone columns to UTC (GH24987).
Previous behavior
>>> import io
>>> content = """\
... a
... 2000-01-01T00:00:00+05:00
... 2000-01-01T00:00:00+06:00"""
>>> df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
>>> df.a
0   1999-12-31 19:00:00
1   1999-12-31 18:00:00
Name: a, dtype: datetime64[ns]
New behavior
In [64]: import io

In [65]: content = """\
   ....: a
   ....: 2000-01-01T00:00:00+05:00
   ....: 2000-01-01T00:00:00+06:00"""

In [66]: df = pd.read_csv(io.StringIO(content), parse_dates=['a'])

In [67]: df.a
Out[67]:
0    2000-01-01 00:00:00+05:00
1    2000-01-01 00:00:00+06:00
Name: a, Length: 2, dtype: object
As can be seen, the dtype is object; each value in the column is a string. To convert the strings to an array of datetimes, use the date_parser argument:
In [68]: df = pd.read_csv(io.StringIO(content), parse_dates=['a'],
   ....:                  date_parser=lambda col: pd.to_datetime(col, utc=True))

In [69]: df.a
Out[69]:
0   1999-12-31 19:00:00+00:00
1   1999-12-31 18:00:00+00:00
Name: a, Length: 2, dtype: datetime64[ns, UTC]
See Parsing datetime strings with timezone offsets for more.
Time values in dt.end_time and to_timestamp(how='end')
The time values in Period and PeriodIndex objects are now set to ‘23:59:59.999999999’ when calling Series.dt.end_time, Period.end_time, PeriodIndex.end_time, Period.to_timestamp() with how='end', or PeriodIndex.to_timestamp() with how='end' (GH17157)
In [2]: p = pd.Period('2017-01-01', 'D')

In [3]: pi = pd.PeriodIndex([p])

In [4]: pd.Series(pi).dt.end_time[0]
Out[4]: Timestamp('2017-01-01 00:00:00')

In [5]: p.end_time
Out[5]: Timestamp('2017-01-01 23:59:59.999999999')
Calling Series.dt.end_time will now result in a time of ‘23:59:59.999999999’, as is the case with Period.end_time. For example:
In [70]: p = pd.Period('2017-01-01', 'D')

In [71]: pi = pd.PeriodIndex([p])

In [72]: pd.Series(pi).dt.end_time[0]
Out[72]: Timestamp('2017-01-01 23:59:59.999999999')

In [73]: p.end_time
Out[73]: Timestamp('2017-01-01 23:59:59.999999999')
The return type of Series.unique() for datetime with timezone values has changed from a numpy.ndarray of Timestamp objects to an arrays.DatetimeArray (GH24024).
In [74]: ser = pd.Series([pd.Timestamp('2000', tz='UTC'),
   ....:                  pd.Timestamp('2000', tz='UTC')])
Previous behavior:

In [3]: ser.unique()
Out[3]: array([Timestamp('2000-01-01 00:00:00+0000', tz='UTC')], dtype=object)
New behavior:

In [75]: ser.unique()
Out[75]:
<DatetimeArray>
['2000-01-01 00:00:00+00:00']
Length: 1, dtype: datetime64[ns, UTC]
SparseArray, the array backing SparseSeries and the columns in a SparseDataFrame, is now an extension array (GH21978, GH19056, GH22835). To conform to this interface and for consistency with the rest of pandas, some API breaking changes were made:
SparseArray is no longer a subclass of numpy.ndarray. To convert a SparseArray to a NumPy array, use numpy.asarray().
SparseArray.dtype and SparseSeries.dtype are now instances of SparseDtype, rather than np.dtype. Access the underlying dtype with SparseDtype.subtype.
numpy.asarray(sparse_array) now returns a dense array with all the values, not just the non-fill-value values (GH14167)
SparseArray.take now matches the API of pandas.api.extensions.ExtensionArray.take() (GH19506):
The default value of allow_fill has changed from False to True.
The out and mode parameters are no longer accepted (previously, this raised if they were specified).
Passing a scalar for indices is no longer allowed.
The result of concat() with a mix of sparse and dense Series is a Series with sparse values, rather than a SparseSeries.
SparseDataFrame.combine and DataFrame.combine_first no longer support combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
Setting SparseArray.fill_value to a fill value with a different dtype is now allowed.
DataFrame[column] is now a Series with sparse values, rather than a SparseSeries, when slicing a single column with sparse values (GH23559).
The result of Series.where() is now a Series with sparse values, like with other extension arrays (GH24077)
Some new warnings are issued for operations that require or are likely to materialize a large dense array:
An errors.PerformanceWarning is issued when using fillna with a method, as a dense array is constructed to create the filled array. Filling with a value is the efficient way to fill a sparse array.
An errors.PerformanceWarning is now issued when concatenating sparse Series with differing fill values. The fill value from the first sparse array continues to be used.
In addition to these API breaking changes, many Performance Improvements and Bug Fixes have been made.
Finally, a Series.sparse accessor was added to provide sparse-specific methods like Series.sparse.from_coo().
In [76]: s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')

In [77]: s.sparse.density
Out[77]: 0.6
get_dummies() always returns a DataFrame
Previously, when sparse=True was passed to get_dummies(), the return value could be either a DataFrame or a SparseDataFrame, depending on whether all or just a subset of the columns were dummy-encoded. Now, a DataFrame is always returned (GH24284).
sparse=True
The first get_dummies() returned a DataFrame because the column A is not dummy-encoded. When just ["B", "C"] were passed to get_dummies, all the columns were dummy-encoded, and a SparseDataFrame was returned.
In [2]: df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})

In [3]: type(pd.get_dummies(df, sparse=True))
Out[3]: pandas.core.frame.DataFrame

In [4]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[4]: pandas.core.sparse.frame.SparseDataFrame
Now, the return type is consistently a DataFrame.
In [78]: type(pd.get_dummies(df, sparse=True))
Out[78]: pandas.core.frame.DataFrame

In [79]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[79]: pandas.core.frame.DataFrame
There’s no difference in memory usage between a SparseDataFrame and a DataFrame with sparse values. The memory usage will be the same as in the previous version of pandas.
DataFrame.to_dict(orient='index')
DataFrame.to_dict() now raises a ValueError when used with orient='index' and a non-unique index, instead of silently losing data (GH22801)
In [80]: df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])

In [81]: df
Out[81]:
   a     b
A  1  0.50
A  2  0.75

[2 rows x 2 columns]

In [82]: df.to_dict(orient='index')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-82-f5309a7c6adb> in <module>
----> 1 df.to_dict(orient='index')

/pandas/pandas/core/frame.py in to_dict(self, orient, into)
   1605         elif orient == "index":
   1606             if not self.index.is_unique:
-> 1607                 raise ValueError("DataFrame index must be unique for orient='index'.")
   1608             return into_c(
   1609                 (t[0], dict(zip(self.columns, t[1:])))

ValueError: DataFrame index must be unique for orient='index'.
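If the data still needs to come out row by row despite a duplicated index, one workaround (a sketch, not part of the release notes) is a row orientation that does not key on the index, such as orient='records':

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])

# orient='records' does not use the index as a dict key,
# so duplicate index labels are not a problem.
records = df.to_dict(orient='records')
```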
Creating a Tick object (Day, Hour, Minute, Second, Milli, Micro, Nano) with normalize=True is no longer supported. This prevents unexpected behavior where addition could fail to be monotone or associative. (GH21427)
Previous behavior:

In [2]: ts = pd.Timestamp('2018-06-11 18:01:14')

In [3]: ts
Out[3]: Timestamp('2018-06-11 18:01:14')

In [4]: tic = pd.offsets.Hour(n=2, normalize=True)

In [5]: tic
Out[5]: <2 * Hours>

In [6]: ts + tic
Out[6]: Timestamp('2018-06-11 00:00:00')

In [7]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[7]: False
New behavior:

In [83]: ts = pd.Timestamp('2018-06-11 18:01:14')

In [84]: tic = pd.offsets.Hour(n=2)

In [85]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[85]: True
Subtraction of a Period from another Period will give a DateOffset instead of an integer (GH21314)
Previous behavior:

In [2]: june = pd.Period('June 2018')

In [3]: april = pd.Period('April 2018')

In [4]: june - april
Out[4]: 2
New behavior:

In [86]: june = pd.Period('June 2018')

In [87]: april = pd.Period('April 2018')

In [88]: june - april
Out[88]: <2 * MonthEnds>
Similarly, subtraction of a Period from a PeriodIndex will now return an Index of DateOffset objects instead of an Int64Index.
Previous behavior:

In [2]: pi = pd.period_range('June 2018', freq='M', periods=3)

In [3]: pi - pi[0]
Out[3]: Int64Index([0, 1, 2], dtype='int64')
New behavior:

In [89]: pi = pd.period_range('June 2018', freq='M', periods=3)

In [90]: pi - pi[0]
Out[90]: Index([<0 * MonthEnds>, <MonthEnd>, <2 * MonthEnds>], dtype='object')
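If integer results are still needed under the new behavior, the offset's n attribute recovers the count; a minimal sketch:

```python
import pandas as pd

june = pd.Period('June 2018')
april = pd.Period('April 2018')

# Period subtraction now yields a DateOffset such as <2 * MonthEnds>;
# its .n attribute holds the integer multiple.
diff = june - april
months = diff.n
```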
Adding or subtracting NaN from a DataFrame column with timedelta64[ns] dtype will now raise a TypeError instead of returning all-NaT. This is for compatibility with TimedeltaIndex and Series behavior (GH22163)
In [91]: df = pd.DataFrame([pd.Timedelta(days=1)])

In [92]: df
Out[92]:
       0
0 1 days

[1 rows x 1 columns]
Previous behavior:

In [4]: df = pd.DataFrame([pd.Timedelta(days=1)])

In [5]: df - np.nan
Out[5]:
    0
0 NaT
New behavior:

In [2]: df - np.nan
...
TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'
Previously, the broadcasting behavior of DataFrame comparison operations (==, !=, …) was inconsistent with the behavior of arithmetic operations (+, -, …). The behavior of the comparison operations has been changed to match the arithmetic operations in these cases. (GH22880)
The affected cases are:
operating against a 2-dimensional np.ndarray with either 1 row or 1 column will now broadcast the same way a np.ndarray would (GH23000).
a list or tuple with length matching the number of rows in the DataFrame will now raise ValueError instead of operating column-by-column (GH22880).
a list or tuple with length matching the number of columns in the DataFrame will now operate row-by-row instead of raising ValueError (GH22880).
In [93]: arr = np.arange(6).reshape(3, 2)

In [94]: df = pd.DataFrame(arr)

In [95]: df
Out[95]:
   0  1
0  0  1
1  2  3
2  4  5

[3 rows x 2 columns]
Previous behavior:

In [5]: df == arr[[0], :]
   ...: # comparison previously broadcast where arithmetic would raise
Out[5]:
       0      1
0   True   True
1  False  False
2  False  False

In [6]: df + arr[[0], :]
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

In [7]: df == (1, 2)
   ...: # length matches number of columns;
   ...: # comparison previously raised where arithmetic would broadcast
...
ValueError: Invalid broadcasting comparison [(1, 2)] with block values

In [8]: df + (1, 2)
Out[8]:
   0  1
0  1  3
1  3  5
2  5  7

In [9]: df == (1, 2, 3)
   ...: # length matches number of rows
   ...: # comparison previously broadcast where arithmetic would raise
Out[9]:
       0      1
0  False   True
1   True  False
2  False  False

In [10]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3
# Comparison operations and arithmetic operations both broadcast.
In [96]: df == arr[[0], :]
Out[96]:
       0      1
0   True   True
1  False  False
2  False  False

[3 rows x 2 columns]

In [97]: df + arr[[0], :]
Out[97]:
   0  1
0  0  2
1  2  4
2  4  6

[3 rows x 2 columns]
# Comparison operations and arithmetic operations both broadcast.
In [98]: df == (1, 2)
Out[98]:
       0      1
0  False  False
1  False  False
2  False  False

[3 rows x 2 columns]

In [99]: df + (1, 2)
Out[99]:
   0  1
0  1  3
1  3  5
2  5  7

[3 rows x 2 columns]
# Comparison operations and arithmetic operations both raise ValueError.
In [6]: df == (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3

In [7]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3
DataFrame arithmetic operations when operating with 2-dimensional np.ndarray objects now broadcast in the same way as np.ndarray broadcast. (GH23000)
In [100]: arr = np.arange(6).reshape(3, 2)

In [101]: df = pd.DataFrame(arr)

In [102]: df
Out[102]:
   0  1
0  0  1
1  2  3
2  4  5

[3 rows x 2 columns]
Previous behavior:

In [5]: df + arr[[0], :]  # 1 row, 2 columns
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

In [6]: df + arr[:, [1]]  # 1 column, 3 rows
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (3, 1)
New behavior:

In [103]: df + arr[[0], :]  # 1 row, 2 columns
Out[103]:
   0  1
0  0  2
1  2  4
2  4  6

[3 rows x 2 columns]

In [104]: df + arr[:, [1]]  # 1 column, 3 rows
Out[104]:
   0   1
0  1   2
1  5   6
2  9  10

[3 rows x 2 columns]
Series and Index constructors now raise when the data is incompatible with a passed dtype= (GH15832)
Previous behavior:

In [4]: pd.Series([-1], dtype="uint64")
Out[4]:
0    18446744073709551615
dtype: uint64
New behavior:

In [4]: pd.Series([-1], dtype="uint64")
Out[4]:
...
OverflowError: Trying to coerce negative values to unsigned integers
Calling pandas.concat() on a Categorical of ints with NA values now causes them to be processed as objects when concatenating with anything other than another Categorical of ints (GH19214)
In [105]: s = pd.Series([0, 1, np.nan])

In [106]: c = pd.Series([0, 1, np.nan], dtype="category")
Previous behavior:

In [3]: pd.concat([s, c])
Out[3]:
0    0.0
1    1.0
2    NaN
0    0.0
1    1.0
2    NaN
dtype: float64
New behavior:

In [107]: pd.concat([s, c])
Out[107]:
0    0.0
1    1.0
2    NaN
0    0.0
1    1.0
2    NaN
Length: 6, dtype: float64
For DatetimeIndex and TimedeltaIndex with a non-None freq attribute, addition or subtraction of an integer-dtyped array or Index will return an object of the same class (GH19959)
DateOffset objects are now immutable. Attempting to alter one of these will now raise AttributeError (GH21341)
PeriodIndex subtraction of another PeriodIndex will now return an object-dtype Index of DateOffset objects instead of raising a TypeError (GH20049)
cut() and qcut() now returns a DatetimeIndex or TimedeltaIndex bins when the input is datetime or timedelta dtype respectively and retbins=True (GH19891)
DatetimeIndex.to_period() and Timestamp.to_period() will issue a warning when timezone information will be lost (GH21333)
PeriodIndex.tz_convert() and PeriodIndex.tz_localize() have been removed (GH21781)
A newly constructed empty DataFrame with integer as the dtype will now only be cast to float64 if index is specified (GH22858)
Series.str.cat() will now raise if others is a set (GH23009)
Passing scalar values to DatetimeIndex or TimedeltaIndex will now raise TypeError instead of ValueError (GH23539)
max_rows and max_cols parameters removed from HTMLFormatter since truncation is handled by DataFrameFormatter (GH23818)
read_csv() will now raise a ValueError if a column with missing values is declared as having dtype bool (GH20591)
The column order of the resultant DataFrame from MultiIndex.to_frame() is now guaranteed to match the MultiIndex.names order. (GH22420)
Incorrectly passing a DatetimeIndex to MultiIndex.from_tuples(), rather than a sequence of tuples, now raises a TypeError rather than a ValueError (GH24024)
pd.offsets.generate_range() argument time_rule has been removed; use offset instead (GH24157)
In 0.23.x, pandas would raise a ValueError on a merge of a numeric column (e.g. int dtyped column) and an object dtyped column (GH9780). We have re-enabled the ability to merge object and other dtypes; pandas will still raise on a merge between a numeric and an object dtyped column that is composed only of strings (GH21681)
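A minimal sketch of the restriction that remains: merging an integer key against an object key composed only of strings still raises.

```python
import pandas as pd

left = pd.DataFrame({"key": [1, 2], "lval": [10, 20]})
right = pd.DataFrame({"key": ["1", "2"], "rval": [30, 40]})

# Merging an int64 key with a string-only object key raises ValueError.
try:
    pd.merge(left, right, on="key")
    raised = False
except ValueError:
    raised = True
```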
Accessing a level of a MultiIndex with a duplicate name (e.g. in get_level_values()) now raises a ValueError instead of a KeyError (GH21678).
Invalid construction of IntervalDtype will now always raise a TypeError rather than a ValueError if the subdtype is invalid (GH21185)
Trying to reindex a DataFrame with a non unique MultiIndex now raises a ValueError instead of an Exception (GH21770)
Index subtraction will attempt to operate element-wise instead of raising TypeError (GH19369)
pandas.io.formats.style.Styler supports a number-format property when using to_excel() (GH22015)
DataFrame.corr() and Series.corr() now raise a ValueError along with a helpful error message instead of a KeyError when supplied with an invalid method (GH22298)
shift() will now always return a copy, instead of the previous behaviour of returning self when shifting by 0 (GH22397)
DataFrame.set_index() now gives a better (and less frequent) KeyError, raises a ValueError for incorrect types, and will not fail on duplicate column names with drop=True. (GH22484)
Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH22784)
DateOffset attribute _cacheable and method _should_cache have been removed (GH23118)
Series.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH23801).
Categorical.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH23466).
Categorical.searchsorted() now raises a KeyError rather than a ValueError if a searched-for key is not found in its categories (GH23466).
Index.hasnans() and Series.hasnans() now always return a python boolean. Previously, a python or a numpy boolean could be returned, depending on circumstances (GH23294).
The order of the arguments of DataFrame.to_html() and DataFrame.to_string() is rearranged to be consistent with each other. (GH23614)
CategoricalIndex.reindex() now raises a ValueError if the target index is non-unique and not equal to the current index. It previously only raised if the target index was not of a categorical dtype (GH23963).
Series.to_list() and Index.to_list() are now aliases of Series.tolist and Index.tolist, respectively (GH8826)
The result of SparseSeries.unstack is now a DataFrame with sparse values, rather than a SparseDataFrame (GH24372).
DatetimeIndex and TimedeltaIndex no longer ignore the dtype precision. Passing a non-nanosecond resolution dtype will raise a ValueError (GH24753)
Equality and hashability
pandas now requires that extension dtypes be hashable (i.e. the respective ExtensionDtype objects; hashability is not a requirement for the values of the corresponding ExtensionArray). The base class implements a default __eq__ and __hash__. If you have a parametrized dtype, you should update the ExtensionDtype._metadata tuple to match the signature of your __init__ method. See pandas.api.extensions.ExtensionDtype for more (GH22476).
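A minimal sketch of a parametrized dtype (the UnitDtype name and unit parameter are invented for illustration): listing the __init__ parameters in _metadata lets the inherited __eq__ and __hash__ compare and hash those parameters.

```python
from pandas.api.extensions import ExtensionDtype

class UnitDtype(ExtensionDtype):
    """Hypothetical parametrized dtype carrying a unit string."""
    name = "unit"
    type = float

    def __init__(self, unit="m"):
        self.unit = unit

    # _metadata must match the __init__ signature: the base-class
    # __eq__/__hash__ compare and hash exactly these attributes.
    _metadata = ("unit",)
```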
New and changed methods
dropna() has been added (GH21185)
repeat() has been added (GH24349)
The ExtensionArray constructor _from_sequence now takes the keyword argument copy=False (GH21185)
pandas.api.extensions.ExtensionArray.shift() added as part of the basic ExtensionArray interface (GH22387).
searchsorted() has been added (GH24350)
Support for reduction operations such as sum and mean via an opt-in base class method override (GH22762)
ExtensionArray.isna() is allowed to return an ExtensionArray (GH22325).
Dtype changes
ExtensionDtype has gained the ability to instantiate from string dtypes, e.g. decimal would instantiate a registered DecimalDtype; furthermore the ExtensionDtype has gained the method construct_array_type (GH21185)
Added ExtensionDtype._is_numeric for controlling whether an extension dtype is considered numeric (GH22290).
Added pandas.api.types.register_extension_dtype() to register an extension type with pandas (GH22664)
Updated the .type attribute for PeriodDtype, DatetimeTZDtype, and IntervalDtype to be instances of the dtype (Period, Timestamp, and Interval respectively) (GH22938)
Operator support
A Series based on an ExtensionArray now supports arithmetic and comparison operators (GH19577). There are two approaches for providing operator support for an ExtensionArray:
Define each of the operators on your ExtensionArray subclass.
Use an operator implementation from pandas that depends on operators that are already defined on the underlying elements (scalars) of the ExtensionArray.
See the ExtensionArray Operator Support documentation section for details on both ways of adding operator support.
Other changes
A default repr for pandas.api.extensions.ExtensionArray is now provided (GH23601).
ExtensionArray._formatting_values() is deprecated. Use ExtensionArray._formatter instead. (GH23601)
An ExtensionArray with a boolean dtype now works correctly as a boolean indexer. pandas.api.types.is_bool_dtype() now properly considers them boolean (GH22326)
Bug fixes
Bug in Series.get() for Series using ExtensionArray and integer index (GH21257)
shift() now dispatches to ExtensionArray.shift() (GH22386)
Series.combine() works correctly with ExtensionArray inside of Series (GH20825)
Series.combine() with scalar argument now works for any function type (GH21248)
Series.astype() and DataFrame.astype() now dispatch to ExtensionArray.astype() (GH21185).
Bug when concatenating multiple Series with different extension dtypes not casting to object dtype (GH22994)
Series backed by an ExtensionArray now work with util.hash_pandas_object() (GH23066)
DataFrame.stack() no longer converts to object dtype for DataFrames where each column has the same extension dtype. The output Series will have the same dtype as the columns (GH23077).
Series.unstack() and DataFrame.unstack() no longer convert extension arrays to object-dtype ndarrays. Each column in the output DataFrame will now have the same dtype as the input (GH23077).
Bug in DataFrame.groupby() when aggregating on an ExtensionArray: the result did not preserve the actual ExtensionArray dtype (GH23227).
Bug in pandas.merge() when merging on an extension array-backed column (GH23020).
MultiIndex.labels has been deprecated and replaced by MultiIndex.codes. The functionality is unchanged. The new name better reflects the nature of these codes and makes the MultiIndex API more similar to the API for CategoricalIndex (GH13443). As a consequence, other uses of the name labels in MultiIndex have also been deprecated and replaced with codes:
You should initialize a MultiIndex instance using a parameter named codes rather than labels.
MultiIndex.set_labels has been deprecated in favor of MultiIndex.set_codes().
For method MultiIndex.copy(), the labels parameter has been deprecated and replaced by a codes parameter.
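A short sketch of the new spelling, constructing a MultiIndex directly with the codes parameter:

```python
import pandas as pd

# 'codes' (formerly 'labels') are integer positions into 'levels'.
mi = pd.MultiIndex(
    levels=[['a', 'b'], [1, 2]],
    codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
)
```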
DataFrame.to_stata(), read_stata(), StataReader and StataWriter have deprecated the encoding argument. The encoding of a Stata dta file is determined by the file type and cannot be changed (GH21244)
MultiIndex.to_hierarchical() is deprecated and will be removed in a future version (GH21613)
Series.ptp() is deprecated. Use numpy.ptp instead (GH21614)
Series.compress() is deprecated. Use Series[condition] instead (GH18262)
The signature of Series.to_csv() has been aligned with that of DataFrame.to_csv(): the first argument is now named path_or_buf, the order of subsequent arguments has changed, and the header argument now defaults to True (GH19715)
Categorical.from_codes() has deprecated providing float values for the codes argument. (GH21767)
pandas.read_table() is deprecated. Instead, use read_csv() passing sep='\t' if necessary. This deprecation has been removed in 0.25.0. (GH21948)
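The replacement is a one-line change; a sketch reading tab-separated text with read_csv:

```python
import io
import pandas as pd

data = "a\tb\n1\t2\n3\t4\n"

# read_csv with sep='\t' replaces the deprecated read_table.
df = pd.read_csv(io.StringIO(data), sep="\t")
```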
Series.str.cat() has deprecated using arbitrary list-likes within list-likes. A list-like container may still contain many Series, Index or 1-dimensional np.ndarray, or alternatively, only scalar values. (GH21950)
FrozenNDArray.searchsorted() has deprecated the v parameter in favor of value (GH14645)
DatetimeIndex.shift() and PeriodIndex.shift() now accept periods argument instead of n for consistency with Index.shift() and Series.shift(). Using n throws a deprecation warning (GH22458, GH22912)
The fastpath keyword of the different Index constructors is deprecated (GH23110).
Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have deprecated the errors argument in favor of the nonexistent argument (GH8917)
The class FrozenNDArray has been deprecated. When unpickling, FrozenNDArray will be unpickled to np.ndarray once this class is removed (GH9031)
The methods DataFrame.update() and Panel.update() have deprecated the raise_conflict=False|True keyword in favor of errors='ignore'|'raise' (GH23585)
The methods Series.str.partition() and Series.str.rpartition() have deprecated the pat keyword in favor of sep (GH22676)
Deprecated the nthreads keyword of pandas.read_feather() in favor of use_threads to reflect the changes in pyarrow>=0.11.0. (GH23053)
pandas.read_excel() has deprecated accepting usecols as an integer. Please pass in a list of ints from 0 to usecols inclusive instead (GH23527)
Constructing a TimedeltaIndex from data with datetime64-dtyped data is deprecated, will raise TypeError in a future version (GH23539)
Constructing a DatetimeIndex from data with timedelta64-dtyped data is deprecated, will raise TypeError in a future version (GH23675)
The keep_tz=False option (the default) of the keep_tz keyword of DatetimeIndex.to_series() is deprecated (GH17832).
Timezone converting a tz-aware datetime.datetime or Timestamp with Timestamp and the tz argument is now deprecated. Instead, use Timestamp.tz_convert() (GH23579)
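The recommended spelling converts an already tz-aware Timestamp rather than re-wrapping it; a minimal sketch:

```python
import pandas as pd

ts = pd.Timestamp("2019-01-01 00:00", tz="UTC")

# Use Timestamp.tz_convert instead of Timestamp(ts, tz=...).
eastern = ts.tz_convert("US/Eastern")
```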
pandas.api.types.is_period() is deprecated in favor of pandas.api.types.is_period_dtype (GH23917)
pandas.api.types.is_datetimetz() is deprecated in favor of pandas.api.types.is_datetime64tz (GH23917)
Creating a TimedeltaIndex, DatetimeIndex, or PeriodIndex by passing range arguments start, end, and periods is deprecated in favor of timedelta_range(), date_range(), or period_range() (GH23919)
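A sketch of the preferred range constructors:

```python
import pandas as pd

# Use the dedicated range functions instead of passing start/end/periods
# to the TimedeltaIndex, DatetimeIndex, or PeriodIndex constructors.
dti = pd.date_range(start="2019-01-01", periods=3, freq="D")
tdi = pd.timedelta_range(start="1D", periods=3)
pi = pd.period_range(start="2019-01", periods=3, freq="M")
```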
Passing a string alias like 'datetime64[ns, UTC]' as the unit parameter to DatetimeTZDtype is deprecated. Use DatetimeTZDtype.construct_from_string instead (GH23990).
The skipna parameter of infer_dtype() will switch to True by default in a future version of pandas (GH17066, GH24050)
In Series.where() with Categorical data, providing an other that is not present in the categories is deprecated. Convert the categorical to a different dtype or add the other to the categories first (GH24077).
Series.clip_lower(), Series.clip_upper(), DataFrame.clip_lower() and DataFrame.clip_upper() are deprecated and will be removed in a future version. Use Series.clip(lower=threshold), Series.clip(upper=threshold) and the equivalent DataFrame methods (GH24203)
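A minimal sketch of the replacement calls:

```python
import pandas as pd

s = pd.Series([-2, 0, 5])

# Series.clip(lower=...) / Series.clip(upper=...) replace the deprecated
# clip_lower and clip_upper methods.
low = s.clip(lower=0)
high = s.clip(upper=1)
```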
Series.nonzero() is deprecated and will be removed in a future version (GH18262)
Passing an integer to Series.fillna() and DataFrame.fillna() with timedelta64[ns] dtypes is deprecated, will raise TypeError in a future version. Use obj.fillna(pd.Timedelta(...)) instead (GH24694)
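A minimal sketch of the recommended spelling, passing an actual Timedelta rather than a bare integer:

```python
import pandas as pd

s = pd.Series([pd.Timedelta(days=1), pd.NaT])

# fillna with a pd.Timedelta instead of an integer.
filled = s.fillna(pd.Timedelta(0))
```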
Series.cat.categorical, Series.cat.name and Series.cat.index have been deprecated. Use the attributes on Series.cat or Series directly. (GH24751).
Passing a dtype without a precision like np.dtype('datetime64') or timedelta64 to Index, DatetimeIndex and TimedeltaIndex is now deprecated. Use the nanosecond-precision dtype instead (GH24753).
In the past, users could—in some cases—add or subtract integers or integer-dtype arrays from Timestamp, DatetimeIndex and TimedeltaIndex.
This usage is now deprecated. Instead add or subtract integer multiples of the object’s freq attribute (GH21939, GH23878).
In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())

In [6]: ts + 2
Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')

In [7]: tdi = pd.timedelta_range('1D', periods=2)

In [8]: tdi - np.array([2, 1])
Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')

In [10]: dti + pd.Index([1, 2])
Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)
In [108]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())

In [109]: ts + 2 * ts.freq
Out[109]: Timestamp('1994-05-06 14:15:16', freq='H')

In [110]: tdi = pd.timedelta_range('1D', periods=2)

In [111]: tdi - np.array([2 * tdi.freq, 1 * tdi.freq])
Out[111]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

In [112]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')

In [113]: dti + pd.Index([1 * dti.freq, 2 * dti.freq])
Out[113]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)
The behavior of DatetimeIndex when passed integer data and a timezone is changing in a future version of pandas. Previously, these were interpreted as wall times in the desired timezone. In the future, these will be interpreted as wall times in UTC, which are then converted to the desired timezone (GH24559).
The default behavior remains the same, but issues a warning:
In [3]: pd.DatetimeIndex([946684800000000000], tz="US/Central")
/bin/ipython:1: FutureWarning:
    Passing integer-dtype data and a timezone to DatetimeIndex. Integer values
    will be interpreted differently in a future version of pandas. Previously,
    these were viewed as datetime64[ns] values representing the wall time
    *in the specified timezone*. In the future, these will be viewed as
    datetime64[ns] values representing the wall time *in UTC*. This is similar
    to a nanosecond-precision UNIX epoch.

    To accept the future behavior, use

        pd.to_datetime(integer_data, utc=True).tz_convert(tz)

    To keep the previous behavior, use

        pd.to_datetime(integer_data).tz_localize(tz)

Out[3]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
As the warning message explains, opt in to the future behavior by specifying that the integer values are UTC, and then converting to the final timezone:
In [114]: pd.to_datetime([946684800000000000], utc=True).tz_convert('US/Central')
Out[114]: DatetimeIndex(['1999-12-31 18:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
The old behavior can be retained by localizing directly to the final timezone:
In [115]: pd.to_datetime([946684800000000000]).tz_localize('US/Central')
Out[115]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)
The conversion from a Series or Index with timezone-aware datetime data will change to preserve timezones by default (GH23569).
NumPy doesn’t have a dedicated dtype for timezone-aware datetimes. In the past, converting a Series or DatetimeIndex with timezone-aware datetimes would convert to a NumPy array by
converting the tz-aware data to UTC
dropping the timezone info
returning a numpy.ndarray with datetime64[ns] dtype
Future versions of pandas will preserve the timezone information by returning an object-dtype NumPy array where each value is a Timestamp with the correct timezone attached.
In [116]: ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))

In [117]: ser
Out[117]:
0   2000-01-01 00:00:00+01:00
1   2000-01-02 00:00:00+01:00
Length: 2, dtype: datetime64[ns, CET]
The default behavior remains the same, but issues a warning
In [8]: np.asarray(ser)
/bin/ipython:1: FutureWarning: Converting timezone-aware DatetimeArray to
timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will
return an ndarray with 'object' dtype where each element is a
'pandas.Timestamp' with the correct 'tz'.

    To accept the future behavior, pass 'dtype=object'.
    To keep the old behavior, pass 'dtype="datetime64[ns]"'.

Out[8]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')
The previous or future behavior can be obtained, without any warnings, by specifying the dtype
In [118]: np.asarray(ser, dtype='datetime64[ns]')
Out[118]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')
Future behavior
# New behavior
In [119]: np.asarray(ser, dtype=object)
Out[119]:
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')], dtype=object)
Or by using Series.to_numpy()
In [120]: ser.to_numpy()
Out[120]:
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')], dtype=object)

In [121]: ser.to_numpy(dtype="datetime64[ns]")
Out[121]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')
All the above applies to a DatetimeIndex with tz-aware values as well.
The LongPanel and WidePanel classes have been removed (GH10892)
Series.repeat() has renamed the reps argument to repeats (GH14645)
Several private functions were removed from the (non-public) module pandas.core.common (GH22001)
Removal of the previously deprecated module pandas.core.datetools (GH14105, GH14094)
Strings passed into DataFrame.groupby() that refer to both column and index levels will raise a ValueError (GH14432)
Index.repeat() and MultiIndex.repeat() have renamed the n argument to repeats (GH14645)
The Series constructor and .astype method will now raise a ValueError if timestamp dtypes are passed in without a unit (e.g. np.datetime64) for the dtype parameter (GH15987)
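A minimal sketch of the accepted spelling, specifying a unit in the dtype string:

```python
import pandas as pd

s = pd.Series(["2019-01-01", "2019-01-02"])

# A unit is required: 'datetime64[ns]' is accepted,
# while a bare np.datetime64 (no unit) now raises ValueError.
converted = s.astype("datetime64[ns]")
```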
Removed the previously deprecated as_indexer keyword from str.match() (GH22356, GH6581)
The modules pandas.types, pandas.computation, and pandas.util.decorators have been removed (GH16157, GH16250)
Removed the pandas.formats.style shim for pandas.io.formats.style.Styler (GH16059)
pandas.pnow, pandas.match, pandas.groupby, pd.get_store, pd.Expr, and pd.Term have been removed (GH15538, GH15940)
Categorical.searchsorted() and Series.searchsorted() have renamed the v argument to value (GH14645)
pandas.parser, pandas.lib, and pandas.tslib have been removed (GH15537)
Index.searchsorted() has renamed the key argument to value (GH14645)
DataFrame.consolidate and Series.consolidate have been removed (GH15501)
Removal of the previously deprecated module pandas.json (GH19944)
The module pandas.tools has been removed (GH15358, GH16005)
SparseArray.get_values() and SparseArray.to_dense() have dropped the fill parameter (GH14686)
DataFrame.sortlevel and Series.sortlevel have been removed (GH15099)
SparseSeries.to_dense() has dropped the sparse_only parameter (GH14686)
DataFrame.astype() and Series.astype() have renamed the raise_on_error argument to errors (GH14967)
is_sequence, is_any_int_dtype, and is_floating_dtype have been removed from pandas.api.types (GH16163, GH16189)
is_sequence
is_any_int_dtype
is_floating_dtype
pandas.api.types
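As a minimal sketch of one of the renames above, the first argument to Index.searchsorted() is now named value (formerly key), so it can be passed by keyword:

```python
import pandas as pd

idx = pd.Index([1, 2, 3])
# The first argument is now named `value` (formerly `key`)
pos = idx.searchsorted(value=2)
print(pos)
```

The same rename applies to Categorical.searchsorted() and Series.searchsorted(), whose argument was previously named v.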
Slicing Series and DataFrames with a monotonically increasing CategoricalIndex is now very fast and has speed comparable to slicing with an Int64Index. The speed increase applies both when indexing by label (using .loc) and by position (.iloc) (GH20395)
Slicing a monotonically increasing CategoricalIndex itself (i.e. ci[1000:2000]) shows similar speed improvements (GH21659)
Improved performance of CategoricalIndex.equals() when comparing to another CategoricalIndex (GH24023)
Improved performance of Series.describe() in case of numeric dtypes (GH21274)
Improved performance of pandas.core.groupby.GroupBy.rank() when dealing with tied rankings (GH21237)
Improved performance of DataFrame.set_index() with columns consisting of Period objects (GH21582, GH21606)
Improved performance of Series.at() and Index.get_value() for Extension Array values (e.g. Categorical) (GH24204)
Improved performance of membership checks in Categorical and CategoricalIndex (i.e. x in cat-style checks are much faster). CategoricalIndex.contains() is likewise much faster (GH21369, GH21508)
Improved performance of HDFStore.groups() and dependent functions like HDFStore.keys() (i.e. x in store checks are much faster) (GH21372)
Improved the performance of pandas.get_dummies() with sparse=True (GH21997)
Improved performance of IndexEngine.get_indexer_non_unique() for sorted, non-unique indexes (GH9466)
Improved performance of PeriodIndex.unique() (GH23083)
Improved performance of concat() for Series objects (GH23404)
Improved performance of DatetimeIndex.normalize() and Timestamp.normalize() for timezone-naive or UTC datetimes (GH23634)
Improved performance of DatetimeIndex.tz_localize() and various DatetimeIndex attributes with dateutil UTC timezone (GH23772)
Fixed a performance regression on Windows with Python 3.7 of read_csv() (GH23516)
Improved performance of the Categorical constructor for Series objects (GH23814)
Improved performance of where() for Categorical data (GH24077)
Improved performance of iterating over a Series. Using DataFrame.itertuples() now creates iterators without internally allocating lists of all elements (GH20783)
Improved performance of the Period constructor, additionally benefitting PeriodArray and PeriodIndex creation (GH24084, GH24118)
Improved performance of tz-aware DatetimeArray binary operations (GH24491)
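The faster membership checks mentioned above use the ordinary `in` operator; a minimal sketch:

```python
import pandas as pd

# `x in cat`-style checks are now much faster
cat = pd.Categorical(['a', 'b', 'a', 'c'])
print('a' in cat)   # membership against the categories
print('z' in cat)

# the same applies to CategoricalIndex
ci = pd.CategoricalIndex(cat)
print('b' in ci)
```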
Bug in Categorical.from_codes() where NaN values in codes were silently converted to 0 (GH21767). In the future this will raise a ValueError. This also changes the behavior of .from_codes([1.1, 2.0]).
Bug in Categorical.sort_values() where NaN values were always positioned in front regardless of the na_position value (GH22556).
Bug when indexing with a boolean-valued Categorical. Now a boolean-valued Categorical is treated as a boolean mask (GH22665)
Constructing a CategoricalIndex with empty values and boolean categories was raising a ValueError after a change to dtype coercion (GH22702).
Bug in Categorical.take() with a user-provided fill_value not encoding the fill_value, which could result in a ValueError, incorrect results, or a segmentation fault (GH23296).
In Series.unstack(), specifying a fill_value not present in the categories now raises a TypeError rather than ignoring the fill_value (GH23284)
Bug when resampling with DataFrame.resample() and aggregating on categorical data, where the categorical dtype was getting lost (GH23227)
Bug in many methods of the .str accessor, which always failed on calling the CategoricalIndex.str constructor (GH23555, GH23556)
Bug in Series.where() losing the categorical dtype for categorical data (GH24077)
Bug in Categorical.apply() where NaN values could be handled unpredictably. They now remain unchanged (GH24241)
Bug in Categorical comparison methods incorrectly raising ValueError when operating against a DataFrame (GH24630)
Bug in Categorical.set_categories() where setting fewer new categories with rename=True caused a segmentation fault (GH24675)
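A minimal sketch of the Categorical.from_codes() change: integer codes (with -1 marking missing values) continue to work, while non-integer codes such as NaN are rejected rather than silently converted to 0. The ValueError shown here reflects the behavior of current pandas releases; in 0.24 it was still a deprecation path.

```python
import numpy as np
import pandas as pd

# Valid integer codes work as before; -1 marks a missing value
cat = pd.Categorical.from_codes([0, 1, -1], categories=['a', 'b'])
print(cat)

# NaN codes are no longer silently converted to 0; current pandas
# raises a ValueError instead
try:
    pd.Categorical.from_codes([0, np.nan], categories=['a', 'b'])
except ValueError as exc:
    print(exc)
```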
Fixed bug where two DateOffset objects with different normalize attributes could evaluate as equal (GH21404)
Fixed bug where Timestamp.resolution() incorrectly returned a 1-microsecond timedelta instead of a 1-nanosecond Timedelta (GH21336, GH21365)
Bug in to_datetime() that did not consistently return an Index when box=True was specified (GH21864)
Bug in DatetimeIndex comparisons where string comparisons incorrectly raised TypeError (GH22074)
Bug in DatetimeIndex comparisons when comparing against timedelta64[ns] dtyped arrays; in some cases TypeError was incorrectly raised, in others it incorrectly failed to raise (GH22074)
Bug in DatetimeIndex comparisons when comparing against object-dtyped arrays (GH22074)
Bug in DataFrame with datetime64[ns] dtype addition and subtraction with Timedelta-like objects (GH22005, GH22163)
Bug in DataFrame with datetime64[ns] dtype addition and subtraction with DateOffset objects returning an object dtype instead of datetime64[ns] dtype (GH21610, GH22163)
Bug in DataFrame with datetime64[ns] dtype comparing against NaT incorrectly (GH22242, GH22163)
Bug in DataFrame with datetime64[ns] dtype subtracting a Timestamp-like object that incorrectly returned datetime64[ns] dtype instead of timedelta64[ns] dtype (GH8554, GH22163)
Bug in DataFrame with datetime64[ns] dtype subtracting an np.datetime64 object with non-nanosecond unit failing to convert to nanoseconds (GH18874, GH22163)
Bug in DataFrame comparisons against Timestamp-like objects failing to raise TypeError for inequality checks with mismatched types (GH8932, GH22163)
Bug in DataFrame with mixed dtypes including datetime64[ns] incorrectly raising TypeError on equality comparisons (GH13128, GH22163)
Bug in DataFrame.values returning a DatetimeIndex for a single-column DataFrame with tz-aware datetime values. Now a 2-D numpy.ndarray of Timestamp objects is returned (GH24024)
Bug in DataFrame.eq() comparison against NaT incorrectly returning True or NaN (GH15697, GH22163)
Bug in DatetimeIndex subtraction that incorrectly failed to raise OverflowError (GH22492, GH22508)
Bug in DatetimeIndex incorrectly allowing indexing with a Timedelta object (GH20464)
Bug in DatetimeIndex where the frequency was being set if the original frequency was None (GH22150)
Bug in the rounding methods of DatetimeIndex (round(), ceil(), floor()) and Timestamp (round(), ceil(), floor()) that could give rise to loss of precision (GH22591)
Bug in to_datetime() with an Index argument that would drop the name from the result (GH21697)
Bug in PeriodIndex where adding or subtracting a timedelta or Tick object produced incorrect results (GH22988)
Bug in the Series repr with period-dtype data missing a space before the data (GH23601)
Bug in date_range() when decrementing a start date to a past end date by a negative frequency (GH23270)
Bug in Series.min() which would return NaN instead of NaT when called on a series of NaT (GH23282)
Bug in Series.combine_first() not properly aligning categoricals, so that missing values in self were not filled by valid values from other (GH24147)
Bug in DataFrame.combine() with datetimelike values raising a TypeError (GH23079)
Bug in date_range() with a frequency of Day or higher where dates sufficiently far in the future could wrap around to the past instead of raising OutOfBoundsDatetime (GH14187)
Bug in period_range() ignoring the frequency of start and end when those are provided as Period objects (GH20535).
Bug in PeriodIndex with attribute freq.n greater than 1 where adding a DateOffset object would return incorrect results (GH23215)
Bug in Series that interpreted string indices as lists of characters when setting datetimelike values (GH23451)
Bug in DataFrame when creating a new column from an ndarray of Timestamp objects with timezones creating an object-dtype column, rather than datetime with timezone (GH23932)
Bug in the Timestamp constructor which would drop the frequency of an input Timestamp (GH22311)
Bug in DatetimeIndex where calling np.array(dtindex, dtype=object) would incorrectly return an array of long objects (GH23524)
Bug in Index where passing a timezone-aware DatetimeIndex and dtype=object would incorrectly raise a ValueError (GH23524)
Bug in Index where calling np.array(dtindex, dtype=object) on a timezone-naive DatetimeIndex would return an array of datetime objects instead of Timestamp objects, potentially losing nanosecond portions of the timestamps (GH23524)
Bug in Categorical.__setitem__ not allowing setting with another Categorical when both are unordered and have the same categories, but in a different order (GH24142)
Bug in date_range() where using dates with millisecond resolution or higher could return incorrect values or the wrong number of values in the index (GH24110)
Bug in DatetimeIndex where constructing a DatetimeIndex from a Categorical or CategoricalIndex would incorrectly drop timezone information (GH18664)
Bug in DatetimeIndex and TimedeltaIndex where indexing with Ellipsis would incorrectly lose the index's freq attribute (GH21282)
Clarified the error message produced when passing an incorrect freq argument to DatetimeIndex with NaT as the first entry in the passed data (GH11587)
Bug in to_datetime() where the box and utc arguments were ignored when passing a DataFrame or dict of unit mappings (GH23760)
Bug in Series.dt where the cache would not update properly after an in-place operation (GH24408)
Bug in PeriodIndex where comparisons against an array-like object with length 1 failed to raise ValueError (GH23078)
Bug in DatetimeIndex.astype(), PeriodIndex.astype() and TimedeltaIndex.astype() ignoring the sign of the dtype for unsigned integer dtypes (GH24405).
Fixed bug in Series.max() with datetime64[ns] dtype failing to return NaT when nulls are present and skipna=False is passed (GH24265)
Bug in to_datetime() where arrays of datetime objects containing both timezone-aware and timezone-naive datetimes would fail to raise ValueError (GH24569)
Bug in to_datetime() with an invalid datetime format that did not coerce input to NaT even with errors='coerce' (GH24763)
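The errors='coerce' fix above means invalid inputs turn into NaT rather than raising; a minimal sketch:

```python
import pandas as pd

# Unparseable entries are coerced to NaT instead of raising
out = pd.to_datetime(['2019-01-01', 'not-a-date'], errors='coerce')
print(out)
```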
Bug in DataFrame with timedelta64[ns] dtype division by a Timedelta-like scalar incorrectly returning timedelta64[ns] dtype instead of float64 dtype (GH20088, GH22163)
Bug in adding an Index with object dtype to a Series with timedelta64[ns] dtype incorrectly raising (GH22390)
Bug in multiplying a Series with numeric dtype against a timedelta object (GH22390)
Bug in Series with numeric dtype when adding or subtracting an array or Series with timedelta64 dtype (GH22390)
Bug in Index with numeric dtype when multiplying or dividing an array with dtype timedelta64 (GH22390)
Bug in TimedeltaIndex incorrectly allowing indexing with a Timestamp object (GH20464)
Fixed bug where subtracting a Timedelta from an object-dtyped array would raise TypeError (GH21980)
Fixed bug in adding a DataFrame with all-timedelta64[ns] dtypes to a DataFrame with all-integer dtypes returning incorrect results instead of raising TypeError (GH22696)
Bug in TimedeltaIndex where adding a timezone-aware datetime scalar incorrectly returned a timezone-naive DatetimeIndex (GH23215)
Bug in TimedeltaIndex where adding np.timedelta64('NaT') incorrectly returned an all-NaT DatetimeIndex instead of an all-NaT TimedeltaIndex (GH23215)
Bug in Timedelta and to_timedelta() having inconsistencies in the supported unit strings (GH21762)
Bug in TimedeltaIndex division where dividing by another TimedeltaIndex raised TypeError instead of returning a Float64Index (GH23829, GH22631)
Bug in TimedeltaIndex comparison operations where comparing against non-Timedelta-like objects would raise TypeError instead of returning all-False for __eq__ and all-True for __ne__ (GH24056)
Bug in Timedelta comparisons when comparing with a Tick object incorrectly raising TypeError (GH24710)
Bug in Index.shift() where an AssertionError would raise when shifting across DST (GH8616)
Bug in the Timestamp constructor where passing an invalid timezone offset designator (Z) would not raise a ValueError (GH8910)
Bug in Timestamp.replace() where replacing at a DST boundary would retain an incorrect offset (GH7825)
Bug in Series.replace() with datetime64[ns, tz] data when replacing NaT (GH11792)
Bug in Timestamp when passing different string date formats with a timezone offset would produce different timezone offsets (GH12064)
Bug when comparing a tz-naive Timestamp to a tz-aware DatetimeIndex which would coerce the DatetimeIndex to tz-naive (GH12601)
Bug in Series.truncate() with a tz-aware DatetimeIndex which would cause a core dump (GH9243)
Bug in the Series constructor which would coerce tz-aware and tz-naive Timestamp to tz-aware (GH13051)
Bug in Index with datetime64[ns, tz] dtype that did not localize integer data correctly (GH20964)
Bug in DatetimeIndex where constructing with an integer and tz would not localize correctly (GH12619)
Fixed bug where DataFrame.describe() and Series.describe() on tz-aware datetimes did not show the first and last result (GH21328)
Bug in DatetimeIndex comparisons failing to raise TypeError when comparing a timezone-aware DatetimeIndex against np.datetime64 (GH22074)
Bug in DataFrame assignment with a timezone-aware scalar (GH19843)
Bug in DataFrame.asof() that raised a TypeError when attempting to compare tz-naive and tz-aware timestamps (GH21194)
Bug when constructing a DatetimeIndex with a Timestamp constructed with the replace method across DST (GH18785)
Bug when setting a new value with DataFrame.loc() with a DatetimeIndex with a DST transition (GH18308, GH20724)
Bug in Index.unique() that did not re-localize tz-aware dates correctly (GH21737)
Bug when indexing a Series with a DST transition (GH21846)
Bug in DataFrame.resample() and Series.resample() where an AmbiguousTimeError or NonExistentTimeError would raise if a timezone-aware timeseries ended on a DST transition (GH19375, GH10117)
Bug in DataFrame.drop() and Series.drop() when specifying a tz-aware Timestamp key to drop from a DatetimeIndex with a DST transition (GH21761)
Bug in the DatetimeIndex constructor where NaT and dateutil.tz.tzlocal would raise an OutOfBoundsDatetime error (GH23807)
Bug in DatetimeIndex.tz_localize() and Timestamp.tz_localize() with dateutil.tz.tzlocal near a DST transition that would return an incorrectly localized datetime (GH23807)
Bug in the Timestamp constructor where a dateutil.tz.tzutc timezone passed with a datetime.datetime argument would be converted to a pytz.UTC timezone (GH23807)
Bug in to_datetime() where utc=True was not respected when specifying a unit and errors='ignore' (GH23758)
Bug in to_datetime() where utc=True was not respected when passing a Timestamp (GH24415)
Bug in DataFrame.any() returning the wrong value when axis=1 and the data is of datetimelike type (GH23070)
Bug in DatetimeIndex.to_period() where a timezone-aware index was converted to UTC first before creating the PeriodIndex (GH22905)
Bug in DataFrame.tz_localize(), DataFrame.tz_convert(), Series.tz_localize(), and Series.tz_convert() where copy=False would mutate the original argument inplace (GH6326)
Bug in DataFrame.max() and DataFrame.min() with axis=1 where a Series with NaN would be returned when all columns contained the same timezone (GH10390)
Bug in FY5253 where date offsets could incorrectly raise an AssertionError in arithmetic operations (GH14774)
Bug in DateOffset where keyword arguments week and milliseconds were accepted and ignored. Passing these will now raise ValueError (GH19398)
Bug in adding DateOffset with DataFrame or PeriodIndex incorrectly raising TypeError (GH23215)
Bug in comparing DateOffset objects with non-DateOffset objects, particularly strings, raising ValueError instead of returning False for equality checks and True for not-equal checks (GH23524)
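A minimal sketch of the DateOffset comparison fix: comparing against an object that is not offset-like now behaves like an ordinary unequal comparison instead of raising.

```python
import pandas as pd

off = pd.DateOffset(days=1)
# Comparing against a non-DateOffset (here, an arbitrary string)
# returns False/True rather than raising ValueError
print(off == 'not-an-offset')
print(off != 'not-an-offset')
```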
Bug in Series __rmatmul__ not supporting matrix-vector multiplication (GH21530)
Bug in factorize() failing with a read-only array (GH12813)
Fixed bug where unique() handled signed zeros inconsistently: for some inputs 0.0 and -0.0 were treated as equal and for some inputs as different. They are now treated as equal for all inputs (GH21866)
Bug in DataFrame.agg(), DataFrame.transform() and DataFrame.apply() where, when supplied with a list of functions and axis=1 (e.g. df.apply(['sum', 'mean'], axis=1)), a TypeError was wrongly raised. For all three methods such calculations are now done correctly (GH16679).
Bug in Series comparison against datetime-like scalars and arrays (GH22074)
Bug in DataFrame multiplication between boolean dtype and integer returning object dtype instead of integer dtype (GH22047, GH22163)
Bug in DataFrame.apply() where, when supplied with a string argument and additional positional or keyword arguments (e.g. df.apply('sum', min_count=1)), a TypeError was wrongly raised (GH22376)
Bug in DataFrame.astype() to an extension dtype that could raise AttributeError (GH22578)
Bug in DataFrame with timedelta64[ns] dtype arithmetic operations with an ndarray with integer dtype incorrectly treating the ndarray as timedelta64[ns] dtype (GH23114)
Bug in Series.rpow() with object dtype returning NaN for 1 ** NA instead of 1 (GH22922).
Series.agg() can now handle numpy NaN-aware methods like numpy.nansum() (GH19629)
Bug in Series.rank() and DataFrame.rank() when pct=True and more than 2**24 rows are present resulted in percentages greater than 1.0 (GH18271)
Calls such as DataFrame.round() with a non-unique CategoricalIndex() now return expected data. Previously, data would be improperly duplicated (GH21809).
Added log10, floor and ceil to the list of supported functions in DataFrame.eval() (GH24139, GH24353)
Logical operations &, |, ^ between Series and Index will no longer raise ValueError (GH22092)
Checking PEP 3141 numbers in the is_scalar() function now returns True (GH22903)
Reduction methods like Series.sum() now accept the default value of keepdims=False when called from a NumPy ufunc, rather than raising a TypeError. Full support for keepdims has not been implemented (GH24356).
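A minimal sketch of the row-wise aggregation fix above: passing a list of functions with axis=1 now aggregates each row instead of raising TypeError.

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
# Previously raised TypeError; now aggregates row-wise,
# returning one column per function
res = df.apply(['sum', 'mean'], axis=1)
print(res)
```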
Bug in DataFrame.combine_first() in which column types were unexpectedly converted to float (GH20699)
Bug in DataFrame.clip() in which column types were not preserved and were cast to float (GH24162)
Bug in DataFrame.clip() where, when the order of columns of the dataframes did not match, the result contained wrong numeric values (GH20911)
Bug in DataFrame.astype() where converting to an extension dtype when duplicate column names are present caused a RecursionError (GH24704)
Bug in Index.str.partition() that was not nan-safe (GH23558).
Bug in Index.str.split() that was not nan-safe (GH23677).
Bug in Series.str.contains() not respecting the na argument for a Categorical dtype Series (GH22158)
Bug in Index.str.cat() when the result contained only NaN (GH24044)
Bug in the IntervalIndex constructor where the closed parameter did not always override the inferred closed (GH19370)
Bug in the IntervalIndex repr where a trailing comma was missing after the list of intervals (GH20611)
Bug in Interval where scalar arithmetic operations did not retain the closed value (GH22313)
Bug in IntervalIndex where indexing with datetime-like values raised a KeyError (GH20636)
Bug in IntervalTree where data containing NaN triggered a warning and resulted in incorrect indexing queries with IntervalIndex (GH23352)
Bug in DataFrame.ne() failing if columns contain a column named "dtype" (GH22383)
The traceback from a KeyError when asking .loc for a single missing label is now shorter and more clear (GH21557)
PeriodIndex now emits a KeyError when a malformed string is looked up, which is consistent with the behavior of DatetimeIndex (GH22803)
When .ix is asked for a missing integer label in a MultiIndex with a first level of integer type, it now raises a KeyError, consistently with the case of a flat Int64Index, rather than falling back to positional indexing (GH21593)
Bug in Index.reindex() when reindexing a tz-naive and tz-aware DatetimeIndex (GH8306)
Bug in Series.reindex() when reindexing an empty series with a datetime64[ns, tz] dtype (GH20869)
Bug in DataFrame when setting values with .loc and a timezone-aware DatetimeIndex (GH11365)
DataFrame.__getitem__ now accepts dictionaries and dictionary keys as list-likes of labels, consistently with Series.__getitem__ (GH21294)
Fixed DataFrame[np.nan] when columns are non-unique (GH21428)
Bug when indexing DatetimeIndex with nanosecond-resolution dates and timezones (GH11679)
Bug where indexing with a NumPy array containing negative values would mutate the indexer (GH21867)
Bug where mixed indexes wouldn't allow integers for .at (GH19860)
Float64Index.get_loc now raises KeyError when a boolean key is passed (GH19087)
Bug in DataFrame.loc() when indexing with an IntervalIndex (GH19977)
Index no longer mangles None, NaN and NaT, i.e. they are treated as three different keys. However, for a numeric Index all three are still coerced to a NaN (GH22332)
Bug in scalar in Index if the scalar is a float while the Index is of integer dtype (GH22085)
Bug in MultiIndex.set_levels() when the levels value is not subscriptable (GH23273)
Bug where setting a timedelta column by Index caused it to be cast to double, and therefore lose precision (GH23511)
Bug in Index.union() and Index.intersection() where the name of the Index of the result was not computed correctly for certain cases (GH9943, GH9862)
Bug in Index slicing with a boolean Index that could raise TypeError (GH22533)
Bug in PeriodArray.__setitem__ when accepting a slice and list-like value (GH23978)
Bug in DatetimeIndex and TimedeltaIndex where indexing with Ellipsis would lose their freq attribute (GH21282)
Bug in iat where using it to assign an incompatible value would create a new column (GH23236)
Bug in DataFrame.fillna() where a ValueError would raise when one column contained a datetime64[ns, tz] dtype (GH15522)
Bug in Series.hasnans() that could be incorrectly cached and return incorrect answers if null elements are introduced after an initial call (GH19700)
Series.isin() now treats all NaN-floats as equal also for np.object-dtype. This behavior is consistent with the behavior for float64 (GH22119)
unique() no longer mangles NaN-floats and the NaT-object for np.object-dtype, i.e. NaT is no longer coerced to a NaN-value and is treated as a different entity (GH22295)
DataFrame and Series now properly handle numpy masked arrays with hardened masks. Previously, constructing a DataFrame or Series from a masked array with a hard mask would create a pandas object containing the underlying value, rather than the expected NaN (GH24574)
Bug in the DataFrame constructor where the dtype argument was not honored when handling numpy masked record arrays (GH24874)
Bug in io.formats.style.Styler.applymap() where subset= with a MultiIndex slice would reduce to a Series (GH19861)
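A minimal sketch of the `scalar in Index` fix: a float key is now matched against an integer Index by value rather than mishandled.

```python
import pandas as pd

idx = pd.Index([1, 2, 3])
# A float equal to an integer element is found; others are not
print(2.0 in idx)
print(2.5 in idx)
```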
Removed compatibility for MultiIndex pickles prior to version 0.8.0; compatibility with MultiIndex pickles from version 0.13 forward is maintained (GH21654)
MultiIndex.get_loc_level() (and as a consequence, .loc on a Series or DataFrame with a MultiIndex index) will now raise a KeyError, rather than returning an empty slice, if asked for a label which is present in the levels but is unused (GH22221)
MultiIndex has gained MultiIndex.from_frame(), which allows constructing a MultiIndex object from a DataFrame (GH22420)
Fix TypeError in Python 3 when creating a MultiIndex in which some levels have mixed types, e.g. when some labels are tuples (GH15457)
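The new MultiIndex.from_frame() constructor uses each DataFrame column as a level, taking the level names from the column names; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'outer': ['a', 'a', 'b'], 'inner': [1, 2, 1]})
# Each column becomes a level; column names become level names
mi = pd.MultiIndex.from_frame(df)
print(mi)
```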
Bug in read_csv() in which a column specified with CategoricalDtype of boolean categories was not being correctly coerced from string values to booleans (GH20498)
Bug in read_csv() in which unicode column names were not being properly recognized with Python 2.x (GH13253)
Bug in DataFrame.to_sql() where writing timezone-aware data (datetime64[ns, tz] dtype) would raise a TypeError (GH9086)
Bug in DataFrame.to_sql() where a naive DatetimeIndex would be written as TIMESTAMP WITH TIMEZONE type in supported databases, e.g. PostgreSQL (GH23510)
Bug in read_excel() when parse_cols is specified with an empty dataset (GH9208)
read_html() no longer ignores all-whitespace <tr> within <thead> when considering the skiprows and header arguments. Previously, users had to decrease their header and skiprows values on such tables to work around the issue (GH21641)
read_excel() will correctly show the deprecation warning for the previously deprecated sheetname (GH17994)
read_csv() and read_table() will raise UnicodeError instead of dumping core on badly encoded strings (GH22748)
read_csv() will correctly parse timezone-aware datetimes (GH22256)
Bug in read_csv() in which memory management was prematurely optimized for the C engine when the data was being read in chunks (GH23509)
Bug in read_csv() in which unnamed columns were being improperly identified when extracting a multi-index (GH23687)
read_sas() will correctly parse numbers in sas7bdat files that have width less than 8 bytes (GH21616)
read_sas() will correctly parse sas7bdat files with many columns (GH22628)
read_sas() will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (GH16615)
Bug in read_sas() in which an incorrect error was raised on an invalid file format (GH24548)
Bug in detect_client_encoding() where a potential IOError went unhandled when importing in a mod_wsgi process due to restricted access to stdout (GH21552)
Bug in DataFrame.to_html() with index=False missing truncation indicators (…) on a truncated DataFrame (GH15019, GH22783)
Bug in DataFrame.to_html() with index=False when both columns and row index are MultiIndex (GH22579)
Bug in DataFrame.to_html() with index_names=False displaying the index name (GH22747)
Bug in DataFrame.to_html() with header=False not displaying the row index names (GH23788)
Bug in DataFrame.to_html() with sparsify=False that caused it to raise TypeError (GH22887)
Bug in DataFrame.to_string() that broke column alignment when index=False and the width of the first column's values is greater than the width of the first column's header (GH16839, GH13032)
Bug in DataFrame.to_string() that caused representations of a DataFrame to not take up the whole window (GH22984)
Bug in DataFrame.to_csv() where a single-level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (GH19589).
HDFStore will raise ValueError when the format kwarg is passed to the constructor (GH13291)
Bug in HDFStore.append() when appending a DataFrame with an empty string column and min_itemsize < 8 (GH12242)
Bug in read_csv() in which memory leaks occurred in the C engine when parsing NaN values due to insufficient cleanup on completion or error (GH21353)
Bug in read_csv() in which incorrect error messages were being raised when skipfooter was passed in along with nrows, iterator, or chunksize (GH23711)
Bug in read_csv() in which MultiIndex index names were being improperly handled in the cases when they were not provided (GH23484)
Bug in read_csv() in which unnecessary warnings were being raised when the dialect's values conflicted with the default arguments (GH23761)
Bug in read_html() in which the error message was not displaying the valid flavors when an invalid one was provided (GH23549)
Bug in read_excel() in which extraneous header names were extracted, even though none were specified (GH11733)
Bug in read_excel() in which column names were not being properly converted to string sometimes in Python 2.x (GH23874)
Bug in read_excel() in which index_col=None was not being respected and index columns were parsed anyway (GH18792, GH20480)
Bug in read_excel() in which usecols was not being validated for proper column names when passed in as a string (GH20480)
Bug in DataFrame.to_dict() when the resulting dict contains non-Python scalars in the case of numeric data (GH23753)
DataFrame.to_string(), DataFrame.to_html(), DataFrame.to_latex() will correctly format output when a string is passed as the float_format argument (GH21625, GH22270)
Bug in read_csv() that caused it to raise OverflowError when trying to use 'inf' as na_value with an integer index column (GH17128)
Bug in read_csv() that caused the C engine on Python 3.6+ on Windows to improperly read CSV filenames with accented or special characters (GH15086)
Bug in read_fwf() in which the compression type of a file was not being properly inferred (GH22199)
Bug in pandas.io.json.json_normalize() that caused it to raise TypeError when two consecutive elements of record_path are dicts (GH22706)
Bug in DataFrame.to_stata(), pandas.io.stata.StataWriter and pandas.io.stata.StataWriter117 where an exception would leave a partially written and invalid dta file (GH23573)
Bug in DataFrame.to_stata() and pandas.io.stata.StataWriter117 that produced invalid files when using strLs with non-ASCII characters (GH23573)
Bug in HDFStore that caused it to raise ValueError when reading a DataFrame in Python 3 from fixed format written in Python 2 (GH24510)
Bug in DataFrame.to_string() and more generally in the floating repr formatter. Zeros were not trimmed if inf was present in a column, while they were in the presence of NA values. Zeros are now trimmed as in the presence of NA (GH24861).
Bug in the repr when truncating the number of columns and having a wide last column (GH24849).
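A minimal sketch of the timezone-aware parsing fix in read_csv(): datetimes with a UTC offset parse into a tz-aware column when combined with parse_dates.

```python
import io
import pandas as pd

csv = (
    "ts,value\n"
    "2018-01-01 00:00:00+09:00,1\n"
    "2018-01-02 00:00:00+09:00,2\n"
)
df = pd.read_csv(io.StringIO(csv), parse_dates=['ts'])
# The parsed column carries the UTC+09:00 offset
print(df['ts'].dtype)
```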
Bug in DataFrame.plot.scatter() and DataFrame.plot.hexbin() caused x-axis label and ticklabels to disappear when colorbar was on in IPython inline backend (GH10611, GH10678, and GH20455)
Bug in plotting a Series with datetimes using matplotlib.axes.Axes.scatter() (GH22039)
Bug in DataFrame.plot.bar() caused bars to use multiple colors instead of a single one (GH20585)
Bug in validating the color parameter caused an extra color to be appended to the given color array. This happened in multiple plotting functions using matplotlib. (GH20726)
Bug in pandas.core.window.Rolling.min() and pandas.core.window.Rolling.max() with closed='left', a datetime-like index and only one entry in the series leading to segfault (GH24718)
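A minimal sketch of the previously crashing case, with invented values: a short datetime-indexed series with closed='left' now computes instead of segfaulting.

```python
import pandas as pd

idx = pd.date_range("2019-01-01", periods=3, freq="min")
s = pd.Series([3.0, 1.0, 2.0], index=idx)
# closed='left' makes each window [t - 2min, t), excluding the current point
res = s.rolling("2min", closed="left").min()
```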
Bug in pandas.core.groupby.GroupBy.first() and pandas.core.groupby.GroupBy.last() with as_index=False leading to the loss of timezone information (GH15884)
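A hedged illustration with invented data: the timezone now survives the as_index=False path.

```python
import pandas as pd

df = pd.DataFrame({
    "g": ["a", "a"],
    "t": pd.to_datetime(["2019-01-01", "2019-01-02"]).tz_localize("UTC"),
})
# first() with as_index=False keeps the UTC timezone on the result column
res = df.groupby("g", as_index=False).first()
```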
Bug in DataFrame.resample() when downsampling across a DST boundary (GH8531)
Bug in date anchoring for DataFrame.resample() with offset Day when n > 1 (GH24127)
Bug where a ValueError was wrongly raised when calling the count() method of a SeriesGroupBy when the grouping variable contained only NaNs and numpy version < 1.13 (GH21956).
Multiple bugs in pandas.core.window.Rolling.min() with closed='left' and a datetime-like index leading to incorrect results and also segfault. (GH21704)
Bug in pandas.core.resample.Resampler.apply() when passing positional arguments to applied func (GH14615).
Bug in Series.resample() when passing numpy.timedelta64 to loffset kwarg (GH7687).
Bug in pandas.core.resample.Resampler.asfreq() when frequency of TimedeltaIndex is a subperiod of a new frequency (GH13022).
Bug in pandas.core.groupby.SeriesGroupBy.mean() when values were integral but could not fit inside of int64, overflowing instead. (GH22487)
pandas.core.groupby.RollingGroupby.agg() and pandas.core.groupby.ExpandingGroupby.agg() now support multiple aggregation functions as parameters (GH15072)
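A sketch of the new capability, with invented data: several aggregations over a grouped rolling window in one call.

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"], "v": [1.0, 2.0, 3.0, 4.0]})
# a dict of column -> list of functions yields MultiIndex result columns
res = df.groupby("g").rolling(2).agg({"v": ["sum", "mean"]})
```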
Bug in DataFrame.resample() and Series.resample() when resampling by a weekly offset ('W') across a DST transition (GH9119, GH21459)
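A minimal sketch of the fixed case, with an invented series spanning the 2018-11-04 US DST transition: weekly bins are now placed correctly.

```python
import pandas as pd

idx = pd.date_range("2018-11-03", periods=3, freq="D", tz="US/Eastern")
s = pd.Series(1, index=idx)
# the weekly ('W') bins no longer shift at the DST boundary
res = s.resample("W").sum()
```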
Bug in DataFrame.expanding() in which the axis argument was not being respected during aggregations (GH23372)
Bug in pandas.core.groupby.GroupBy.transform() which caused missing values when the input function can accept a DataFrame but renames it (GH23455).
Bug in pandas.core.groupby.GroupBy.nth() where column order was not always preserved (GH20760)
Bug in pandas.core.groupby.GroupBy.rank() with method='dense' and pct=True when a group has only one member would raise a ZeroDivisionError (GH23666).
Calling pandas.core.groupby.GroupBy.rank() with empty groups and pct=True was raising a ZeroDivisionError (GH22519)
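A small illustration of the fixed rank path, with invented values: the single-member group no longer raises ZeroDivisionError.

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 5]})
# dense percentage rank per group; group 'b' has one member and gets 1.0
r = df.groupby("key")["val"].rank(method="dense", pct=True)
```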
Bug in DataFrame.resample() when resampling NaT in TimeDeltaIndex (GH13223).
Bug in DataFrame.groupby() did not respect the observed argument when selecting a column and instead always used observed=False (GH23970)
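A hedged sketch of the fix, with invented data: selecting a column now respects the observed argument.

```python
import pandas as pd

df = pd.DataFrame({
    "c": pd.Categorical(["a", "a"], categories=["a", "b"]),
    "v": [1, 2],
})
# with observed=True the unobserved category 'b' is dropped from the result
res = df.groupby("c", observed=True)["v"].sum()
```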
Bug in pandas.core.groupby.SeriesGroupBy.pct_change() and pandas.core.groupby.DataFrameGroupBy.pct_change(), which previously calculated the percent change across groups; it now correctly works per group (GH21200, GH21235).
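A minimal sketch of the corrected per-group behavior, with invented data:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b", "b"], "v": [1.0, 2.0, 4.0, 8.0]})
# the first row of each group is NaN; changes no longer leak across groups
res = df.groupby("g")["v"].pct_change()
```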
Bug preventing hash table creation with a very large number (2^32) of rows (GH22805)
Bug in groupby when grouping on a categorical column caused a ValueError and incorrect grouping if observed=True and NaN was present in the categorical column (GH24740, GH21151).
Bug in pandas.concat() when joining resampled DataFrames with timezone aware index (GH13783)
Bug in pandas.concat() when joining only Series: the names argument of concat is no longer ignored (GH23490)
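A small sketch with invented series: names now labels the index levels created by keys, even for Series-only input.

```python
import pandas as pd

s1 = pd.Series([1, 2])
s2 = pd.Series([3, 4])
# the resulting MultiIndex carries the given level names
out = pd.concat([s1, s2], keys=["a", "b"], names=["group", "row"])
```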
Bug in Series.combine_first() with datetime64[ns, tz] dtype which would return tz-naive result (GH21469)
Bug in Series.where() and DataFrame.where() with datetime64[ns, tz] dtype (GH21546)
Bug in DataFrame.where() with an empty DataFrame and empty cond having non-bool dtype (GH21947)
Bug in Series.mask() and DataFrame.mask() with list conditionals (GH21891)
Bug in DataFrame.replace() raises RecursionError when converting OutOfBounds datetime64[ns, tz] (GH20380)
pandas.core.groupby.GroupBy.rank() now raises a ValueError when an invalid value is passed for argument na_option (GH22124)
Bug in get_dummies() with Unicode attributes in Python 2 (GH22084)
Bug in DataFrame.replace() raises RecursionError when replacing empty lists (GH22083)
Bug in Series.replace() and DataFrame.replace() when a dict was used as the to_replace value and one key in the dict was another key's value: the results were inconsistent between integer keys and string keys (GH20656)
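A minimal sketch of the now-consistent semantics, with invented values: replacements apply to the original values, so chained keys do not cascade.

```python
import pandas as pd

s = pd.Series([1, 2, 3])
# 1 -> 2 does not then hit the 2 -> 3 rule; mappings are applied simultaneously
res = s.replace({1: 2, 2: 3})
```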
Bug in DataFrame.drop_duplicates() for empty DataFrame which incorrectly raises an error (GH20516)
Bug in pandas.wide_to_long() when a string is passed to the stubnames argument and a column name is a substring of that stubname (GH22468)
Bug in merge() when merging datetime64[ns, tz] data that contained a DST transition (GH18885)
Bug in merge_asof() when merging on float values within defined tolerance (GH22981)
Bug in pandas.concat() when concatenating a multicolumn DataFrame with tz-aware data against a DataFrame with a different number of columns (GH22796)
Bug in merge_asof() where a confusing error message was raised when attempting to merge with missing values (GH23189)
Bug in DataFrame.nsmallest() and DataFrame.nlargest() for dataframes that have a MultiIndex for columns (GH23033).
Bug in pandas.melt() when passing column names that are not present in DataFrame (GH23575)
Bug in DataFrame.append() with a Series with a dateutil timezone would raise a TypeError (GH23682)
Bug in Series construction when passing no data and dtype=str (GH22477)
Bug in cut() with bins as an overlapping IntervalIndex where multiple bins were returned per item instead of raising a ValueError (GH23980)
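A small sketch of the new behavior, with invented bins: overlapping interval bins now raise a clear ValueError instead of returning multiple bins per item.

```python
import pandas as pd

bins = pd.IntervalIndex.from_tuples([(0, 2), (1, 3)])  # overlapping bins
try:
    pd.cut([1.5], bins)
    raised = False
except ValueError:
    # overlapping IntervalIndex bins are rejected
    raised = True
```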
Bug in pandas.concat() when joining Series datetimetz with Series category would lose timezone (GH23816)
Bug in DataFrame.join() when joining on partial MultiIndex would drop names (GH20452).
DataFrame.nlargest() and DataFrame.nsmallest() now return the correct n values when keep != 'all' and also when tied on the first columns (GH22752)
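A minimal sketch of the corrected tie-breaking, with invented data:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 2], "b": [3, 1, 2]})
# rows tied on 'a' are ordered by the next column 'b'
res = df.nlargest(2, ["a", "b"])
```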
Constructing a DataFrame with an index argument that wasn’t already an instance of Index was broken (GH22227).
Bug in DataFrame prevented list subclasses from being used in construction (GH21226)
Bug in DataFrame.unstack() and DataFrame.pivot_table() returning a misleading error message when the resulting DataFrame has more elements than int32 can handle. Now, the error message is improved, pointing towards the actual problem (GH20601)
Bug in DataFrame.unstack() where a ValueError was raised when unstacking timezone aware values (GH18338)
Bug in DataFrame.stack() where timezone aware values were converted to timezone naive values (GH19420)
Bug in merge_asof() where a TypeError was raised when by_col were timezone aware values (GH21184)
Bug showing an incorrect shape when an error was thrown during DataFrame construction. (GH20742)
Updating a boolean, datetime, or timedelta column to be Sparse now works (GH22367)
Bug in Series.to_sparse() with Series already holding sparse data not constructing properly (GH22389)
Providing a sparse_index to the SparseArray constructor no longer defaults the na-value to np.nan for all dtypes. The correct na_value for data.dtype is now used.
Bug in SparseArray.nbytes under-reporting its memory usage by not including the size of its sparse index.
Improved performance of Series.shift() for non-NA fill_value, as values are no longer converted to a dense array.
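A hedged sketch of the improved path, using the modern pd.arrays.SparseArray spelling and invented values: shifting with a non-NA fill_value keeps the data sparse rather than densifying it.

```python
import pandas as pd

s = pd.Series(pd.arrays.SparseArray([1, 2, 3], fill_value=0))
# shift with fill_value=0 (the sparse fill value) stays sparse
res = s.shift(1, fill_value=0)
```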
Bug in DataFrame.groupby not including fill_value in the groups for non-NA fill_value when grouping by a sparse column (GH5078)
Bug in unary inversion operator (~) on a SparseSeries with boolean values. The performance of this has also been improved (GH22835)
Bug in SparseArray.unique() not returning the unique values (GH19595)
Bug in SparseArray.nonzero() and SparseDataFrame.dropna() returning shifted/incorrect results (GH21172)
Bug in DataFrame.apply() where dtypes would lose sparseness (GH23744)
Bug in concat() when concatenating a list of Series with all-sparse values changing the fill_value and converting to a dense Series (GH24371)
background_gradient() now takes a text_color_threshold parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (GH21258)
background_gradient() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None (GH15204)
bar() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None and setting clipping range with vmin and vmax (GH21548 and GH21526). NaN values are also handled properly.
Building pandas for development now requires cython >= 0.28.2 (GH21688)
Testing pandas now requires hypothesis>=3.58. See the Hypothesis docs and a pandas-specific introduction in the contributing guide. (GH22280)
Building pandas on macOS now targets minimum macOS 10.9 if run on macOS 10.9 or above (GH23424)
Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before pandas. (GH24113)
A total of 337 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.
AJ Dyka +
AJ Pryor, Ph.D +
Aaron Critchley
Adam Hooper
Adam J. Stewart
Adam Kim
Adam Klimont +
Addison Lynch +
Alan Hogue +
Alex Radu +
Alex Rychyk
Alex Strick van Linschoten +
Alex Volkov +
Alexander Buchkovsky
Alexander Hess +
Alexander Ponomaroff +
Allison Browne +
Aly Sivji
Andrew
Andrew Gross +
Andrew Spott +
Andy +
Aniket uttam +
Anjali2019 +
Anjana S +
Antti Kaihola +
Anudeep Tubati +
Arjun Sharma +
Armin Varshokar
Artem Bogachev
ArtinSarraf +
Barry Fitzgerald +
Bart Aelterman +
Ben James +
Ben Nelson +
Benjamin Grove +
Benjamin Rowell +
Benoit Paquet +
Boris Lau +
Brett Naul
Brian Choi +
C.A.M. Gerlach +
Carl Johan +
Chalmer Lowe
Chang She
Charles David +
Cheuk Ting Ho
Chris
Chris Roberts +
Christopher Whelan
Chu Qing Hao +
Da Cheezy Mobsta +
Damini Satya
Daniel Himmelstein
Daniel Saxton +
Darcy Meyer +
DataOmbudsman
David Arcos
David Krych
Dean Langsam +
Diego Argueta +
Diego Torres +
Dobatymo +
Doug Latornell +
Dr. Irv
Dylan Dmitri Gray +
Eric Boxer +
Eric Chea
Erik +
Erik Nilsson +
Fabian Haase +
Fabian Retkowski
Fabien Aulaire +
Fakabbir Amin +
Fei Phoon +
Fernando Margueirat +
Florian Müller +
Fábio Rosado +
Gabe Fernando
Gabriel Reid +
Giftlin Rajaiah
Gioia Ballin +
Gjelt
Gosuke Shibahara +
Graham Inggs
Guillaume Gay
Guillaume Lemaitre +
Hannah Ferchland
Haochen Wu
Hubert +
HubertKl +
HyunTruth +
Iain Barr
Ignacio Vergara Kausel +
Irv Lustig +
IsvenC +
Jacopo Rota
Jakob Jarmar +
James Bourbeau +
James Myatt +
James Winegar +
Jan Rudolph
Jared Groves +
Jason Kiley +
Javad Noorbakhsh +
Jay Offerdahl +
Jeff Reback
Jeongmin Yu +
Jeremy Schendel
Jerod Estapa +
Jesper Dramsch +
Jim Jeon +
Joe Jevnik
Joel Nothman
Joel Ostblom +
Jordi Contestí
Jorge López Fueyo +
Joris Van den Bossche
Jose Quinones +
Jose Rivera-Rubio +
Josh
Jun +
Justin Zheng +
Kaiqi Dong +
Kalyan Gokhale
Kang Yoosam +
Karl Dunkle Werner +
Karmanya Aggarwal +
Kevin Markham +
Kevin Sheppard
Kimi Li +
Koustav Samaddar +
Krishna +
Kristian Holsheimer +
Ksenia Gueletina +
Kyle Prestel +
LJ +
LeakedMemory +
Li Jin +
Licht Takeuchi
Luca Donini +
Luciano Viola +
Mak Sze Chun +
Marc Garcia
Marius Potgieter +
Mark Sikora +
Markus Meier +
Marlene Silva Marchena +
Martin Babka +
MatanCohe +
Mateusz Woś +
Mathew Topper +
Matt Boggess +
Matt Cooper +
Matt Williams +
Matthew Gilbert
Matthew Roeschke
Max Kanter
Michael Odintsov
Michael Silverstein +
Michael-J-Ward +
Mickaël Schoentgen +
Miguel Sánchez de León Peque +
Ming Li
Mitar
Mitch Negus
Monson Shao +
Moonsoo Kim +
Mortada Mehyar
Myles Braithwaite
Nehil Jain +
Nicholas Musolino +
Nicolas Dickreuter +
Nikhil Kumar Mengani +
Nikoleta Glynatsi +
Ondrej Kokes
Pablo Ambrosio +
Pamela Wu +
Parfait G +
Patrick Park +
Paul
Paul Ganssle
Paul Reidy
Paul van Mulbregt +
Phillip Cloud
Pietro Battiston
Piyush Aggarwal +
Prabakaran Kumaresshan +
Pulkit Maloo
Pyry Kovanen
Rajib Mitra +
Redonnet Louis +
Rhys Parry +
Rick +
Robin
Roei.r +
RomainSa +
Roman Imankulov +
Roman Yurchak +
Ruijing Li +
Ryan +
Ryan Nazareth +
Rüdiger Busche +
SEUNG HOON, SHIN +
Sandrine Pataut +
Sangwoong Yoon
Santosh Kumar +
Saurav Chakravorty +
Scott McAllister +
Sean Chan +
Shadi Akiki +
Shengpu Tang +
Shirish Kadam +
Simon Hawkins +
Simon Riddell +
Simone Basso
Sinhrks
Soyoun(Rose) Kim +
Srinivas Reddy Thatiparthy (శ్రీనివాస్ రెడ్డి తాటిపర్తి) +
Stefaan Lippens +
Stefano Cianciulli
Stefano Miccoli +
Stephen Childs
Stephen Pascoe
Steve Baker +
Steve Cook +
Steve Dower +
Stéphan Taljaard +
Sumin Byeon +
Sören +
Tamas Nagy +
Tanya Jain +
Tarbo Fukazawa
Thein Oo +
Thiago Cordeiro da Fonseca +
Thierry Moisan
Thiviyan Thanapalasingam +
Thomas Lentali +
Tim D. Smith +
Tim Swast
Tom Augspurger
Tomasz Kluczkowski +
Tony Tao +
Triple0 +
Troels Nielsen +
Tuhin Mahmud +
Tyler Reddy +
Uddeshya Singh
Uwe L. Korn +
Vadym Barda +
Varad Gunjal +
Victor Maryama +
Victor Villas
Vincent La
Vitória Helena +
Vu Le
Vyom Jain +
Weiwen Gu +
Wenhuan
Wes Turner
Wil Tan +
William Ayd
Yeojin Kim +
Yitzhak Andrade +
Yuecheng Wu +
Yuliya Dovzhenko +
Yury Bayda +
Zac Hatfield-Dodds +
aberres +
aeltanawy +
ailchau +
alimcmaster1
alphaCTzo7G +
amphy +
araraonline +
azure-pipelines[bot] +
benarthur91 +
bk521234 +
cgangwar11 +
chris-b1
cxl923cc +
dahlbaek +
dannyhyunkim +
darke-spirits +
david-liu-brattle-1
davidmvalente +
deflatSOCO
doosik_bae +
dylanchase +
eduardo naufel schettino +
euri10 +
evangelineliu +
fengyqf +
fjdiod
fl4p +
fleimgruber +
gfyoung
h-vetinari
harisbal +
henriqueribeiro +
himanshu awasthi
hongshaoyang +
igorfassen +
jalazbe +
jbrockmendel
jh-wu +
justinchan23 +
louispotok
marcosrullan +
miker985
nicolab100 +
nprad
nsuresh +
ottiP
pajachiet +
raguiar2 +
ratijas +
realead +
robbuckley +
saurav2608 +
sideeye +
ssikdar1
svenharris +
syutbai +
testvinder +
thatneat
tmnhat2001
tomascassidy +
tomneep
topper-123
vkk800 +
winlu +
ym-pett +
yrhooke +
ywpark1 +
zertrin
zhezherun +