Version 0.19.0 (October 2, 2016)¶
This is a major release from 0.18.1 and includes a number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version.
Highlights include:
- merge_asof() for asof-style time-series joining, see here.
- .rolling() is now time-series aware, see here.
- read_csv() now supports parsing Categorical data, see here.
- A function union_categoricals() has been added for combining categoricals, see here.
- PeriodIndex now has its own period dtype, and changed to be more consistent with other Index classes. See here.
- Sparse data structures gained enhanced support of int and bool dtypes, see here.
- Comparison operations with Series no longer ignore the index, see here for an overview of the API changes.
- Introduction of a pandas development API for utility functions, see here.
- Deprecation of Panel4D and PanelND. We recommend to represent these types of n-dimensional data with the xarray package.
- Removal of the previously deprecated modules pandas.io.data, pandas.io.wb, pandas.tools.rplot.
Warning
pandas >= 0.19.0 will no longer silence numpy ufunc warnings upon import, see here.
New features¶
Function merge_asof for asof-style time-series joining¶
A long-time requested feature has been added through the merge_asof() function, to
support asof style joining of time-series (GH1870, GH13695, GH13709, GH13902). Full documentation is
here.
The merge_asof() performs an asof merge, which is similar to a left-join
except that we match on nearest key rather than equal keys.
In [1]: left = pd.DataFrame({"a": [1, 5, 10], "left_val": ["a", "b", "c"]})
In [2]: right = pd.DataFrame({"a": [1, 2, 3, 6, 7], "right_val": [1, 2, 3, 6, 7]})
In [3]: left
Out[3]:
a left_val
0 1 a
1 5 b
2 10 c
[3 rows x 2 columns]
In [4]: right
Out[4]:
a right_val
0 1 1
1 2 2
2 3 3
3 6 6
4 7 7
[5 rows x 2 columns]
We typically want to match exactly when possible, and use the most recent value otherwise.
In [5]: pd.merge_asof(left, right, on="a")
Out[5]:
a left_val right_val
0 1 a 1
1 5 b 3
2 10 c 7
[3 rows x 3 columns]
We can also match rows ONLY with prior data, and not an exact match.
In [6]: pd.merge_asof(left, right, on="a", allow_exact_matches=False)
Out[6]:
a left_val right_val
0 1 a NaN
1 5 b 3.0
2 10 c 7.0
[3 rows x 3 columns]
In a typical time-series example, we have trades and quotes and we want to asof-join them.
This also illustrates using the by parameter to group data before merging.
In [7]: trades = pd.DataFrame(
...: {
...: "time": pd.to_datetime(
...: [
...: "20160525 13:30:00.023",
...: "20160525 13:30:00.038",
...: "20160525 13:30:00.048",
...: "20160525 13:30:00.048",
...: "20160525 13:30:00.048",
...: ]
...: ),
...: "ticker": ["MSFT", "MSFT", "GOOG", "GOOG", "AAPL"],
...: "price": [51.95, 51.95, 720.77, 720.92, 98.00],
...: "quantity": [75, 155, 100, 100, 100],
...: },
...: columns=["time", "ticker", "price", "quantity"],
...: )
...:
In [8]: quotes = pd.DataFrame(
...: {
...: "time": pd.to_datetime(
...: [
...: "20160525 13:30:00.023",
...: "20160525 13:30:00.023",
...: "20160525 13:30:00.030",
...: "20160525 13:30:00.041",
...: "20160525 13:30:00.048",
...: "20160525 13:30:00.049",
...: "20160525 13:30:00.072",
...: "20160525 13:30:00.075",
...: ]
...: ),
...: "ticker": ["GOOG", "MSFT", "MSFT", "MSFT", "GOOG", "AAPL", "GOOG", "MSFT"],
...: "bid": [720.50, 51.95, 51.97, 51.99, 720.50, 97.99, 720.50, 52.01],
...: "ask": [720.93, 51.96, 51.98, 52.00, 720.93, 98.01, 720.88, 52.03],
...: },
...: columns=["time", "ticker", "bid", "ask"],
...: )
...:
In [9]: trades
Out[9]:
time ticker price quantity
0 2016-05-25 13:30:00.023 MSFT 51.95 75
1 2016-05-25 13:30:00.038 MSFT 51.95 155
2 2016-05-25 13:30:00.048 GOOG 720.77 100
3 2016-05-25 13:30:00.048 GOOG 720.92 100
4 2016-05-25 13:30:00.048 AAPL 98.00 100
[5 rows x 4 columns]
In [10]: quotes
Out[10]:
time ticker bid ask
0 2016-05-25 13:30:00.023 GOOG 720.50 720.93
1 2016-05-25 13:30:00.023 MSFT 51.95 51.96
2 2016-05-25 13:30:00.030 MSFT 51.97 51.98
3 2016-05-25 13:30:00.041 MSFT 51.99 52.00
4 2016-05-25 13:30:00.048 GOOG 720.50 720.93
5 2016-05-25 13:30:00.049 AAPL 97.99 98.01
6 2016-05-25 13:30:00.072 GOOG 720.50 720.88
7 2016-05-25 13:30:00.075 MSFT 52.01 52.03
[8 rows x 4 columns]
An asof merge joins on the on key, typically a datetimelike field, which must be ordered, and
in this case we are also using a grouper in the by field. This is like a left-outer join, except
that forward filling happens automatically, taking the most recent non-NaN value.
In [11]: pd.merge_asof(trades, quotes, on="time", by="ticker")
Out[11]:
time ticker price quantity bid ask
0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96
1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98
2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93
3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93
4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
[5 rows x 6 columns]
This returns a merged DataFrame with the entries in the same order as the original left
passed DataFrame (trades in this case), with the fields of the quotes merged.
Method .rolling() is now time-series aware¶
.rolling() objects are now time-series aware and can accept a time-series offset (or convertible) for the window argument (GH13327, GH12995).
See the full documentation here.
In [12]: dft = pd.DataFrame(
....: {"B": [0, 1, 2, np.nan, 4]},
....: index=pd.date_range("20130101 09:00:00", periods=5, freq="s"),
....: )
....:
In [13]: dft
Out[13]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 2.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 4.0
[5 rows x 1 columns]
This is a regular frequency index. Using an integer window parameter rolls over a fixed number of observations along that frequency.
In [14]: dft.rolling(2).sum()
Out[14]:
B
2013-01-01 09:00:00 NaN
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 NaN
2013-01-01 09:00:04 NaN
[5 rows x 1 columns]
In [15]: dft.rolling(2, min_periods=1).sum()
Out[15]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0
[5 rows x 1 columns]
Specifying an offset allows a more intuitive specification of the rolling frequency.
In [16]: dft.rolling("2s").sum()
Out[16]:
B
2013-01-01 09:00:00 0.0
2013-01-01 09:00:01 1.0
2013-01-01 09:00:02 3.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:04 4.0
[5 rows x 1 columns]
Using a non-regular, but still monotonic index, rolling with an integer window does not impart any special calculation.
In [17]: dft = pd.DataFrame(
....: {"B": [0, 1, 2, np.nan, 4]},
....: index=pd.Index(
....: [
....: pd.Timestamp("20130101 09:00:00"),
....: pd.Timestamp("20130101 09:00:02"),
....: pd.Timestamp("20130101 09:00:03"),
....: pd.Timestamp("20130101 09:00:05"),
....: pd.Timestamp("20130101 09:00:06"),
....: ],
....: name="foo",
....: ),
....: )
....:
In [18]: dft
Out[18]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 2.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
[5 rows x 1 columns]
In [19]: dft.rolling(2).sum()
Out[19]:
B
foo
2013-01-01 09:00:00 NaN
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 NaN
[5 rows x 1 columns]
Using the time-specification generates variable windows for this sparse data.
In [20]: dft.rolling("2s").sum()
Out[20]:
B
foo
2013-01-01 09:00:00 0.0
2013-01-01 09:00:02 1.0
2013-01-01 09:00:03 3.0
2013-01-01 09:00:05 NaN
2013-01-01 09:00:06 4.0
[5 rows x 1 columns]
Furthermore, we now allow an optional on parameter to specify a column (rather than the
default of the index) in a DataFrame.
In [21]: dft = dft.reset_index()
In [22]: dft
Out[22]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 2.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
[5 rows x 2 columns]
In [23]: dft.rolling("2s", on="foo").sum()
Out[23]:
foo B
0 2013-01-01 09:00:00 0.0
1 2013-01-01 09:00:02 1.0
2 2013-01-01 09:00:03 3.0
3 2013-01-01 09:00:05 NaN
4 2013-01-01 09:00:06 4.0
[5 rows x 2 columns]
Method read_csv has improved support for duplicate column names¶
Duplicate column names are now supported in read_csv() whether
they are in the file or passed in as the names parameter (GH7160, GH9424)
In [24]: data = "0,1,2\n3,4,5"
In [25]: names = ["a", "b", "a"]
Previous behavior:
In [2]: pd.read_csv(StringIO(data), names=names)
Out[2]:
a b a
0 2 1 2
1 5 4 5
The first a column contained the same data as the second a column, when it should have
contained the values [0, 3].
New behavior:
In [26]: pd.read_csv(StringIO(data), names=names)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-26-a095135d9435> in <module>
----> 1 pd.read_csv(StringIO(data), names=names)
/pandas/pandas/io/parsers.py in read_csv(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision, storage_options)
608 kwds.update(kwds_defaults)
609
--> 610 return _read(filepath_or_buffer, kwds)
611
612
/pandas/pandas/io/parsers.py in _read(filepath_or_buffer, kwds)
457
458 # Check for duplicates in names.
--> 459 _validate_names(kwds.get("names", None))
460
461 # Create the parser.
/pandas/pandas/io/parsers.py in _validate_names(names)
438 if names is not None:
439 if len(names) != len(set(names)):
--> 440 raise ValueError("Duplicate names are not allowed.")
441 if not (
442 is_list_like(names, allow_sets=False) or isinstance(names, abc.KeysView)
ValueError: Duplicate names are not allowed.
Method read_csv supports parsing Categorical directly¶
The read_csv() function now supports parsing a Categorical column when
specified as a dtype (GH10153). Depending on the structure of the data,
this can result in a faster parse time and lower memory usage compared to
converting to Categorical after parsing. See the io docs here.
In [27]: data = """
....: col1,col2,col3
....: a,b,1
....: a,b,2
....: c,d,3
....: """
....:
In [28]: pd.read_csv(StringIO(data))
Out[28]:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
[3 rows x 3 columns]
In [29]: pd.read_csv(StringIO(data)).dtypes
Out[29]:
col1 object
col2 object
col3 int64
Length: 3, dtype: object
In [30]: pd.read_csv(StringIO(data), dtype="category").dtypes
Out[30]:
col1 category
col2 category
col3 category
Length: 3, dtype: object
Individual columns can be parsed as a Categorical using a dict specification
In [31]: pd.read_csv(StringIO(data), dtype={"col1": "category"}).dtypes
Out[31]:
col1 category
col2 object
col3 int64
Length: 3, dtype: object
Note
The resulting categories will always be parsed as strings (object dtype).
If the categories are numeric they can be converted using the
to_numeric() function, or as appropriate, another converter
such as to_datetime().
In [32]: df = pd.read_csv(StringIO(data), dtype="category")
In [33]: df.dtypes
Out[33]:
col1 category
col2 category
col3 category
Length: 3, dtype: object
In [34]: df["col3"]
Out[34]:
0 1
1 2
2 3
Name: col3, Length: 3, dtype: category
Categories (3, object): ['1', '2', '3']
In [35]: df["col3"].cat.categories = pd.to_numeric(df["col3"].cat.categories)
In [36]: df["col3"]
Out[36]:
0 1
1 2
2 3
Name: col3, Length: 3, dtype: category
Categories (3, int64): [1, 2, 3]
Categorical concatenation¶
- A function union_categoricals() has been added for combining categoricals, see Unioning Categoricals (GH13361, GH13763, GH13846, GH14173)

  In [37]: from pandas.api.types import union_categoricals

  In [38]: a = pd.Categorical(["b", "c"])

  In [39]: b = pd.Categorical(["a", "b"])

  In [40]: union_categoricals([a, b])
  Out[40]:
  ['b', 'c', 'a', 'b']
  Categories (3, object): ['b', 'c', 'a']

- concat and append now can concat category dtypes with different categories as object dtype (GH13524)

  In [41]: s1 = pd.Series(["a", "b"], dtype="category")

  In [42]: s2 = pd.Series(["b", "c"], dtype="category")
Previous behavior:
In [1]: pd.concat([s1, s2])
ValueError: incompatible categories in categorical concat
New behavior:
In [43]: pd.concat([s1, s2])
Out[43]:
0 a
1 b
0 b
1 c
Length: 4, dtype: object
Semi-month offsets¶
pandas has gained new frequency offsets, SemiMonthEnd (‘SM’) and SemiMonthBegin (‘SMS’).
These provide date offsets anchored (by default) to the 15th and end of month, and 15th and 1st of month respectively.
(GH1543)
In [44]: from pandas.tseries.offsets import SemiMonthEnd, SemiMonthBegin
SemiMonthEnd:
In [45]: pd.Timestamp("2016-01-01") + SemiMonthEnd()
Out[45]: Timestamp('2016-01-15 00:00:00')
In [46]: pd.date_range("2015-01-01", freq="SM", periods=4)
Out[46]: DatetimeIndex(['2015-01-15', '2015-01-31', '2015-02-15', '2015-02-28'], dtype='datetime64[ns]', freq='SM-15')
SemiMonthBegin:
In [47]: pd.Timestamp("2016-01-01") + SemiMonthBegin()
Out[47]: Timestamp('2016-01-15 00:00:00')
In [48]: pd.date_range("2015-01-01", freq="SMS", periods=4)
Out[48]: DatetimeIndex(['2015-01-01', '2015-01-15', '2015-02-01', '2015-02-15'], dtype='datetime64[ns]', freq='SMS-15')
Using the anchoring suffix, you can also specify the day of month to use instead of the 15th.
In [49]: pd.date_range("2015-01-01", freq="SMS-16", periods=4)
Out[49]: DatetimeIndex(['2015-01-01', '2015-01-16', '2015-02-01', '2015-02-16'], dtype='datetime64[ns]', freq='SMS-16')
In [50]: pd.date_range("2015-01-01", freq="SM-14", periods=4)
Out[50]: DatetimeIndex(['2015-01-14', '2015-01-31', '2015-02-14', '2015-02-28'], dtype='datetime64[ns]', freq='SM-14')
New Index methods¶
The following methods and options are added to Index, to be more consistent with the Series and DataFrame API.
Index now supports the .where() function for same shape indexing (GH13170)
In [51]: idx = pd.Index(["a", "b", "c"])
In [52]: idx.where([True, False, True])
Out[52]: Index(['a', nan, 'c'], dtype='object')
Index now supports .dropna() to exclude missing values (GH6194)
In [53]: idx = pd.Index([1, 2, np.nan, 4])
In [54]: idx.dropna()
Out[54]: Float64Index([1.0, 2.0, 4.0], dtype='float64')
For MultiIndex, values are dropped if any level is missing by default. Specifying
how='all' only drops values where all levels are missing.
In [55]: midx = pd.MultiIndex.from_arrays([[1, 2, np.nan, 4], [1, 2, np.nan, np.nan]])
In [56]: midx
Out[56]:
MultiIndex([(1.0, 1.0),
(2.0, 2.0),
(nan, nan),
(4.0, nan)],
)
In [57]: midx.dropna()
Out[57]:
MultiIndex([(1, 1),
(2, 2)],
)
In [58]: midx.dropna(how="all")
Out[58]:
MultiIndex([(1, 1.0),
(2, 2.0),
(4, nan)],
)
Index now supports .str.extractall() which returns a DataFrame, see the docs here (GH10008, GH13156)
In [59]: idx = pd.Index(["a1a2", "b1", "c1"])
In [60]: idx.str.extractall(r"[ab](?P<digit>\d)")
Out[60]:
digit
match
0 0 1
1 2
1 0 1
[3 rows x 1 columns]
Index.astype() now accepts an optional boolean argument copy, which allows optional copying if the requirements on dtype are satisfied (GH13209)
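As a minimal sketch of the new copy keyword (the index values here are illustrative only): when the requested dtype already matches, copy=False lets pandas skip copying the underlying data.

import pandas as pd

idx = pd.Index([1, 2, 3])

# Default behaviour: always copy the underlying data
copied = idx.astype("int64", copy=True)

# copy=False allows pandas to skip the copy when the index
# already satisfies the requested dtype
maybe_shared = idx.astype("int64", copy=False)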
Google BigQuery enhancements¶
- The read_gbq() method has gained the dialect argument to allow users to specify whether to use BigQuery's legacy SQL or BigQuery's standard SQL. See the docs for more details (GH13615).
- The to_gbq() method now allows the DataFrame column order to differ from the destination table schema (GH11359).
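A hedged sketch of the new dialect argument; the project id and query below are placeholders, and running this requires Google Cloud credentials plus the BigQuery dependencies.

import pandas as pd

# Placeholder query and project id; not runnable without valid GCP credentials.
df = pd.read_gbq(
    "SELECT 1 AS x",
    project_id="my-gcp-project",   # hypothetical project id
    dialect="standard",            # use standard SQL rather than legacy SQL
)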
Fine-grained NumPy errstate¶
Previous versions of pandas would permanently silence numpy’s ufunc error handling when pandas was imported. pandas did this in order to silence the warnings that would arise from using numpy ufuncs on missing data, which are usually represented as NaNs. Unfortunately, this silenced legitimate warnings arising in non-pandas code in the application. Starting with 0.19.0, pandas will use the numpy.errstate context manager to silence these warnings in a more fine-grained manner, only around where these operations are actually used in the pandas code base. (GH13109, GH13145)
After upgrading pandas, you may see new RuntimeWarnings being issued from your code. These are likely legitimate, and the underlying cause likely existed in the code when using previous versions of pandas that simply silenced the warning. Use numpy.errstate around the source of the RuntimeWarning to control how these conditions are handled.
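As a sketch of the suggested workaround, you can wrap the offending operation in numpy.errstate yourself to restore the old silencing locally (the array here is just an example):

import numpy as np

arr = np.array([-1.0, 4.0, 9.0])

# np.sqrt of a negative value normally emits
# "RuntimeWarning: invalid value encountered in sqrt";
# silence it only around this specific operation instead of globally.
with np.errstate(invalid="ignore"):
    roots = np.sqrt(arr)  # array([nan, 2., 3.])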
Method get_dummies now returns integer dtypes¶
The pd.get_dummies function now returns dummy-encoded columns as small integers, rather than floats (GH8725). This should provide an improved memory footprint.
Previous behavior:
In [1]: pd.get_dummies(['a', 'b', 'a', 'c']).dtypes
Out[1]:
a float64
b float64
c float64
dtype: object
New behavior:
In [61]: pd.get_dummies(["a", "b", "a", "c"]).dtypes
Out[61]:
a uint8
b uint8
c uint8
Length: 3, dtype: object
Downcast values to smallest possible dtype in to_numeric¶
pd.to_numeric() now accepts a downcast parameter, which will downcast the data, if possible, to the smallest specified numerical dtype (GH13352)
In [62]: s = ["1", 2, 3]
In [63]: pd.to_numeric(s, downcast="unsigned")
Out[63]: array([1, 2, 3], dtype=uint8)
In [64]: pd.to_numeric(s, downcast="integer")
Out[64]: array([1, 2, 3], dtype=int8)
pandas development API¶
As part of making the pandas API more uniform and accessible in the future, we have created a standard
sub-package of pandas, pandas.api, to hold public APIs. We are starting by exposing type
introspection functions in pandas.api.types. More sub-packages and officially sanctioned APIs
will be published in future versions of pandas (GH13147, GH13634)
The following are now part of this API:
In [65]: import pprint
In [66]: from pandas.api import types
In [67]: funcs = [f for f in dir(types) if not f.startswith("_")]
In [68]: pprint.pprint(funcs)
['CategoricalDtype',
'DatetimeTZDtype',
'IntervalDtype',
'PeriodDtype',
'infer_dtype',
'is_array_like',
'is_bool',
'is_bool_dtype',
'is_categorical',
'is_categorical_dtype',
'is_complex',
'is_complex_dtype',
'is_datetime64_any_dtype',
'is_datetime64_dtype',
'is_datetime64_ns_dtype',
'is_datetime64tz_dtype',
'is_dict_like',
'is_dtype_equal',
'is_extension_array_dtype',
'is_extension_type',
'is_file_like',
'is_float',
'is_float_dtype',
'is_hashable',
'is_int64_dtype',
'is_integer',
'is_integer_dtype',
'is_interval',
'is_interval_dtype',
'is_iterator',
'is_list_like',
'is_named_tuple',
'is_number',
'is_numeric_dtype',
'is_object_dtype',
'is_period_dtype',
'is_re',
'is_re_compilable',
'is_scalar',
'is_signed_integer_dtype',
'is_sparse',
'is_string_dtype',
'is_timedelta64_dtype',
'is_timedelta64_ns_dtype',
'is_unsigned_integer_dtype',
'pandas_dtype',
'union_categoricals']
Note
Calling these functions from the internal module pandas.core.common will now show a DeprecationWarning (GH13990)
Other enhancements¶
- Timestamp can now accept positional and keyword parameters similar to datetime.datetime() (GH10758, GH11630)

  In [69]: pd.Timestamp(2012, 1, 1)
  Out[69]: Timestamp('2012-01-01 00:00:00')

  In [70]: pd.Timestamp(year=2012, month=1, day=1, hour=8, minute=30)
  Out[70]: Timestamp('2012-01-01 08:30:00')

- The .resample() function now accepts an on= or level= parameter for resampling on a datetimelike column or MultiIndex level (GH13500)

  In [71]: df = pd.DataFrame(
     ....:     {"date": pd.date_range("2015-01-01", freq="W", periods=5), "a": np.arange(5)},
     ....:     index=pd.MultiIndex.from_arrays(
     ....:         [[1, 2, 3, 4, 5], pd.date_range("2015-01-01", freq="W", periods=5)],
     ....:         names=["v", "d"],
     ....:     ),
     ....: )
     ....:

  In [72]: df
  Out[72]:
                     date  a
  v d
  1 2015-01-04 2015-01-04  0
  2 2015-01-11 2015-01-11  1
  3 2015-01-18 2015-01-18  2
  4 2015-01-25 2015-01-25  3
  5 2015-02-01 2015-02-01  4

  [5 rows x 2 columns]

  In [73]: df.resample("M", on="date").sum()
  Out[73]:
              a
  date
  2015-01-31  6
  2015-02-28  4

  [2 rows x 1 columns]

  In [74]: df.resample("M", level="d").sum()
  Out[74]:
              a
  d
  2015-01-31  6
  2015-02-28  4

  [2 rows x 1 columns]

- The .get_credentials() method of GbqConnector can now first try to fetch the application default credentials. See the docs for more details (GH13577).
- The .tz_localize() method of DatetimeIndex and Timestamp has gained the errors keyword, so you can potentially coerce nonexistent timestamps to NaT. The default behavior remains raising a NonExistentTimeError (GH13057)
- .to_hdf/read_hdf() now accept path objects (e.g. pathlib.Path, py.path.local) for the file path (GH11773)
- pd.read_csv() with engine='python' has gained support for the decimal (GH12933), na_filter (GH13321) and memory_map (GH13381) options.
- Consistent with the Python API, pd.read_csv() will now interpret +inf as positive infinity (GH13274)
- pd.read_html() has gained support for the na_values, converters, keep_default_na options (GH13461)
- Categorical.astype() now accepts an optional boolean argument copy, effective when dtype is categorical (GH13209)
- DataFrame has gained the .asof() method to return the last non-NaN values according to the selected subset (GH13358)
- The DataFrame constructor will now respect key ordering if a list of OrderedDict objects are passed in (GH13304)
- pd.read_html() has gained support for the decimal option (GH12907)
- Series has gained the properties .is_monotonic, .is_monotonic_increasing, .is_monotonic_decreasing, similar to Index (GH13336)
- DataFrame.to_sql() now allows a single value as the SQL type for all columns (GH11886).
- Series.append now supports the ignore_index option (GH13677)
- .to_stata() and StataWriter can now write variable labels to Stata dta files using a dictionary to map column names to labels (GH13535, GH13536)
- .to_stata() and StataWriter will automatically convert datetime64[ns] columns to Stata format %tc, rather than raising a ValueError (GH12259)
- read_stata() and StataReader raise with a more explicit error message when reading Stata files with repeated value labels when convert_categoricals=True (GH13923)
- DataFrame.style will now render sparsified MultiIndexes (GH11655)
- DataFrame.style will now show column level names (e.g. DataFrame.columns.names) (GH13775)
- DataFrame has gained support to re-order the columns based on the values in a row using df.sort_values(by='...', axis=1) (GH10806)

  In [75]: df = pd.DataFrame({"A": [2, 7], "B": [3, 5], "C": [4, 8]}, index=["row1", "row2"])

  In [76]: df
  Out[76]:
        A  B  C
  row1  2  3  4
  row2  7  5  8

  [2 rows x 3 columns]

  In [77]: df.sort_values(by="row2", axis=1)
  Out[77]:
        B  A  C
  row1  3  2  4
  row2  5  7  8

  [2 rows x 3 columns]

- Added documentation to I/O regarding the perils of reading in columns with mixed dtypes and how to handle it (GH13746)
- to_html() now has a border argument to control the value in the opening <table> tag. The default is the value of the html.border option, which defaults to 1. This also affects the notebook HTML repr, but since Jupyter's CSS includes a border-width attribute, the visual effect is the same. (GH11563)
- Raise ImportError in the sql functions when sqlalchemy is not installed and a connection string is used (GH11920).
- Compatibility with matplotlib 2.0. Older versions of pandas should also work with matplotlib 2.0 (GH13333)
- Timestamp, Period, DatetimeIndex, PeriodIndex and the .dt accessor have gained a .is_leap_year property to check whether the date belongs to a leap year (GH13727)
- astype() will now accept a dict of column name to data types mapping as the dtype argument (GH12086)
- pd.read_json and DataFrame.to_json have gained support for reading and writing json lines with the lines option, see Line delimited json (GH9180)
- read_excel() now supports the true_values and false_values keyword arguments (GH13347)
- groupby() will now accept a scalar and a single-element list for specifying level on a non-MultiIndex grouper (GH13907)
- Non-convertible dates in an excel date column will be returned without conversion and the column will be object dtype, rather than raising an exception (GH10001)
- pd.Timedelta(None) is now accepted and will return NaT, mirroring pd.Timestamp (GH13687)
- pd.read_stata() can now handle some format 111 files, which are produced by SAS when generating Stata dta files (GH11526)
- Series and Index now support divmod, which will return a tuple of series or indices. This behaves like a standard binary operator with regards to broadcasting rules (GH14208); a short sketch follows this list.
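A quick sketch of the divmod support mentioned in the last item above (the values are illustrative only):

import pandas as pd

s = pd.Series([10, 20, 30])

# divmod on a Series now returns a tuple of (quotient, remainder) Series,
# broadcasting the scalar like any other binary operator
quotient, remainder = divmod(s, 7)
# quotient  -> 1, 2, 4
# remainder -> 3, 6, 2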
API changes¶
Series.tolist() will now return Python types¶
Series.tolist() will now return Python types in the output, mimicking NumPy .tolist() behavior (GH10904)
In [78]: s = pd.Series([1, 2, 3])
Previous behavior:
In [7]: type(s.tolist()[0])
Out[7]:
<class 'numpy.int64'>
New behavior:
In [79]: type(s.tolist()[0])
Out[79]: int
Series operators for different indexes¶
The following Series operators have been changed to make all operators consistent,
including DataFrame (GH1134, GH4581, GH13538):

- Series comparison operators now raise ValueError when the indexes are different.
- Series logical operators align the index of both the left and right hand side.
Warning
Until 0.18.1, comparing Series with the same length would succeed even if
the .index were different (the result ignored .index). As of 0.19.0, this raises a ValueError in order to be more strict. This section also describes how to keep the previous behavior or align on different indexes, using flexible comparison methods like .eq.
As a result, Series and DataFrame operators behave as below:
Arithmetic operators¶
Arithmetic operators align both index (no changes).
In [80]: s1 = pd.Series([1, 2, 3], index=list("ABC"))
In [81]: s2 = pd.Series([2, 2, 2], index=list("ABD"))
In [82]: s1 + s2
Out[82]:
A 3.0
B 4.0
C NaN
D NaN
Length: 4, dtype: float64
In [83]: df1 = pd.DataFrame([1, 2, 3], index=list("ABC"))
In [84]: df2 = pd.DataFrame([2, 2, 2], index=list("ABD"))
In [85]: df1 + df2
Out[85]:
0
A 3.0
B 4.0
C NaN
D NaN
[4 rows x 1 columns]
Comparison operators¶
Comparison operators raise ValueError when .index are different.
Previous behavior (Series):
Series compared values ignoring the .index as long as both had the same length:
In [1]: s1 == s2
Out[1]:
A False
B True
C False
dtype: bool
New behavior (Series):
In [2]: s1 == s2
Out[2]:
ValueError: Can only compare identically-labeled Series objects
Note
To achieve the same result as previous versions (compare values based on locations ignoring .index), compare both .values.
In [86]: s1.values == s2.values
Out[86]: array([False, True, False])
If you want to compare Series aligning its .index, see flexible comparison methods section below:
In [87]: s1.eq(s2)
Out[87]:
A False
B True
C False
D False
Length: 4, dtype: bool
Current behavior (DataFrame, no change):
In [3]: df1 == df2
Out[3]:
ValueError: Can only compare identically-labeled DataFrame objects
Logical operators¶
Logical operators align both .index of left and right hand side.
Previous behavior (Series), only left hand side index was kept:
In [4]: s1 = pd.Series([True, False, True], index=list('ABC'))
In [5]: s2 = pd.Series([True, True, True], index=list('ABD'))
In [6]: s1 & s2
Out[6]:
A True
B False
C False
dtype: bool
New behavior (Series):
In [88]: s1 = pd.Series([True, False, True], index=list("ABC"))
In [89]: s2 = pd.Series([True, True, True], index=list("ABD"))
In [90]: s1 & s2
Out[90]:
A True
B False
C False
D False
Length: 4, dtype: bool
Note
Series logical operators fill a NaN result with False.
Note
To achieve the same result as previous versions (compare values based on only left hand side index), you can use reindex_like:
In [91]: s1 & s2.reindex_like(s1)
Out[91]:
A True
B False
C False
Length: 3, dtype: bool
Current behavior (DataFrame, no change):
In [92]: df1 = pd.DataFrame([True, False, True], index=list("ABC"))
In [93]: df2 = pd.DataFrame([True, True, True], index=list("ABD"))
In [94]: df1 & df2
Out[94]:
0
A True
B False
C False
D False
[4 rows x 1 columns]
Flexible comparison methods¶
Series flexible comparison methods like eq, ne, le, lt, ge and gt now align both indexes. Use these methods if you want to compare two Series
that have different indexes.
In [95]: s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
In [96]: s2 = pd.Series([2, 2, 2], index=["b", "c", "d"])
In [97]: s1.eq(s2)
Out[97]:
a False
b True
c False
d False
Length: 4, dtype: bool
In [98]: s1.ge(s2)
Out[98]:
a False
b True
c True
d False
Length: 4, dtype: bool
Previously, this worked the same as comparison operators (see above).
Series type promotion on assignment¶
A Series will now correctly promote its dtype when assigned values that are incompatible with its current dtype (GH13234)
In [99]: s = pd.Series()
Previous behavior:
In [2]: s["a"] = pd.Timestamp("2016-01-01")
In [3]: s["b"] = 3.0
TypeError: invalid type promotion
New behavior:
In [100]: s["a"] = pd.Timestamp("2016-01-01")
In [101]: s["b"] = 3.0
In [102]: s
Out[102]:
a 2016-01-01 00:00:00
b 3.0
Length: 2, dtype: object
In [103]: s.dtype
Out[103]: dtype('O')
Function .to_datetime() changes¶
Previously, if .to_datetime() encountered mixed integers/floats and strings, but no datetimes, with errors='coerce' it would convert all to NaT.
Previous behavior:
In [2]: pd.to_datetime([1, 'foo'], errors='coerce')
Out[2]: DatetimeIndex(['NaT', 'NaT'], dtype='datetime64[ns]', freq=None)
Current behavior:
This will now convert integers/floats with the default unit of ns.
In [104]: pd.to_datetime([1, "foo"], errors="coerce")
Out[104]: DatetimeIndex(['1970-01-01 00:00:00.000000001', 'NaT'], dtype='datetime64[ns]', freq=None)
Bug fixes related to .to_datetime():
- Bug in pd.to_datetime() when passing integers or floats, and no unit and errors='coerce' (GH13180).
- Bug in pd.to_datetime() when passing invalid data types (e.g. bool); will now respect the errors keyword (GH13176)
- Bug in pd.to_datetime() which overflowed on int8 and int16 dtypes (GH13451)
- Bug in pd.to_datetime() raise AttributeError with NaN and the other string is not valid when errors='ignore' (GH12424)
- Bug in pd.to_datetime() did not cast floats correctly when unit was specified, resulting in truncated datetime (GH13834)
Merging changes¶
Merging will now preserve the dtype of the join keys (GH8596)
In [105]: df1 = pd.DataFrame({"key": [1], "v1": [10]})
In [106]: df1
Out[106]:
key v1
0 1 10
[1 rows x 2 columns]
In [107]: df2 = pd.DataFrame({"key": [1, 2], "v1": [20, 30]})
In [108]: df2
Out[108]:
key v1
0 1 20
1 2 30
[2 rows x 2 columns]
Previous behavior:
In [5]: pd.merge(df1, df2, how='outer')
Out[5]:
key v1
0 1.0 10.0
1 1.0 20.0
2 2.0 30.0
In [6]: pd.merge(df1, df2, how='outer').dtypes
Out[6]:
key float64
v1 float64
dtype: object
New behavior:
We are able to preserve the join keys
In [109]: pd.merge(df1, df2, how="outer")
Out[109]:
key v1
0 1 10
1 1 20
2 2 30
[3 rows x 2 columns]
In [110]: pd.merge(df1, df2, how="outer").dtypes
Out[110]:
key int64
v1 int64
Length: 2, dtype: object
Of course if you have missing values that are introduced, then the resulting dtype will be upcast, which is unchanged from previous versions.
In [111]: pd.merge(df1, df2, how="outer", on="key")
Out[111]:
key v1_x v1_y
0 1 10.0 20
1 2 NaN 30
[2 rows x 3 columns]
In [112]: pd.merge(df1, df2, how="outer", on="key").dtypes
Out[112]:
key int64
v1_x float64
v1_y int64
Length: 3, dtype: object
Method .describe() changes¶
Percentile identifiers in the index of a .describe() output will now be rounded to the least precision that keeps them distinct (GH13104)
In [113]: s = pd.Series([0, 1, 2, 3, 4])
In [114]: df = pd.DataFrame([0, 1, 2, 3, 4])
Previous behavior:
The percentiles were rounded to at most one decimal place, which could raise ValueError for a data frame if the percentiles were duplicated.
In [3]: s.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
Out[3]:
count 5.000000
mean 2.000000
std 1.581139
min 0.000000
0.0% 0.000400
0.1% 0.002000
0.1% 0.004000
50% 2.000000
99.9% 3.996000
100.0% 3.998000
100.0% 3.999600
max 4.000000
dtype: float64
In [4]: df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
Out[4]:
...
ValueError: cannot reindex from a duplicate axis
New behavior:
In [115]: s.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
Out[115]:
count 5.000000
mean 2.000000
std 1.581139
min 0.000000
0.01% 0.000400
0.05% 0.002000
0.1% 0.004000
50% 2.000000
99.9% 3.996000
99.95% 3.998000
99.99% 3.999600
max 4.000000
Length: 12, dtype: float64
In [116]: df.describe(percentiles=[0.0001, 0.0005, 0.001, 0.999, 0.9995, 0.9999])
Out[116]:
0
count 5.000000
mean 2.000000
std 1.581139
min 0.000000
0.01% 0.000400
0.05% 0.002000
0.1% 0.004000
50% 2.000000
99.9% 3.996000
99.95% 3.998000
99.99% 3.999600
max 4.000000
[12 rows x 1 columns]
Furthermore:
- Passing duplicated percentiles will now raise a ValueError.
- Bug in .describe() on a DataFrame with a mixed-dtype column index, which would previously raise a TypeError (GH13288)
Period changes¶
The PeriodIndex now has period dtype¶
PeriodIndex now has its own period dtype. The period dtype is a
pandas extension dtype like category or the timezone aware dtype (datetime64[ns, tz]) (GH13941).
As a consequence of this change, PeriodIndex no longer has an integer dtype:
Previous behavior:
In [1]: pi = pd.PeriodIndex(['2016-08-01'], freq='D')
In [2]: pi
Out[2]: PeriodIndex(['2016-08-01'], dtype='int64', freq='D')
In [3]: pd.api.types.is_integer_dtype(pi)
Out[3]: True
In [4]: pi.dtype
Out[4]: dtype('int64')
New behavior:
In [117]: pi = pd.PeriodIndex(["2016-08-01"], freq="D")
In [118]: pi
Out[118]: PeriodIndex(['2016-08-01'], dtype='period[D]', freq='D')
In [119]: pd.api.types.is_integer_dtype(pi)
Out[119]: False
In [120]: pd.api.types.is_period_dtype(pi)
Out[120]: True
In [121]: pi.dtype
Out[121]: period[D]
In [122]: type(pi.dtype)
Out[122]: pandas.core.dtypes.dtypes.PeriodDtype
Period('NaT') now returns pd.NaT¶
Previously, Period had its own Period('NaT') representation, different from pd.NaT. Now Period('NaT') has been changed to return pd.NaT. (GH12759, GH13582)
Previous behavior:
In [5]: pd.Period('NaT', freq='D')
Out[5]: Period('NaT', 'D')
New behavior:
These now result in pd.NaT without needing to provide the freq option.
In [123]: pd.Period("NaT")
Out[123]: NaT
In [124]: pd.Period(None)
Out[124]: NaT
To be compatible with Period addition and subtraction, pd.NaT now supports addition and subtraction with int. Previously it raised ValueError.
Previous behavior:
In [5]: pd.NaT + 1
...
ValueError: Cannot add integral value to Timestamp without freq.
New behavior:
In [125]: pd.NaT + 1
Out[125]: NaT
In [126]: pd.NaT - 1
Out[126]: NaT
PeriodIndex.values now returns array of Period object¶
.values is changed to return an array of Period objects, rather than an array
of integers (GH13988).
Previous behavior:
In [6]: pi = pd.PeriodIndex(['2011-01', '2011-02'], freq='M')
In [7]: pi.values
Out[7]: array([492, 493])
New behavior:
In [127]: pi = pd.PeriodIndex(["2011-01", "2011-02"], freq="M")
In [128]: pi.values
Out[128]: array([Period('2011-01', 'M'), Period('2011-02', 'M')], dtype=object)
Index + / - no longer used for set operations¶
Addition and subtraction of the base Index type and of DatetimeIndex
(not the numeric index types)
previously performed set operations (set union and difference). This
behavior was already deprecated since 0.15.0 (in favor of using the specific
.union() and .difference() methods), and is now disabled. When
possible, + and - are now used for element-wise operations, for
example for concatenating strings or subtracting datetimes
(GH8227, GH14127).
Previous behavior:
In [1]: pd.Index(['a', 'b']) + pd.Index(['a', 'c'])
FutureWarning: using '+' to provide set union with Indexes is deprecated, use '|' or .union()
Out[1]: Index(['a', 'b', 'c'], dtype='object')
New behavior: the same operation will now perform element-wise addition:
In [129]: pd.Index(["a", "b"]) + pd.Index(["a", "c"])
Out[129]: Index(['aa', 'bc'], dtype='object')
Note that numeric Index objects already performed element-wise operations.
For example, the behavior of adding two integer Indexes is unchanged.
The base Index is now made consistent with this behavior.
In [130]: pd.Index([1, 2, 3]) + pd.Index([2, 3, 4])
Out[130]: Int64Index([3, 5, 7], dtype='int64')
Further, because of this change, it is now possible to subtract two DatetimeIndex objects resulting in a TimedeltaIndex:
Previous behavior:
In [1]: (pd.DatetimeIndex(['2016-01-01', '2016-01-02'])
...: - pd.DatetimeIndex(['2016-01-02', '2016-01-03']))
FutureWarning: using '-' to provide set differences with datetimelike Indexes is deprecated, use .difference()
Out[1]: DatetimeIndex(['2016-01-01'], dtype='datetime64[ns]', freq=None)
New behavior:
In [131]: (
.....: pd.DatetimeIndex(["2016-01-01", "2016-01-02"])
.....: - pd.DatetimeIndex(["2016-01-02", "2016-01-03"])
.....: )
.....:
Out[131]: TimedeltaIndex(['-1 days', '-1 days'], dtype='timedelta64[ns]', freq=None)
Index.difference and .symmetric_difference changes¶
Index.difference and Index.symmetric_difference will now, more consistently, treat NaN values as any other values. (GH13514)
In [132]: idx1 = pd.Index([1, 2, 3, np.nan])
In [133]: idx2 = pd.Index([0, 1, np.nan])
Previous behavior:
In [3]: idx1.difference(idx2)
Out[3]: Float64Index([nan, 2.0, 3.0], dtype='float64')
In [4]: idx1.symmetric_difference(idx2)
Out[4]: Float64Index([0.0, nan, 2.0, 3.0], dtype='float64')
New behavior:
In [134]: idx1.difference(idx2)
Out[134]: Float64Index([2.0, 3.0], dtype='float64')
In [135]: idx1.symmetric_difference(idx2)
Out[135]: Float64Index([0.0, 2.0, 3.0], dtype='float64')
Index.unique consistently returns Index¶
Index.unique() now returns unique values as an
Index of the appropriate dtype. (GH13395).
Previously, most Index classes returned np.ndarray, and DatetimeIndex,
TimedeltaIndex and PeriodIndex returned Index to keep metadata like timezone.
Previous behavior:
In [1]: pd.Index([1, 2, 3]).unique()
Out[1]: array([1, 2, 3])
In [2]: pd.DatetimeIndex(['2011-01-01', '2011-01-02',
...: '2011-01-03'], tz='Asia/Tokyo').unique()
Out[2]:
DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
'2011-01-03 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq=None)
New behavior:
In [136]: pd.Index([1, 2, 3]).unique()
Out[136]: Int64Index([1, 2, 3], dtype='int64')
In [137]: pd.DatetimeIndex(
.....: ["2011-01-01", "2011-01-02", "2011-01-03"], tz="Asia/Tokyo"
.....: ).unique()
.....:
Out[137]:
DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
'2011-01-03 00:00:00+09:00'],
dtype='datetime64[ns, Asia/Tokyo]', freq=None)
MultiIndex constructors, groupby and set_index preserve categorical dtypes¶
MultiIndex.from_arrays and MultiIndex.from_product will now preserve categorical dtype
in MultiIndex levels (GH13743, GH13854).
In [138]: cat = pd.Categorical(["a", "b"], categories=list("bac"))
In [139]: lvl1 = ["foo", "bar"]
In [140]: midx = pd.MultiIndex.from_arrays([cat, lvl1])
In [141]: midx
Out[141]:
MultiIndex([('a', 'foo'),
('b', 'bar')],
)
Previous behavior:
In [4]: midx.levels[0]
Out[4]: Index(['b', 'a', 'c'], dtype='object')
In [5]: midx.get_level_values(0)
Out[5]: Index(['a', 'b'], dtype='object')
New behavior: the single level is now a CategoricalIndex:
In [142]: midx.levels[0]
Out[142]: CategoricalIndex(['b', 'a', 'c'], categories=['b', 'a', 'c'], ordered=False, dtype='category')
In [143]: midx.get_level_values(0)
Out[143]: CategoricalIndex(['a', 'b'], categories=['b', 'a', 'c'], ordered=False, dtype='category')
An analogous change has been made to MultiIndex.from_product.
As a consequence, groupby and set_index also preserve categorical dtypes in indexes
In [144]: df = pd.DataFrame({"A": [0, 1], "B": [10, 11], "C": cat})
In [145]: df_grouped = df.groupby(by=["A", "C"]).first()
In [146]: df_set_idx = df.set_index(["A", "C"])
Previous behavior:
In [11]: df_grouped.index.levels[1]
Out[11]: Index(['b', 'a', 'c'], dtype='object', name='C')
In [12]: df_grouped.reset_index().dtypes
Out[12]:
A int64
C object
B float64
dtype: object
In [13]: df_set_idx.index.levels[1]
Out[13]: Index(['b', 'a', 'c'], dtype='object', name='C')
In [14]: df_set_idx.reset_index().dtypes
Out[14]:
A int64
C object
B int64
dtype: object
New behavior:
In [147]: df_grouped.index.levels[1]
Out[147]: CategoricalIndex(['b', 'a', 'c'], categories=['b', 'a', 'c'], ordered=False, name='C', dtype='category')
In [148]: df_grouped.reset_index().dtypes
Out[148]:
A int64
C category
B float64
Length: 3, dtype: object
In [149]: df_set_idx.index.levels[1]
Out[149]: CategoricalIndex(['b', 'a', 'c'], categories=['b', 'a', 'c'], ordered=False, name='C', dtype='category')
In [150]: df_set_idx.reset_index().dtypes
Out[150]:
A int64
C category
B int64
Length: 3, dtype: object
Function read_csv will progressively enumerate chunks¶
When read_csv() is called with chunksize=n and without specifying an index,
each chunk used to have an independently generated index from 0 to n-1.
They are now given instead a progressive index, starting from 0 for the first chunk,
from n for the second, and so on, so that, when concatenated, they are identical to
the result of calling read_csv() without the chunksize= argument
(GH12185).
In [151]: data = "A,B\n0,1\n2,3\n4,5\n6,7"
Previous behavior:
In [2]: pd.concat(pd.read_csv(StringIO(data), chunksize=2))
Out[2]:
A B
0 0 1
1 2 3
0 4 5
1 6 7
New behavior:
In [152]: pd.concat(pd.read_csv(StringIO(data), chunksize=2))
Out[152]:
A B
0 0 1
1 2 3
2 4 5
3 6 7
[4 rows x 2 columns]
Sparse changes¶
These changes allow pandas to handle sparse data with more dtypes, and work toward a smoother experience with data handling.
Types int64 and bool support enhancements¶
Sparse data structures have gained enhanced support for int64 and bool dtypes (GH667, GH13849).
Previously, sparse data were float64 dtype by default, even if all inputs were of int or bool dtype. You had to specify dtype explicitly to create sparse data with int64 dtype. Also, fill_value had to be specified explicitly because the default was np.nan which doesn’t appear in int64 or bool data.
In [1]: pd.SparseArray([1, 2, 0, 0])
Out[1]:
[1.0, 2.0, 0.0, 0.0]
Fill: nan
IntIndex
Indices: array([0, 1, 2, 3], dtype=int32)
# specifying int64 dtype, but all values are stored in sp_values because
# fill_value default is np.nan
In [2]: pd.SparseArray([1, 2, 0, 0], dtype=np.int64)
Out[2]:
[1, 2, 0, 0]
Fill: nan
IntIndex
Indices: array([0, 1, 2, 3], dtype=int32)
In [3]: pd.SparseArray([1, 2, 0, 0], dtype=np.int64, fill_value=0)
Out[3]:
[1, 2, 0, 0]
Fill: 0
IntIndex
Indices: array([0, 1], dtype=int32)
As of v0.19.0, sparse data keeps the input dtype, and uses more appropriate fill_value defaults (0 for int64 dtype, False for bool dtype).
In [153]: pd.SparseArray([1, 2, 0, 0], dtype=np.int64)
Out[153]:
[1, 2, 0, 0]
Fill: 0
IntIndex
Indices: array([0, 1], dtype=int32)
In [154]: pd.SparseArray([True, False, False, False])
Out[154]:
[True, False, False, False]
Fill: False
IntIndex
Indices: array([0], dtype=int32)
See the docs for more details.
Operators now preserve dtypes¶
- Sparse data structures can now preserve dtype after arithmetic ops (GH13848)
s = pd.SparseSeries([0, 2, 0, 1], fill_value=0, dtype=np.int64)
s.dtype
s + 1
- Sparse data structures now support astype to convert the internal dtype (GH13900)
s = pd.SparseSeries([1.0, 0.0, 2.0, 0.0], fill_value=0)
s
s.astype(np.int64)
astype fails if the data contains values which cannot be converted to the specified dtype.
Note that this limitation also applies to fill_value, whose default is np.nan.
In [7]: pd.SparseSeries([1., np.nan, 2., np.nan], fill_value=np.nan).astype(np.int64)
Out[7]:
ValueError: unable to coerce current fill_value nan to int64 dtype
Other sparse fixes¶
- Subclassed SparseDataFrame and SparseSeries now preserve class types when slicing or transposing. (GH13787)
- SparseArray with bool dtype now supports logical (bool) operators (GH14000)
- Bug in SparseSeries with MultiIndex [] indexing may raise IndexError (GH13144)
- Bug in SparseSeries with MultiIndex [] indexing result may have normal Index (GH13144)
- Bug in SparseDataFrame in which axis=None did not default to axis=0 (GH13048)
- Bug in SparseSeries and SparseDataFrame creation with object dtype may raise TypeError (GH11633)
- Bug in SparseDataFrame doesn't respect passed SparseArray or SparseSeries's dtype and fill_value (GH13866)
- Bug in SparseArray and SparseSeries don't apply ufunc to fill_value (GH13853)
- Bug in SparseSeries.abs incorrectly keeps negative fill_value (GH13853)
- Bug in single row slicing on multi-type SparseDataFrames, types were previously forced to float (GH13917)
- Bug in SparseSeries slicing changes integer dtype to float (GH8292)
- Bug in SparseDataFrame comparison ops may raise TypeError (GH13001)
- Bug in SparseDataFrame.isnull raises ValueError (GH8276)
- Bug in SparseSeries representation with bool dtype may raise IndexError (GH13110)
- Bug in SparseSeries and SparseDataFrame of bool or int64 dtype may display its values like float64 dtype (GH13110)
- Bug in sparse indexing using SparseArray with bool dtype may return incorrect result (GH13985)
- Bug in SparseArray created from SparseSeries may lose dtype (GH13999)
- Bug in SparseSeries comparison with dense returns normal Series rather than SparseSeries (GH13999)
Indexer dtype changes¶
Note
This change only affects 64 bit python running on Windows, and only affects relatively advanced indexing operations
Methods such as Index.get_indexer that return an indexer array, coerce that array to a “platform int”, so that it can be
directly used in 3rd party library operations like numpy.take. Previously, a platform int was defined as np.int_
which corresponds to a C integer, but the correct type, and what is being used now, is np.intp, which corresponds
to the C integer size that can hold a pointer (GH3033, GH13972).
These types are the same on many platforms, but for 64-bit python on Windows,
np.int_ is 32 bits, and np.intp is 64 bits. Changing this behavior improves performance for many
operations on that platform.
Previous behavior:
In [1]: i = pd.Index(['a', 'b', 'c'])
In [2]: i.get_indexer(['b', 'b', 'c']).dtype
Out[2]: dtype('int32')
New behavior:
In [1]: i = pd.Index(['a', 'b', 'c'])
In [2]: i.get_indexer(['b', 'b', 'c']).dtype
Out[2]: dtype('int64')
Other API changes¶
- Timestamp.to_pydatetime will issue a UserWarning when warn=True and the instance has a non-zero number of nanoseconds; previously this would print a message to stdout (GH14101).
- Series.unique() with datetime and timezone now returns an array of Timestamp with timezone (GH13565).
- Panel.to_sparse() will raise a NotImplementedError exception when called (GH13778).
- Index.reshape() will raise a NotImplementedError exception when called (GH12882).
- .filter() enforces mutual exclusion of the keyword arguments (GH12399).
- eval's upcasting rules for float32 types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to float64 if you multiply a pandas float32 object by a scalar float64 (GH12388).
- An UnsupportedFunctionCall error is now raised if NumPy ufuncs like np.mean are called on groupby or resample objects (GH12811).
- __setitem__ will no longer apply a callable rhs as a function instead of storing it. Call where directly to get the previous behavior (GH13299).
- Calls to .sample() will respect the random seed set via numpy.random.seed(n) (GH13161)
- Styler.apply is now more strict about the outputs your function must return. For axis=0 or axis=1, the output shape must be identical. For axis=None, the output must be a DataFrame with identical columns and index labels (GH13222).
- Float64Index.astype(int) will now raise ValueError if Float64Index contains NaN values (GH13149)
- TimedeltaIndex.astype(int) and DatetimeIndex.astype(int) will now return Int64Index instead of np.array (GH13209)
- Passing Period with multiple frequencies to normal Index now returns Index with object dtype (GH13664)
- PeriodIndex.fillna with a Period of a different freq now coerces to object dtype (GH13664)
- Faceted boxplots from DataFrame.boxplot(by=col) now return a Series when return_type is not None. Previously these returned an OrderedDict. Note that when return_type=None, the default, these still return a 2-D NumPy array (GH12216, GH7096).
- pd.read_hdf will now raise a ValueError instead of KeyError, if a mode other than r, r+ and a is supplied. (GH13623)
- pd.read_csv(), pd.read_table(), and pd.read_hdf() raise the builtin FileNotFoundError exception for Python 3.x when called on a nonexistent file; this is back-ported as IOError in Python 2.x (GH14086)
- More informative exceptions are passed through the csv parser. The exception type would now be the original exception type instead of CParserError (GH13652).
- pd.read_csv() in the C engine will now issue a ParserWarning or raise a ValueError when sep encoded is more than one character long (GH14065)
- DataFrame.values will now return float64 with a DataFrame of mixed int64 and uint64 dtypes, conforming to np.find_common_type (GH10364, GH13917)
- groupby.groups will now return a dictionary of Index objects, rather than a dictionary of np.ndarray or lists (GH14293); a short sketch follows this list.
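A minimal sketch of the groupby.groups change from the last item above (example data is illustrative): the mapping values are now Index objects rather than plain arrays or lists.

import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})

# .groups now maps each group label to an Index of row labels,
# e.g. {'a': Int64Index([0, 1], ...), 'b': Int64Index([2], ...)}
groups = df.groupby("key").groups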
Deprecations¶
- Series.reshape and Categorical.reshape have been deprecated and will be removed in a subsequent release (GH12882, GH12882)
- PeriodIndex.to_datetime has been deprecated in favor of PeriodIndex.to_timestamp (GH8254)
- Timestamp.to_datetime has been deprecated in favor of Timestamp.to_pydatetime (GH8254)
- Index.to_datetime and DatetimeIndex.to_datetime have been deprecated in favor of pd.to_datetime (GH8254)
- The pandas.core.datetools module has been deprecated and will be removed in a subsequent release (GH14094)
- SparseList has been deprecated and will be removed in a future version (GH13784)
- DataFrame.to_html() and DataFrame.to_latex() have dropped the colSpace parameter in favor of col_space (GH13857)
- DataFrame.to_sql() has deprecated the flavor parameter, as it is superfluous when SQLAlchemy is not installed (GH13611)
- Deprecated read_csv keywords:
  - compact_ints and use_unsigned have been deprecated and will be removed in a future version (GH13320)
  - buffer_lines has been deprecated and will be removed in a future version (GH13360)
  - as_recarray has been deprecated and will be removed in a future version (GH13373)
  - skip_footer has been deprecated in favor of skipfooter and will be removed in a future version (GH13349)
- Top-level pd.ordered_merge() has been renamed to pd.merge_ordered() and the original name will be removed in a future version (GH13358)
- The Timestamp.offset property (and named arg in the constructor) has been deprecated in favor of freq (GH12160)
- pd.tseries.util.pivot_annual is deprecated. Use pivot_table as an alternative, an example is here (GH736)
- pd.tseries.util.isleapyear has been deprecated and will be removed in a subsequent release. Datetime-likes now have a .is_leap_year property (GH13727)
- Panel4D and PanelND constructors are deprecated and will be removed in a future version. The recommended way to represent these types of n-dimensional data is with the xarray package. pandas provides a to_xarray() method to automate this conversion (GH13564).
- pandas.tseries.frequencies.get_standard_freq is deprecated. Use pandas.tseries.frequencies.to_offset(freq).rule_code instead (GH13874)
- pandas.tseries.frequencies.to_offset's freqstr keyword is deprecated in favor of freq (GH13874)
- Categorical.from_array has been deprecated and will be removed in a future version (GH13854)
Removal of prior version deprecations/changes¶
- The SparsePanel class has been removed (GH13778)
- The pd.sandbox module has been removed in favor of the external library pandas-qt (GH13670)
- The pandas.io.data and pandas.io.wb modules are removed in favor of the pandas-datareader package (GH13724).
- The pandas.tools.rplot module has been removed in favor of the seaborn package (GH13855)
- DataFrame.to_csv() has dropped the engine parameter, as was deprecated in 0.17.1 (GH11274, GH13419)
- DataFrame.to_dict() has dropped the outtype parameter in favor of orient (GH13627, GH8486)
- pd.Categorical has dropped setting of the ordered attribute directly in favor of the set_ordered method (GH13671)
- pd.Categorical has dropped the levels attribute in favor of categories (GH8376)
- DataFrame.to_sql() has dropped the mysql option for the flavor parameter (GH13611)
- Panel.shift() has dropped the lags parameter in favor of periods (GH14041)
- pd.Index has dropped the diff method in favor of difference (GH13669)
- pd.DataFrame has dropped the to_wide method in favor of to_panel (GH14039)
- Series.to_csv has dropped the nanRep parameter in favor of na_rep (GH13804)
- Series.xs, DataFrame.xs, Panel.xs, Panel.major_xs, and Panel.minor_xs have dropped the copy parameter (GH13781)
- str.split has dropped the return_type parameter in favor of expand (GH13701)
- Removal of the legacy time rules (offset aliases), deprecated since 0.17.0 (these have been aliases since 0.8.0) (GH13590, GH13868). Legacy time rules now raise ValueError. For the list of currently supported offsets, see here.
- The default value for the return_type parameter for DataFrame.plot.box and DataFrame.boxplot changed from None to "axes". These methods will now return a matplotlib axes by default instead of a dictionary of artists. See here (GH6581).
- The tquery and uquery functions in the pandas.io.sql module are removed (GH5950).
Performance improvements¶
- Improved performance of sparse IntIndex.intersect (GH13082)
- Improved performance of sparse arithmetic with BlockIndex when the number of blocks are large, though recommended to use IntIndex in such cases (GH13082)
- Improved performance of DataFrame.quantile() as it now operates per-block (GH11623)
- Improved performance of float64 hash table operations, fixing some very slow indexing and groupby operations in python 3 (GH13166, GH13334)
- Improved performance of DataFrameGroupBy.transform (GH12737)
- Improved performance of Index and Series .duplicated (GH10235)
- Improved performance of Index.difference (GH12044)
- Improved performance of RangeIndex.is_monotonic_increasing and is_monotonic_decreasing (GH13749)
- Improved performance of datetime string parsing in DatetimeIndex (GH13692)
- Improved performance of hashing Period (GH12817)
- Improved performance of factorize of datetime with timezone (GH13750)
- Improved performance by lazily creating indexing hashtables on larger Indexes (GH14266)
- Improved performance of groupby.groups (GH14293)
- Removed unnecessary materializing of a MultiIndex when introspecting for memory usage (GH14308)
Bug fixes¶
Bug in groupby().shift(), which could cause a segfault or corruption in rare circumstances when grouping by columns with missing values (GH13813)
Bug in groupby().cumsum() calculating cumprod when axis=1 (GH13994)
Bug in pd.to_timedelta() in which the errors parameter was not being respected (GH13613)
Bug in io.json.json_normalize(), where non-ascii keys raised an exception (GH13213)
Bug when passing a not-default-indexed Series as xerr or yerr in .plot() (GH11858)
Bug in area plot drawing the legend incorrectly if the subplot is enabled or the legend is moved after plotting (matplotlib 1.5.0 is required to draw the area plot legend properly) (GH9161, GH13544)
Bug in DataFrame assignment with an object-dtyped Index where the resultant column is mutable with respect to the original object (GH13522)
Bug in matplotlib AutoDateFormatter; this restores the second scaled formatting and re-adds micro-second scaled formatting (GH13131)
Bug in selection from a HDFStore with a fixed format and start and/or stop specified; the selected range is now returned (GH8287)
Bug in Categorical.from_codes() where an unhelpful error was raised when an invalid ordered parameter was passed in (GH14058)
Bug in Series construction from a tuple of integers on Windows not returning the default dtype (int64) (GH13646)
Bug in TimedeltaIndex addition with a datetime-like object where addition overflow was not being caught (GH14068)
Bug in .groupby(..).resample(..) when the same object is called multiple times (GH13174)
Bug in .to_records() when the index name is a unicode string (GH13172)
Bug in calling .memory_usage() on an object which doesn't implement it (GH12924)
Regression in Series.quantile with nans (also shows up in .median() and .describe()); furthermore the Series is now named with the quantile (GH13098, GH13146)
Bug in SeriesGroupBy.transform with datetime values and missing groups (GH13191)
Bug where empty Series were incorrectly coerced in datetime-like numeric operations (GH13844)
Bug in the Categorical constructor when passed a Categorical containing datetimes with timezones (GH14190)
Bug in Series.str.extractall() with a str index raising ValueError (GH13156)
Bug in Series.str.extractall() with a single group and quantifier (GH13382)
Bug where DatetimeIndex and Period subtraction raises ValueError or AttributeError rather than TypeError (GH13078)
Bug where Index and Series created with NaN and NaT mixed data may not have datetime64 dtype (GH13324)
Bug where Index and Series may ignore np.datetime64('nat') and np.timedelta64('nat') when inferring the dtype (GH13324)
Bug where PeriodIndex and Period subtraction raises AttributeError (GH13071)
Bug in PeriodIndex construction returning a float64 index in some circumstances (GH13067)
Bug in .resample(..) with a PeriodIndex not changing its freq appropriately when empty (GH13067)
Bug in .resample(..) with a PeriodIndex not retaining its type or name with an empty DataFrame when empty (GH13212)
Bug in groupby(..).apply(..) when the passed function returns scalar values per group (GH13468)
Bug in groupby(..).resample(..) where passing some keywords would raise an exception (GH13235)
Bug in .tz_convert on a tz-aware DatetimeIndex that relied on the index being sorted for correct results (GH13306)
Bug in .tz_localize with dateutil.tz.tzlocal which may return an incorrect result (GH13583)
Bug where a DatetimeTZDtype dtype with dateutil.tz.tzlocal could not be regarded as a valid dtype (GH13583)
Bug in pd.read_hdf() where attempting to load an HDF file with a single dataset that had one or more categorical columns failed unless the key argument was set to the name of the dataset (GH13231)
Bug in .rolling() that allowed a negative integer window in construction of the Rolling() object, which would later fail on aggregation (GH13383)
Bug in Series indexing with tuple-valued data and a numeric index (GH13509)
Bug in printing pd.DataFrame where unusual elements with the object dtype were causing segfaults (GH13717)
Bug in ranking Series which could result in segfaults (GH13445)
Bug in various index types which did not propagate the name of the passed index (GH12309)
Bug in DatetimeIndex, which did not honour copy=True (GH13205)
Bug in DatetimeIndex.is_normalized returning incorrect results for a normalized date_range in the case of local timezones (GH13459)
Bug where pd.concat and .append may coerce datetime64 and timedelta to object dtype containing python built-in datetime or timedelta rather than Timestamp or Timedelta (GH13626)
Bug where PeriodIndex.append may raise AttributeError when the result is object dtype (GH13221)
Bug where CategoricalIndex.append may accept a normal list (GH13626)
Bug where pd.concat and .append with the same timezone get reset to UTC (GH7795)
Bug where Series and DataFrame .append raises AmbiguousTimeError if the data contains a datetime near a DST boundary (GH13626)
Bug in DataFrame.to_csv() in which float values were being quoted even though quoting was specified for non-numeric values only (GH12922, GH13259)
Bug in DataFrame.describe() raising ValueError with only boolean columns (GH13898)
Bug in MultiIndex slicing where extra elements were returned when the level is non-unique (GH12896)
Bug where .str.replace does not raise TypeError for an invalid replacement (GH13438)
Bug in MultiIndex.from_arrays which didn't check that the input array lengths matched (GH13599)
Bug in cartesian_product and MultiIndex.from_product which may raise with empty input arrays (GH12258)
Bug in pd.read_csv() which may cause a segfault or corruption when iterating in large chunks over a stream/file under rare circumstances (GH13703)
Bug in pd.read_csv() which caused errors to be raised when a dictionary containing scalars is passed in for na_values (GH12224)
Bug in pd.read_csv() which caused BOM files to be incorrectly parsed by not ignoring the BOM (GH4793)
Bug in pd.read_csv() with engine='python' which raised errors when a numpy array was passed in for usecols (GH12546)
Bug in pd.read_csv() where the index columns were being incorrectly parsed when parsed as dates with a thousands parameter (GH14066)
Bug in pd.read_csv() with engine='python' in which NaN values weren't being detected after data was converted to numeric values (GH13314)
Bug in pd.read_csv() in which the nrows argument was not properly validated for both engines (GH10476)
Bug in pd.read_csv() with engine='python' in which infinities of mixed-case forms were not being interpreted properly (GH13274)
Bug in pd.read_csv() with engine='python' in which trailing NaN values were not being parsed (GH13320)
Bug in pd.read_csv() with engine='python' when reading from a tempfile.TemporaryFile on Windows with Python 3 (GH13398)
Bug in pd.read_csv() that prevented the usecols kwarg from accepting single-byte unicode strings (GH13219)
Bug in pd.read_csv() that prevented usecols from being an empty set (GH13402)
Bug in pd.read_csv() in the C engine where the NULL character was not being parsed as NULL (GH14012)
Bug in pd.read_csv() with engine='c' in which a NULL quotechar was not accepted even though quoting was specified as None (GH13411)
Bug in pd.read_csv() with engine='c' in which fields were not properly cast to float when quoting was specified as non-numeric (GH13411)
Bug in pd.read_csv() in Python 2.x with non-UTF8 encoded, multi-character separated data (GH3404)
Bug in pd.read_csv(), where aliases for utf-xx (e.g. UTF-xx, UTF_xx, utf_xx) raised UnicodeDecodeError (GH13549)
Bug in pd.read_csv, pd.read_table, pd.read_fwf, pd.read_stata and pd.read_sas where files were opened by parsers but not closed if both chunksize and iterator were None (GH13940)
Bug in StataReader, StataWriter, XportReader and SAS7BDATReader where a file was not properly closed when an error was raised (GH13940)
Bug in pd.pivot_table() where margins_name is ignored when aggfunc is a list (GH13354)
Bug in pd.Series.str.zfill, center, ljust, rjust, and pad which did not raise TypeError when passed non-integers (GH13598)
Bug in checking for any null objects in a TimedeltaIndex, which always returned True (GH13603)
Bug where Series arithmetic raises TypeError if it contains datetime-like values as object dtype (GH13043)
Bug where Series.isnull() and Series.notnull() ignore Period('NaT') (GH13737)
Bug where Series.fillna() and Series.dropna() do not affect Period('NaT') (GH13737)
Bug where .fillna(value=np.nan) incorrectly raises KeyError on a category dtyped Series (GH14021)
Bug in extension dtype creation where the created types were not is/identical (GH13285)
Bug in .resample(..) where incorrect warnings were triggered by IPython introspection (GH13618)
Bug where NaT - Period raises AttributeError (GH13071)
Bug where Series comparison may output an incorrect result if the rhs contains NaT (GH9005)
Bug where Series and Index comparison may output an incorrect result if it contains NaT with object dtype (GH13592)
Bug where Period addition raises TypeError if Period is on the right hand side (GH13069)
Bug where comparison between Period and Series or Index raises TypeError (GH13200)
Bug in pd.set_eng_float_format() that would prevent NaN and Inf from formatting (GH11981)
Bug where .unstack with Categorical dtype resets .ordered to True (GH13249)
Clean some compile time warnings in datetime parsing (GH13607)
Bug where factorize raises AmbiguousTimeError if the data contains a datetime near a DST boundary (GH13750)
Bug where .set_index raises AmbiguousTimeError if the new index contains a DST boundary and multiple levels (GH12920)
Bug where .shift raises AmbiguousTimeError if the data contains a datetime near a DST boundary (GH13926)
Bug where pd.read_hdf() returns an incorrect result for a DataFrame with a categorical column and a query which doesn't match any values (GH13792)
Bug in .iloc when indexing with a non-lexsorted MultiIndex (GH13797)
Bug in .loc when indexing with date strings in a reverse sorted DatetimeIndex (GH14316)
Bug in Series comparison operators when dealing with zero-dim NumPy arrays (GH13006)
Bug where .combine_first may return an incorrect dtype (GH7630, GH10567)
Bug in groupby where apply returns a different result depending on whether the first result is None or not (GH12824)
Bug in groupby(..).nth() where the group key is included inconsistently if called after .head()/.tail() (GH12839)
Bug where .to_html, .to_latex and .to_string silently ignore a custom datetime formatter passed through the formatters keyword (GH10690)
Bug in DataFrame.iterrows(), not yielding a Series subclass if one is defined (GH13977)
Bug in pd.to_numeric when errors='coerce' and the input contains non-hashable objects (GH13324)
Bug where invalid Timedelta arithmetic and comparison may raise ValueError rather than TypeError (GH13624)
Bug where invalid datetime parsing in to_datetime and DatetimeIndex may raise TypeError rather than ValueError (GH11169, GH11287)
Bug where an Index created with a tz-aware Timestamp and a mismatched tz option incorrectly coerces the timezone (GH13692)
Bug where a DatetimeIndex with nanosecond frequency does not include the timestamp specified with end (GH13672)
Bug in Series when setting a slice with a np.timedelta64 (GH14155)
Bug where Index raises OutOfBoundsDatetime if the datetime exceeds the datetime64[ns] bounds, rather than coercing to object dtype (GH13663)
Bug where Index may ignore a specified datetime64 or timedelta64 passed as dtype (GH13981)
Bug where RangeIndex could be created with no arguments rather than raising TypeError (GH13793)
Bug where .value_counts() raises OutOfBoundsDatetime if the data exceeds the datetime64[ns] bounds (GH13663)
Bug where DatetimeIndex may raise OutOfBoundsDatetime if the input np.datetime64 has a unit other than ns (GH9114)
Bug where Series creation with a np.datetime64 which has a unit other than ns as object dtype results in incorrect values (GH13876)
Bug in resample with timedelta data where the data was cast to float (GH13119)
Bug where pd.isnull() and pd.notnull() raise TypeError if the input datetime-like has a unit other than ns (GH13389)
Bug where pd.merge() may raise TypeError if the input datetime-like has a unit other than ns (GH13389)
Bug where HDFStore/read_hdf() discarded DatetimeIndex.name if tz was set (GH13884)
Bug where Categorical.remove_unused_categories() changes the .codes dtype to platform int (GH13261)
Bug where groupby with as_index=False returns all NaN's when grouping on multiple columns including a categorical one (GH13204)
Bug in df.groupby(...)[...] where getitem with Int64Index raised an error (GH13731)
Bug in the CSS classes assigned to DataFrame.style for index names. Previously they were assigned "col_heading level<n> col<c>" where n was the number of levels + 1. Now they are assigned "index_name level<n>", where n is the correct level for that MultiIndex.
Bug where pd.read_gbq() could throw ImportError: No module named discovery as a result of a naming conflict with another python package called apiclient (GH13454)
Bug where Index.union returns an incorrect result with a named empty index (GH13432)
Bugs in Index.difference and DataFrame.join which raise in Python 3 when using mixed-integer indexes (GH13432, GH12814)
Bug in subtracting a tz-aware datetime.datetime from a tz-aware datetime64 series (GH14088)
Bug in .to_excel() when the DataFrame contains a MultiIndex which contains a label with a NaN value (GH13511)
Bug where invalid frequency offset strings like "D1" or "-2-3H" may not raise ValueError (GH13930)
Bug in concat and groupby for hierarchical frames with RangeIndex levels (GH13542)
Bug in Series.str.contains() for Series containing only NaN values of object dtype (GH14171)
Bug where the agg() function on a groupby dataframe changes the dtype of a datetime64[ns] column to float64 (GH12821)
Bug where using a NumPy ufunc with PeriodIndex to add or subtract an integer raises IncompatibleFrequency. Note that using standard operators like + or - is recommended, because standard operators use a more efficient path (GH13980)
Bug in operations on NaT returning float instead of datetime64[ns] (GH12941)
Bug where Series flexible arithmetic methods (like .add()) raise ValueError when axis=None (GH13894)
Bug in DataFrame.to_csv() with MultiIndex columns in which a stray empty line was added (GH6618)
Bug where DatetimeIndex, TimedeltaIndex and PeriodIndex .equals() may return True when the input isn't an Index but contains the same values (GH13107)
Bug where assignment against a datetime with timezone may not work if it contains a datetime near a DST boundary (GH14146)
Bug in pd.eval() and HDFStore query truncating long float literals with python 2 (GH14241)
Bug where Index raises KeyError displaying an incorrect column when the column is not in the df and the columns contain duplicate values (GH13822)
Bug in Period and PeriodIndex creating wrong dates when the frequency has combined offset aliases (GH13874)
Bug in .to_string() when called with an integer line_width and index=False, raising an UnboundLocalError exception because idx was referenced before assignment.
Bug in eval() where the resolvers argument would not accept a list (GH14095)
Bugs in stack, get_dummies, make_axis_dummies which don't preserve categorical dtypes in (multi)indexes (GH13854)
PeriodIndex can now accept list and array which contain pd.NaT (GH13430)
Bug in df.groupby where .median() returns arbitrary values if the grouped dataframe contains empty bins (GH13629)
Bug in Index.copy() where the name parameter was ignored (GH14302); a minimal illustration follows this list
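The snippet below is a minimal, hedged sketch of the Index.copy() behaviour described in the last item above; the index values and names are invented for illustration and are not taken from the issue report.

import pandas as pd

idx = pd.Index([1, 2, 3], name="original")

# With the fix, the name argument to copy() is honoured rather than ignored.
copied = idx.copy(name="renamed")
print(copied.name)  # expected: 'renamed'
print(idx.name)     # the source index keeps its own name: 'original'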
Contributors¶
A total of 117 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.
Adrien Emery +
Alex Alekseyev
Alex Vig +
Allen Riddell +
Amol +
Amol Agrawal +
Andy R. Terrel +
Anthonios Partheniou
Ben Kandel +
Bob Baxley +
Brett Rosen +
Camilo Cota +
Chris
Chris Grinolds
Chris Warth
Christian Hudon
Christopher C. Aycock
Daniel Siladji +
Douglas McNeil
Drewrey Lupton +
Eduardo Blancas Reyes +
Elliot Marsden +
Evan Wright
Felix Marczinowski +
Francis T. O’Donovan
Geraint Duck +
Giacomo Ferroni +
Grant Roch +
Gábor Lipták
Haleemur Ali +
Hassan Shamim +
Iulius Curt +
Ivan Nazarov +
Jeff Reback
Jeffrey Gerard +
Jenn Olsen +
Jim Crist
Joe Jevnik
John Evans +
John Freeman
John Liekezer +
John W. O’Brien
John Zwinck +
Johnny Gill +
Jordan Erenrich +
Joris Van den Bossche
Josh Howes +
Jozef Brandys +
Ka Wo Chen
Kamil Sindi +
Kerby Shedden
Kernc +
Kevin Sheppard
Matthieu Brucher +
Maximilian Roos
Michael Scherer +
Mike Graham +
Mortada Mehyar
Muhammad Haseeb Tariq +
Nate George +
Neil Parley +
Nicolas Bonnotte
OXPHOS
Pan Deng / Zora +
Paul +
Paul Mestemaker +
Pauli Virtanen
Pawel Kordek +
Pietro Battiston
Piotr Jucha +
Ravi Kumar Nimmi +
Robert Gieseke
Robert Kern +
Roger Thomas
Roy Keyes +
Russell Smith +
Sahil Dua +
Sanjiv Lobo +
Sašo Stanovnik +
Shawn Heide +
Sinhrks
Stephen Kappel +
Steve Choi +
Stewart Henderson +
Sudarshan Konge +
Thomas A Caswell
Tom Augspurger
Tom Bird +
Uwe Hoffmann +
WillAyd +
Xiang Zhang +
YG-Riku +
Yadunandan +
Yaroslav Halchenko
Yuichiro Kaneko +
adneu
agraboso +
babakkeyvani +
c123w +
chris-b1
cmazzullo +
conquistador1492 +
cr3 +
dsm054
gfyoung
harshul1610 +
iamsimha +
jackieleng +
mpuels +
pijucha +
priyankjain +
sinhrks
wcwagner +
yui-knk +
zhangjinjie +
znmean +
颜发才(Yan Facai) +