v0.13.1 (February 3, 2014)
This is a minor release from 0.13.0 and includes a small number of API changes, several new features, enhancements, and performance improvements along with a large number of bug fixes. We recommend that all users upgrade to this version.
Highlights include:
- Added infer_datetime_format keyword to read_csv/to_datetime to allow speedups for homogeneously formatted datetimes.
- Will intelligently limit display precision for datetime/timedelta formats.
- Enhanced Panel apply() method.
- Suggested tutorials in new Tutorials section.
- Our pandas ecosystem is growing; we now feature related projects in a new Pandas Ecosystem section.
- Much work has been taking place on improving the docs, and a new Contributing section has been added.
- Even though it may only be of interest to devs, we <3 our new CI status page: ScatterCI.
Warning
0.13.1 fixes a bug that was caused by a combination of having numpy < 1.8 and doing chained assignment on a string-like array. Please review the docs; chained indexing can have unexpected results and should generally be avoided.
This would previously segfault:
In [1]: df = pd.DataFrame({'A': np.array(['foo', 'bar', 'bah', 'foo', 'bar'])})
In [2]: df['A'].iloc[0] = np.nan
In [3]: df
Out[3]:
A
0 NaN
1 bar
2 bah
3 foo
4 bar
The recommended way to do this type of assignment is:
In [4]: df = pd.DataFrame({'A': np.array(['foo', 'bar', 'bah', 'foo', 'bar'])})
In [5]: df.loc[0, 'A'] = np.nan
In [6]: df
Out[6]:
A
0 NaN
1 bar
2 bah
3 foo
4 bar
Output Formatting Enhancements
The df.info() view now displays dtype info per column (GH5682)
df.info() now honors the option max_info_rows, to disable null counts for large frames (GH5974)

In [7]: max_info_rows = pd.get_option('max_info_rows')

In [8]: df = pd.DataFrame({'A': np.random.randn(10),
   ...:                    'B': np.random.randn(10),
   ...:                    'C': pd.date_range('20130101', periods=10)
   ...:                    })
   ...:

In [9]: df.iloc[3:6, [0, 2]] = np.nan

# set to not display the null counts
In [10]: pd.set_option('max_info_rows', 0)

In [11]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
A    float64
B    float64
C    datetime64[ns]
dtypes: datetime64[ns](1), float64(2)
memory usage: 320.0 bytes

# this is the default (same as in 0.13.0)
In [12]: pd.set_option('max_info_rows', max_info_rows)

In [13]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
A    7 non-null float64
B    10 non-null float64
C    7 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(2)
memory usage: 320.0 bytes
Add show_dimensions display option for the new DataFrame repr to control whether the dimensions print.

In [14]: df = pd.DataFrame([[1, 2], [3, 4]])

In [15]: pd.set_option('show_dimensions', False)

In [16]: df
Out[16]:
   0  1
0  1  2
1  3  4

In [17]: pd.set_option('show_dimensions', True)

In [18]: df
Out[18]:
   0  1
0  1  2
1  3  4

[2 rows x 2 columns]
The ArrayFormatter for datetime and timedelta64 now intelligently limits precision based on the values in the array (GH3401)

Previously, output might look like:

                  age               today                 diff
0 2001-01-01 00:00:00 2013-04-19 00:00:00  4491 days, 00:00:00
1 2004-06-01 00:00:00 2013-04-19 00:00:00  3244 days, 00:00:00
Now the output looks like:
In [19]: df = pd.DataFrame([pd.Timestamp('20010101'),
   ....:                    pd.Timestamp('20040601')], columns=['age'])
   ....:

In [20]: df['today'] = pd.Timestamp('20130419')

In [21]: df['diff'] = df['today'] - df['age']

In [22]: df
Out[22]:
         age      today      diff
0 2001-01-01 2013-04-19 4491 days
1 2004-06-01 2013-04-19 3244 days

[2 rows x 3 columns]
API changes
Add -NaN and -nan to the default set of NA values (GH5952). See NA Values.
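For illustration, a small sketch (not part of the original notes; the inline CSV data is made up) of how these strings now parse as missing by default:

from io import StringIO
import pandas as pd

data = "value\n1.0\n-NaN\n2.0\n-nan\n"

df = pd.read_csv(StringIO(data))

# Both '-NaN' and '-nan' are now treated as NA without passing na_values
df['value'].isnull().sum()   # 2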
Added Series.str.get_dummies vectorized string method (GH6021), to extract dummy/indicator variables for separated string columns:

In [23]: s = pd.Series(['a', 'a|b', np.nan, 'a|c'])

In [24]: s.str.get_dummies(sep='|')
Out[24]:
   a  b  c
0  1  0  0
1  1  1  0
2  0  0  0
3  1  0  1

[4 rows x 3 columns]
Added the NDFrame.equals() method to compare whether two NDFrames have equal axes, dtypes, and values. Added the array_equivalent function to compare whether two ndarrays are equal. NaNs in identical locations are treated as equal. (GH5283) See also the docs for a motivating example.

df = pd.DataFrame({'col': ['foo', 0, np.nan]})
df2 = pd.DataFrame({'col': [np.nan, 0, 'foo']}, index=[2, 1, 0])
df.equals(df2)
df.equals(df2.sort_index())
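A minimal sketch of array_equivalent; the import location (pandas.core.common) is an assumption and may differ between versions:

import numpy as np
from pandas.core.common import array_equivalent  # import path assumed

left = np.array([1.0, np.nan, 3.0])
right = np.array([1.0, np.nan, 3.0])

# Element-wise comparison treats NaN != NaN, so this is False,
# while array_equivalent treats NaNs in matching positions as equal
(left == right).all()          # False
array_equivalent(left, right)  # True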
DataFrame.apply will use the reduce argument to determine whether a Series or a DataFrame should be returned when the DataFrame is empty (GH6007).

Previously, calling DataFrame.apply on an empty DataFrame would return either a DataFrame if there were no columns, or the function being applied would be called with an empty Series to guess whether a Series or DataFrame should be returned:

In [32]: def applied_func(col):
   ....:     print("Apply function being called with: ", col)
   ....:     return col.sum()
   ....:

In [33]: empty = pd.DataFrame(columns=['a', 'b'])

In [34]: empty.apply(applied_func)
Apply function being called with:  Series([], Length: 0, dtype: float64)
Out[34]:
a   NaN
b   NaN
Length: 2, dtype: float64
Now, when apply is called on an empty DataFrame: if the reduce argument is True a Series will be returned, if it is False a DataFrame will be returned, and if it is None (the default) the function being applied will be called with an empty series to try and guess the return type.

In [35]: empty.apply(applied_func, reduce=True)
Out[35]:
a   NaN
b   NaN
Length: 2, dtype: float64

In [36]: empty.apply(applied_func, reduce=False)
Out[36]:
Empty DataFrame
Columns: [a, b]
Index: []

[0 rows x 2 columns]
Prior Version Deprecations/Changes
There are no announced changes in 0.13 or prior that are taking effect as of 0.13.1
Deprecations
There are no deprecations of prior behavior in 0.13.1
Enhancements
pd.read_csv and pd.to_datetime learned a new infer_datetime_format keyword which greatly improves parsing performance in many cases. Thanks to @lexual for suggesting and @danbirken for rapidly implementing. (GH5490, GH6021)

If parse_dates is enabled and this flag is set, pandas will attempt to infer the format of the datetime strings in the columns, and if it can be inferred, switch to a faster method of parsing them. In some cases this can increase the parsing speed by ~5-10x.

# Try to infer the format for the index column
df = pd.read_csv('foo.csv', index_col=0, parse_dates=True,
                 infer_datetime_format=True)
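The same keyword works with pd.to_datetime; a small sketch with made-up sample strings:

import pandas as pd

# A homogeneously formatted column of datetime strings
dates = pd.Series(['2013-01-01 09:00', '2013-01-02 09:00', '2013-01-03 09:00'])

# Infer the shared format once, then parse the whole column with it
parsed = pd.to_datetime(dates, infer_datetime_format=True)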
date_format and datetime_format keywords can now be specified when writing to excel files (GH4133)
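For illustration, a sketch of using these keywords; it assumes they are passed to ExcelWriter (per GH4133), that an Excel engine such as xlsxwriter is installed, and the file name is made up:

import pandas as pd

df = pd.DataFrame({'when': pd.date_range('20140101', periods=3),
                   'value': [1, 2, 3]})

# date_format / datetime_format control how date-like cells are rendered
writer = pd.ExcelWriter('report.xlsx',
                        date_format='YYYY-MM-DD',
                        datetime_format='YYYY-MM-DD HH:MM:SS')
df.to_excel(writer, sheet_name='data')
writer.save()  # writer.close() in newer pandas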
MultiIndex.from_product convenience function for creating a MultiIndex from the cartesian product of a set of iterables (GH6055):

In [25]: shades = ['light', 'dark']

In [26]: colors = ['red', 'green', 'blue']

In [27]: pd.MultiIndex.from_product([shades, colors], names=['shade', 'color'])
Out[27]:
MultiIndex(levels=[['dark', 'light'], ['blue', 'green', 'red']],
           codes=[[1, 1, 1, 0, 0, 0], [2, 1, 0, 2, 1, 0]],
           names=['shade', 'color'])
Panel apply() will work on non-ufuncs. See the docs.

In [28]: import pandas.util.testing as tm

In [29]: panel = tm.makePanel(5)

In [30]: panel
Out[30]:
<class 'pandas.core.panel.Panel'>
Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis)
Items axis: ItemA to ItemC
Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00
Minor_axis axis: A to D

In [31]: panel['ItemA']
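As an example of a non-ufunc, a sketch using the panel constructed above and assuming the 0.13-era Panel.apply signature; the standardization function itself is just illustrative:

# Each 'x' passed to the lambda is a Series along the major axis;
# the results are reassembled into a Panel of the same shape
standardized = panel.apply(lambda x: (x - x.mean()) / x.std(),
                           axis='major_axis')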