Frequently Asked Questions (FAQ)#
DataFrame memory usage#
The memory usage of a DataFrame (including the index) is shown when calling info(). A configuration option, display.memory_usage (see the list of options), specifies whether the DataFrame memory usage will be displayed when invoking the df.info() method.
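The option can be changed at runtime with pd.set_option (a minimal sketch; to the best of my knowledge True, False, and "deep" are the accepted values, with "deep" behaving like memory_usage="deep" below):

pd.set_option("display.memory_usage", False)  # df.info() no longer prints the memory usage line
pd.reset_option("display.memory_usage")       # restore the default of True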
For example, the memory usage of the DataFrame below (the examples assume the usual import numpy as np and import pandas as pd) is shown when calling info():
In [1]: dtypes = [
...: "int64",
...: "float64",
...: "datetime64[ns]",
...: "timedelta64[ns]",
...: "complex128",
...: "object",
...: "bool",
...: ]
...:
In [2]: n = 5000
In [3]: data = {t: np.random.randint(100, size=n).astype(t) for t in dtypes}
In [4]: df = pd.DataFrame(data)
In [5]: df["categorical"] = df["object"].astype("category")
In [6]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 int64 5000 non-null int64
1 float64 5000 non-null float64
2 datetime64[ns] 5000 non-null datetime64[ns]
3 timedelta64[ns] 5000 non-null timedelta64[ns]
4 complex128 5000 non-null complex128
5 object 5000 non-null object
6 bool 5000 non-null bool
7 categorical 5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1), object(1), timedelta64[ns](1)
memory usage: 288.2+ KB
The + symbol indicates that the true memory usage could be higher, because pandas does not count the memory used by values in columns with dtype=object.

Passing memory_usage='deep' will enable a more accurate memory usage report, accounting for the full usage of the contained objects. This is optional as it can be expensive to do this deeper introspection.
In [7]: df.info(memory_usage="deep")
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5000 entries, 0 to 4999
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 int64 5000 non-null int64
1 float64 5000 non-null float64
2 datetime64[ns] 5000 non-null datetime64[ns]
3 timedelta64[ns] 5000 non-null timedelta64[ns]
4 complex128 5000 non-null complex128
5 object 5000 non-null object
6 bool 5000 non-null bool
7 categorical 5000 non-null category
dtypes: bool(1), category(1), complex128(1), datetime64[ns](1), float64(1), int64(1), object(1), timedelta64[ns](1)
memory usage: 424.7 KB
By default the display option is set to True but can be explicitly overridden by passing the memory_usage argument when invoking df.info().

The memory usage of each column can be found by calling the memory_usage() method. This returns a Series with an index represented by column names and memory usage of each column shown in bytes. For the DataFrame above, the memory usage of each column and the total memory usage can be found with the memory_usage method:
In [8]: df.memory_usage()
Out[8]:
Index 128
int64 40000
float64 40000
datetime64[ns] 40000
timedelta64[ns] 40000
complex128 80000
object 40000
bool 5000
categorical 9968
dtype: int64
# total memory usage of dataframe
In [9]: df.memory_usage().sum()
Out[9]: 295096
By default the memory usage of the DataFrame index is shown in the returned Series; the memory usage of the index can be suppressed by passing the index=False argument:
In [10]: df.memory_usage(index=False)
Out[10]:
int64 40000
float64 40000
datetime64[ns] 40000
timedelta64[ns] 40000
complex128 80000
object 40000
bool 5000
categorical 9968
dtype: int64
The memory usage displayed by the info() method utilizes the memory_usage() method to determine the memory usage of a DataFrame while also formatting the output in human-readable units (base-2 representation; i.e. 1KB = 1024 bytes).
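The reported figure can be reproduced by hand from the methods shown above (a small sketch):

total_bytes = df.memory_usage(deep=True).sum()  # includes the index
print(f"{total_bytes / 1024:.1f} KB")           # matches the 424.7 KB from df.info(memory_usage="deep")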
See also Categorical Memory Usage.
Using if/truth statements with pandas#
pandas follows the NumPy convention of raising an error when you try to convert something to a bool. This happens in an if-statement or when using the boolean operations: and, or, and not. It is not clear what the result of the following code should be:
>>> if pd.Series([False, True, False]):
... pass
Should it be True because it's not zero-length, or False because there are False values? It is unclear, so instead, pandas raises a ValueError:
In [11]: if pd.Series([False, True, False]):
....: print("I was true")
....:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[11], line 1
----> 1 if pd.Series([False, True, False]):
2 print("I was true")
File ~/work/pandas/pandas/pandas/core/generic.py:1527, in NDFrame.__nonzero__(self)
1525 @final
1526 def __nonzero__(self) -> NoReturn:
-> 1527 raise ValueError(
1528 f"The truth value of a {type(self).__name__} is ambiguous. "
1529 "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
1530 )
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
You need to explicitly choose what you want to do with the DataFrame, e.g. use any(), all() or empty (a property, not a method).

Alternatively, you might want to compare if the pandas object is None:
In [12]: if pd.Series([False, True, False]) is not None:
....: print("I was not None")
....:
I was not None
Below is how to check if any of the values are True:
In [13]: if pd.Series([False, True, False]).any():
....: print("I am any")
....:
I am any
To evaluate single-element pandas objects in a boolean context, use the method bool():
In [14]: pd.Series([True]).bool()
Out[14]: True
In [15]: pd.Series([False]).bool()
Out[15]: False
In [16]: pd.DataFrame([[True]]).bool()
Out[16]: True
In [17]: pd.DataFrame([[False]]).bool()
Out[17]: False
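The error message shown earlier also lists item(), which likewise extracts the lone value from a one-element Series (a minimal sketch; note that in recent pandas releases bool() has been deprecated in favor of this style):

pd.Series([True]).item()   # True
pd.Series([False]).item()  # False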
Bitwise boolean#
Bitwise boolean operators like == and != return a boolean Series, performing an element-wise comparison when compared to a scalar.
In [18]: s = pd.Series(range(5))
In [19]: s == 4
Out[19]:
0 False
1 False
2 False
3 False
4 True
dtype: bool
See boolean comparisons for more examples.
Using the in operator#
Using the Python in operator on a Series tests for membership in the index, not membership among the values.
In [20]: s = pd.Series(range(5), index=list("abcde"))
In [21]: 2 in s
Out[21]: False
In [22]: 'b' in s
Out[22]: True
If this behavior is surprising, keep in mind that using in on a Python dictionary tests keys, not values, and Series are dict-like.

To test for membership in the values, use the method isin():
In [23]: s.isin([2])
Out[23]:
a False
b False
c True
d False
e False
dtype: bool
In [24]: s.isin([2]).any()
Out[24]: True
For DataFrame, likewise, in applies to the column axis, testing for membership in the list of column names.
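A minimal sketch of both behaviors (the frame here is a throwaway example):

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
"a" in df                 # True: tests the column labels
1 in df                   # False: the values are not consulted
df.isin([1]).any().any()  # True: tests the values themselves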
Mutating with User Defined Function (UDF) methods#
This section applies to pandas methods that take a UDF. In particular, the methods .apply, .aggregate, .transform, and .filter.
It is a general rule in programming that one should not mutate a container while it is being iterated over. Mutation will invalidate the iterator, causing unexpected behavior. Consider the example:
In [25]: values = [0, 1, 2, 3, 4, 5]
In [26]: n_removed = 0
In [27]: for k, value in enumerate(values):
....: idx = k - n_removed
....: if value % 2 == 1:
....: del values[idx]
....: n_removed += 1
....: else:
....: values[idx] = value + 1
....:
In [28]: values
Out[28]: [1, 4, 5]
One probably would have expected that the result would be [1, 3, 5].

When using a pandas method that takes a UDF, internally pandas is often iterating over the DataFrame or other pandas object. Therefore, if the UDF mutates (changes) the DataFrame, unexpected behavior can arise.

Here is a similar example with DataFrame.apply():
In [29]: def f(s):
....: s.pop("a")
....: return s
....:
In [30]: df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
In [31]: try:
....: df.apply(f, axis="columns")
....: except Exception as err:
....: print(repr(err))
....:
KeyError('a')
To resolve this issue, one can make a copy so that the mutation does not apply to the container being iterated over.
In [32]: values = [0, 1, 2, 3, 4, 5]
In [33]: n_removed = 0
In [34]: for k, value in enumerate(values.copy()):
....: idx = k - n_removed
....: if value % 2 == 1:
....: del values[idx]
....: n_removed += 1
....: else:
....: values[idx] = value + 1
....:
In [35]: values
Out[35]: [1, 3, 5]
In [36]: def f(s):
....: s = s.copy()
....: s.pop("a")
....: return s
....:
In [37]: df = pd.DataFrame({"a": [1, 2, 3], 'b': [4, 5, 6]})
In [38]: df.apply(f, axis="columns")
Out[38]:
b
0 4
1 5
2 6
NaN, Integer NA values and NA type promotions#
Choice of NA representation#
For lack of NA (missing) support from the ground up in NumPy and Python in general, we were given the difficult choice between either:

- A masked array solution: an array of data and an array of boolean values indicating whether a value is there or is missing.
- Using a special sentinel value, bit pattern, or set of sentinel values to denote NA across the dtypes.
For many reasons we chose the latter. After years of production use it has proven, at least in my opinion, to be the best decision given the state of affairs in NumPy and Python in general. The special value NaN (Not-A-Number) is used everywhere as the NA value, and there are API functions DataFrame.isna() and DataFrame.notna() which can be used across the dtypes to detect NA values.
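For example (a minimal sketch):

s = pd.Series([1.0, np.nan, 3.0])
s.isna()   # False, True, False
s.notna()  # True, False, True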
However, it comes with a couple of trade-offs which I most certainly have not ignored.
Support for integer NA#
In the absence of high performance NA support being built into NumPy from the ground up, the primary casualty is the ability to represent NAs in integer arrays. For example:
In [39]: s = pd.Series([1, 2, 3, 4, 5], index=list("abcde"))
In [40]: s
Out[40]:
a 1
b 2
c 3
d 4
e 5
dtype: int64
In [41]: s.dtype
Out[41]: dtype('int64')
In [42]: s2 = s.reindex(["a", "b", "c", "f", "u"])
In [43]: s2
Out[43]:
a 1.0
b 2.0
c 3.0
f NaN
u NaN
dtype: float64
In [44]: s2.dtype
Out[44]: dtype('float64')
This trade-off is made largely for memory and performance reasons, and also so that the resulting Series continues to be “numeric”.

If you need to represent integers with possibly missing values, use one of the nullable-integer extension dtypes provided by pandas:
In [45]: s_int = pd.Series([1, 2, 3, 4, 5], index=list("abcde"), dtype=pd.Int64Dtype())
In [46]: s_int
Out[46]:
a 1
b 2
c 3
d 4
e 5
dtype: Int64
In [47]: s_int.dtype
Out[47]: Int64Dtype()
In [48]: s2_int = s_int.reindex(["a", "b", "c", "f", "u"])
In [49]: s2_int
Out[49]:
a 1
b 2
c 3
f <NA>
u <NA>
dtype: Int64
In [50]: s2_int.dtype
Out[50]: Int64Dtype()
See Nullable integer data type for more.
NA type promotions#
When introducing NAs into an existing Series or DataFrame via reindex() or some other means, boolean and integer types will be promoted to a different dtype in order to store the NAs. The promotions are summarized in this table:
| Typeclass | Promotion dtype for storing NAs |
|---|---|
| floating | no change |
| object | no change |
| integer | cast to float64 |
| boolean | cast to object |
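A minimal sketch of the two promoting cases:

s_bool = pd.Series([True, False])
s_bool.reindex([0, 1, 2]).dtype  # object: introducing NaN forces bool -> object

s_int = pd.Series([1, 2])
s_int.reindex([0, 1, 2]).dtype   # float64: introducing NaN forces int64 -> float64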
While this may seem like a heavy trade-off, I have found very few cases where this is an issue in practice, i.e. storing values greater than 2**53. Some explanation for the motivation is in the next section.
Why not make NumPy like R?#
Many people have suggested that NumPy should simply emulate the NA support present in the more domain-specific statistical programming language R. Part of the reason is the NumPy type hierarchy:
| Typeclass | Dtypes |
|---|---|
| numpy.floating | float16, float32, float64, float128 |
| numpy.integer | int8, int16, int32, int64 |
| numpy.unsignedinteger | uint8, uint16, uint32, uint64 |
| numpy.object_ | object_ |
| numpy.bool_ | bool_ |
| numpy.character | bytes_, str_ |
The R language, by contrast, only has a handful of built-in data types: integer, numeric (floating-point), character, and boolean. NA types are implemented by reserving special bit patterns for each type to be used as the missing value. While doing this with the full NumPy type hierarchy would be possible, it would be a more substantial trade-off (especially for the 8- and 16-bit data types) and implementation undertaking.
An alternate approach is that of using masked arrays. A masked array is an array of data with an associated boolean mask denoting whether each value should be considered NA or not. I am personally not in love with this approach as I feel that overall it places a fairly heavy burden on the user and the library implementer. Additionally, it exacts a fairly high performance cost when working with numerical data compared with the simple approach of using NaN. Thus, I have chosen the Pythonic “practicality beats purity” approach and traded integer NA capability for a much simpler approach of using a special value in float and object arrays to denote NA, and promoting integer arrays to floating when NAs must be introduced.
Differences with NumPy#
For Series and DataFrame objects, var() normalizes by N-1 to produce unbiased estimates of the population variance, while NumPy's numpy.var() normalizes by N, which measures the variance of the sample. Note that cov() normalizes by N-1 in both pandas and NumPy.
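A minimal sketch of the difference (ddof is the "delta degrees of freedom" argument both libraries accept):

s = pd.Series([1.0, 2.0, 3.0, 4.0])
s.var()                       # 1.666...: pandas divides by N-1 (ddof=1 by default)
np.var(s.to_numpy())          # 1.25:     NumPy divides by N (ddof=0 by default)
np.var(s.to_numpy(), ddof=1)  # 1.666...: matches pandas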
Thread-safety#
pandas is not 100% thread safe. The known issues relate to the copy() method. If you are doing a lot of copying of DataFrame objects shared among threads, we recommend holding locks inside the threads where the data copying occurs.
See this link for more information.
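A minimal sketch of that recommendation (the lock and worker names are illustrative, not a pandas API):

import threading

import pandas as pd

copy_lock = threading.Lock()  # one lock shared by all worker threads

def worker(shared_df: pd.DataFrame) -> None:
    # Guard the copy itself; afterwards each thread owns a private object.
    with copy_lock:
        local = shared_df.copy()
    ...  # operate on `local` freely, without further locking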
Byte-ordering issues#
Occasionally you may have to deal with data that were created on a machine with a different byte order than the one on which you are running Python. A common symptom of this issue is an error like:
Traceback
...
ValueError: Big-endian buffer not supported on little-endian compiler
To deal with this issue you should convert the underlying NumPy array to the native system byte order before passing it to Series or DataFrame constructors, using something similar to the following:
In [51]: x = np.array(list(range(10)), ">i4") # big endian
In [52]: newx = x.byteswap().newbyteorder() # force native byteorder
In [53]: s = pd.Series(newx)
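Note that ndarray.newbyteorder was removed in NumPy 2.0; on recent NumPy versions the equivalent conversion can be written through the dtype instead (a hedged sketch of the same idea):

newx = x.byteswap().view(x.dtype.newbyteorder())  # swap bytes, then reinterpret as native order
s = pd.Series(newx)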
See the NumPy documentation on byte order for more details.