Sparse data structures
Note
The SparsePanel class has been removed in 0.19.0.
We have implemented “sparse” versions of Series and DataFrame. These are not sparse in the typical “mostly 0” sense. Rather, you can view these objects as being “compressed”, where any data matching a specific value (NaN / missing value, though any value can be chosen) is omitted. A special SparseIndex object tracks where data has been “sparsified”. This will make much more sense with an example. All of the standard pandas data structures have a to_sparse method:
In [1]: ts = pd.Series(np.random.randn(10))
In [2]: ts[2:-2] = np.nan
In [3]: sts = ts.to_sparse()
In [4]: sts
Out[4]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
The to_sparse method takes a kind argument (for the sparse index, see below) and a fill_value. So if we had a mostly zero Series, we could convert it to sparse with fill_value=0:
In [5]: ts.fillna(0).to_sparse(fill_value=0)
Out[5]:
0 0.469112
1 -0.282863
2 0.000000
3 0.000000
4 0.000000
5 0.000000
6 0.000000
7 0.000000
8 -0.861849
9 -2.104569
dtype: float64
BlockIndex
Block locations: array([0, 8], dtype=int32)
Block lengths: array([2, 2], dtype=int32)
The sparse objects exist for memory efficiency reasons. Suppose you had a large, mostly NA DataFrame:
In [6]: df = pd.DataFrame(np.random.randn(10000, 4))
In [7]: df.iloc[:9999] = np.nan
In [8]: sdf = df.to_sparse()
In [9]: sdf
Out[9]:
0 1 2 3
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
4 NaN NaN NaN NaN
5 NaN NaN NaN NaN
6 NaN NaN NaN NaN
... ... ... ... ...
9993 NaN NaN NaN NaN
9994 NaN NaN NaN NaN
9995 NaN NaN NaN NaN
9996 NaN NaN NaN NaN
9997 NaN NaN NaN NaN
9998 NaN NaN NaN NaN
9999 0.280249 -1.648493 1.490865 -0.890819
[10000 rows x 4 columns]
In [10]: sdf.density
Out[10]: 0.0001
As you can see, the density (% of values that have not been “compressed”) is extremely low. These sparse objects take up much less memory on disk (pickled) and in the Python interpreter. Functionally, their behavior should be nearly identical to that of their dense counterparts.
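As a rough check of the memory claim (a sketch; exact numbers vary by pandas and NumPy version), you can compare the pickled sizes of the dense and sparse frames from above:
import pickle

# df is the mostly-NaN dense frame from above and sdf its sparse counterpart
dense_bytes = len(pickle.dumps(df))
sparse_bytes = len(pickle.dumps(sdf))

# the sparse pickle should be only a small fraction of the dense one
print(dense_bytes, sparse_bytes)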
Any sparse object can be converted back to the standard dense form by calling to_dense:
In [11]: sts.to_dense()
Out[11]:
0 0.469112
1 -0.282863
2 NaN
3 NaN
4 NaN
5 NaN
6 NaN
7 NaN
8 -0.861849
9 -2.104569
dtype: float64
SparseArray
SparseArray is the base layer for all of the sparse indexed data structures. It is a 1-dimensional ndarray-like object storing only values distinct from the fill_value:
In [12]: arr = np.random.randn(10)
In [13]: arr[2:5] = np.nan; arr[7:8] = np.nan
In [14]: sparr = pd.SparseArray(arr)
In [15]: sparr
Out[15]:
[-1.95566352972, -1.6588664276, nan, nan, nan, 1.15893288864, 0.145297113733, nan, 0.606027190513, 1.33421134013]
Fill: nan
IntIndex
Indices: array([0, 1, 5, 6, 8, 9], dtype=int32)
Like the indexed objects (SparseSeries, SparseDataFrame), a SparseArray can be converted back to a regular ndarray by calling to_dense:
In [16]: sparr.to_dense()
Out[16]:
array([-1.9557, -1.6589, nan, nan, nan, 1.1589, 0.1453,
nan, 0.606 , 1.3342])
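The fill_value does not have to be NaN. As a small sketch, a mostly-zero array can be compressed against 0 instead:
# a mostly-zero array compressed against fill_value=0;
# only the two non-zero values are actually stored
sparr_zero = pd.SparseArray([0, 0, 1, 0, 2, 0], fill_value=0)
print(sparr_zero)             # repr shows Fill: 0 and the stored positions
print(sparr_zero.to_dense())  # back to a regular ndarray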
SparseList
The SparseList class has been deprecated and will be removed in a future version. See the docs of a previous version for documentation on SparseList.
SparseIndex objects
Two kinds of SparseIndex are implemented, block and integer. We recommend using block as it’s more memory efficient. The integer format keeps an array of all of the locations where the data are not equal to the fill value. The block format tracks only the locations and sizes of blocks of data.
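A quick sketch of the difference, reusing ts from the first example (6 of its 10 values are NaN); the repr of each result shows the underlying index type:
# block kind: stores (location, length) pairs for the runs of kept values
sts_block = ts.to_sparse(kind='block')
# integer kind: stores one position per kept value
sts_int = ts.to_sparse(kind='integer')

print(sts_block)  # repr ends with a BlockIndex (block locations and lengths)
print(sts_int)    # repr ends with an IntIndex (explicit positions)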
Sparse Dtypes
Sparse data should have the same dtype as its dense representation. Currently, float64, int64 and bool dtypes are supported. The default fill_value depends on the original dtype:
float64: np.nan
int64: 0
bool: False
In [17]: s = pd.Series([1, np.nan, np.nan])
In [18]: s
Out[18]:
0 1.0
1 NaN
2 NaN
dtype: float64
In [19]: s.to_sparse()
Out[19]:
0 1.0
1 NaN
2 NaN
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [20]: s = pd.Series([1, 0, 0])
In [21]: s
Out[21]:
0 1
1 0
2 0
dtype: int64
In [22]: s.to_sparse()
Out[22]:
0 1
1 0
2 0
dtype: int64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [23]: s = pd.Series([True, False, True])
In [24]: s
Out[24]:
0 True
1 False
2 True
dtype: bool
In [25]: s.to_sparse()
Out[25]:
0 True
1 False
2 True
dtype: bool
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 1], dtype=int32)
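These defaults can be checked directly; a small sketch, assuming the fill_value attribute on the resulting sparse series:
# the default fill_value follows the dtype of the dense data
print(pd.Series([1.0, np.nan]).to_sparse().fill_value)   # nan
print(pd.Series([1, 0]).to_sparse().fill_value)          # 0
print(pd.Series([True, False]).to_sparse().fill_value)   # False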
You can change the dtype using .astype(); the result is also sparse. Note that .astype() is also applied to the fill_value, to keep the dense representation consistent.
In [26]: s = pd.Series([1, 0, 0, 0, 0])
In [27]: s
Out[27]:
0 1
1 0
2 0
3 0
4 0
dtype: int64
In [28]: ss = s.to_sparse()
In [29]: ss
Out[29]:
0 1
1 0
2 0
3 0
4 0
dtype: int64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [30]: ss.astype(np.float64)
Out[30]:
0 1.0
1 0.0
2 0.0
3 0.0
4 0.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
It raises a ValueError if any value cannot be coerced to the specified dtype.
In [1]: ss = pd.Series([1, np.nan, np.nan]).to_sparse()
In [2]: ss
Out[2]:
0 1.0
1 NaN
2 NaN
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([1], dtype=int32)
In [3]: ss.astype(np.int64)
ValueError: unable to coerce current fill_value nan to int64 dtype
Sparse Calculation
You can apply NumPy ufuncs to SparseArray
and get a SparseArray
as a result.
In [31]: arr = pd.SparseArray([1., np.nan, np.nan, -2., np.nan])
In [32]: np.abs(arr)
Out[32]:
[1.0, nan, nan, 2.0, nan]
Fill: nan
IntIndex
Indices: array([0, 3], dtype=int32)
The ufunc is also applied to fill_value. This is needed to get the correct dense result.
In [33]: arr = pd.SparseArray([1., -1, -1, -2., -1], fill_value=-1)
In [34]: np.abs(arr)
Out[34]:
[1.0, 1, 1, 2.0, 1]
Fill: 1
IntIndex
Indices: array([0, 3], dtype=int32)
In [35]: np.abs(arr).to_dense()
Out[35]: array([ 1., 1., 1., 2., 1.])
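As a sanity check (a sketch using arr from just above), applying the ufunc to the sparse array and then densifying matches applying it to the dense array directly:
# both paths should yield array([ 1.,  1.,  1.,  2.,  1.])
sparse_then_dense = np.abs(arr).to_dense()
dense_then_abs = np.abs(arr.to_dense())
print(np.array_equal(sparse_then_dense, dense_then_abs))  # True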
Interaction with scipy.sparse
Experimental API for transforming between sparse pandas and scipy.sparse structures.
A SparseSeries.to_coo() method is implemented for transforming a SparseSeries indexed by a MultiIndex to a scipy.sparse.coo_matrix. The method requires a MultiIndex with two or more levels.
In [36]: s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
In [37]: s.index = pd.MultiIndex.from_tuples([(1, 2, 'a', 0),
....: (1, 2, 'a', 1),
....: (1, 1, 'b', 0),
....: (1, 1, 'b', 1),
....: (2, 1, 'b', 0),
....: (2, 1, 'b', 1)],
....: names=['A', 'B', 'C', 'D'])
....:
In [38]: s
Out[38]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
# SparseSeries
In [39]: ss = s.to_sparse()
In [40]: ss
Out[40]:
A B C D
1 2 a 0 3.0
1 NaN
1 b 0 1.0
1 3.0
2 1 b 0 NaN
1 NaN
dtype: float64
BlockIndex
Block locations: array([0, 2], dtype=int32)
Block lengths: array([1, 2], dtype=int32)
In the example below, we transform the SparseSeries
to a sparse representation of a 2-d array by specifying that the first and second MultiIndex
levels define labels for the rows and the third and fourth levels define labels for the columns. We also specify that the column and row labels should be sorted in the final sparse representation.
In [41]: A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
....: column_levels=['C', 'D'],
....: sort_labels=True)
....:
In [42]: A
Out[42]:
<3x4 sparse matrix of type '<type 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [43]: A.todense()
Out[43]:
matrix([[ 0., 0., 1., 3.],
[ 3., 0., 0., 0.],
[ 0., 0., 0., 0.]])
In [44]: rows
Out[44]: [(1, 1), (1, 2), (2, 1)]
In [45]: columns
Out[45]: [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
Specifying different row and column labels (and not sorting them) yields a different sparse matrix:
In [46]: A, rows, columns = ss.to_coo(row_levels=['A', 'B', 'C'],
....: column_levels=['D'],
....: sort_labels=False)
....:
In [47]: A
Out[47]:
<3x2 sparse matrix of type '<type 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [48]: A.todense()
Out[48]:
matrix([[ 3., 0.],
[ 1., 3.],
[ 0., 0.]])
In [49]: rows
Out[49]: [(1, 2, 'a'), (1, 1, 'b'), (2, 1, 'b')]
In [50]: columns
Out[50]: [0, 1]
A convenience method SparseSeries.from_coo() is implemented for creating a SparseSeries from a scipy.sparse.coo_matrix.
In [51]: from scipy import sparse
In [52]: A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])),
....: shape=(3, 4))
....:
In [53]: A
Out[53]:
<3x4 sparse matrix of type '<type 'numpy.float64'>'
with 3 stored elements in COOrdinate format>
In [54]: A.todense()
Out[54]:
matrix([[ 0., 0., 1., 2.],
[ 3., 0., 0., 0.],
[ 0., 0., 0., 0.]])
The default behaviour (with dense_index=False) simply returns a SparseSeries containing only the non-null entries.
In [55]: ss = pd.SparseSeries.from_coo(A)
In [56]: ss
Out[56]:
0 2 1.0
3 2.0
1 0 3.0
dtype: float64
BlockIndex
Block locations: array([0], dtype=int32)
Block lengths: array([3], dtype=int32)
Specifying dense_index=True will result in an index that is the Cartesian product of the row and column coordinates of the matrix. Note that this will consume a significant amount of memory (relative to dense_index=False) if the sparse matrix is large (and sparse) enough.
In [57]: ss_dense = pd.SparseSeries.from_coo(A, dense_index=True)
In [58]: ss_dense
Out[58]:
0 0 NaN
1 NaN
2 1.0
3 2.0
1 0 3.0
1 NaN
2 NaN
3 NaN
2 0 NaN
1 NaN
2 NaN
3 NaN
dtype: float64
BlockIndex
Block locations: array([2], dtype=int32)
Block lengths: array([3], dtype=int32)
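The size difference is easy to see here (a sketch): the default result holds one entry per stored value, while the dense index covers every (row, column) pair of the 3x4 matrix:
# ss keeps only the 3 stored values; ss_dense covers all 3 * 4 coordinates
print(len(ss))        # 3
print(len(ss_dense))  # 12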