Indexing and Selecting Data

The axis labeling information in pandas objects serves many purposes:

  • Identifies data (i.e. provides metadata) using known indicators, important for analysis, visualization, and interactive console display
  • Enables automatic and explicit data alignment
  • Allows intuitive getting and setting of subsets of the data set

In this section / chapter, we will focus on the final point: namely, how to slice, dice, and generally get and set subsets of pandas objects. The primary focus will be on Series and DataFrame as they have received more development attention in this area. Expect more work to be invested in higher-dimensional data structures (including Panel) in the future, especially in label-based advanced indexing.

Note

The Python and NumPy indexing operators [] and attribute operator . provide quick and easy access to pandas data structures across a wide range of use cases. This makes interactive work intuitive, as there’s little new to learn if you already know how to deal with Python dictionaries and NumPy arrays. However, since the type of the data to be accessed isn’t known in advance, directly using standard operators has some optimization limits. For production code, we recommend that you take advantage of the optimized pandas data access methods exposed in this chapter.

In addition, whether a copy or a reference is returned for a selection operation may depend on the context. See Returning a View versus a Copy.

See the cookbook for some advanced strategies.

Choice

Starting in 0.11.0, object selection has had a number of user-requested additions in order to support more explicit location based indexing. Pandas now supports three types of multi-axis indexing.

  • .loc is strictly label based; it will raise KeyError when the items are not found. Allowed inputs are:

    • A single label, e.g. 5 or 'a' (note that 5 is interpreted as a label of the index, not as an integer position along the index)
    • A list or array of labels ['a', 'b', 'c']
    • A slice object with labels 'a':'f' (note that contrary to usual python slices, both the start and the stop are included!)
    • A boolean array

    See more at Selection by Label

  • .iloc is strictly integer position based (from 0 to length-1 of the axis); it will raise IndexError when the requested indices are out of bounds. Allowed inputs are:

    • An integer e.g. 5
    • A list or array of integers [4, 3, 0]
    • A slice object with ints 1:7

    See more at Selection by Position

  • .ix supports mixed integer and label based access. It is primarily label based, but will fall back to integer positional access. .ix is the most general and will support any of the inputs to .loc and .iloc, as well as support for floating point label schemes. .ix is especially useful when dealing with mixed positional and label based hierarchical indexes.

    Since using integer slices with .ix has different behavior depending on whether the slice is interpreted as position based or label based, it’s usually better to be explicit and use .iloc or .loc.

    See more at Advanced Indexing, Advanced Hierarchical and Fallback Indexing

Getting values from an object with multi-axes selection uses the following notation (using .loc as an example, but this applies to .iloc and .ix as well). Any of the axes accessors may be the null slice :. Axes left out of the specification are assumed to be :, e.g. p.loc['a'] is equivalent to p.loc['a', :, :].

Object Type   Indexers
Series        s.loc[indexer]
DataFrame     df.loc[row_indexer, column_indexer]
Panel         p.loc[item_indexer, major_indexer, minor_indexer]
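
For illustration, here is a minimal sketch comparing the three accessors on a small Series (the variable name s_demo is just for this example; the values noted in the comments follow from the index used below):

s_demo = Series(np.arange(3), index=[49, 48, 47])
s_demo.loc[49]    # label lookup: the value stored at label 49 (here the first value)
s_demo.iloc[0]    # positional lookup: the first value, regardless of labels
s_demo.ix[49]     # tries the label first; would fall back to position only if 49 were not a label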

Deprecations

Beginning with version 0.11.0, it’s recommended that you transition away from the following methods as they may be deprecated in future versions.

  • irow
  • icol
  • iget_value

See the section Selection by Position for substitutes.
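
As a rough guide, for a hypothetical DataFrame df and Series s the positional substitutes are as follows (a sketch; the deprecated methods shown here are the 0.11-era ones listed above):

df.irow(1)         # -> df.iloc[1]        (row by position)
df.icol(1)         # -> df.iloc[:, 1]     (column by position)
s.iget_value(1)    # -> s.iloc[1]         (scalar from a Series by position)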

Basics

As mentioned when introducing the data structures in the last section, the primary function of indexing with [] (a.k.a. __getitem__ for those familiar with implementing class behavior in Python) is selecting out lower-dimensional slices. Thus,

Object Type   Selection          Return Value Type
Series        series[label]      scalar value
DataFrame     frame[colname]     Series corresponding to colname
Panel         panel[itemname]    DataFrame corresponding to the itemname

Here we construct a simple time series data set to use for illustrating the indexing functionality:

In [1]: dates = date_range('1/1/2000', periods=8)

In [2]: df = DataFrame(randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])

In [3]: df

                   A         B         C         D
2000-01-01  0.469112 -0.282863 -1.509059 -1.135632
2000-01-02  1.212112 -0.173215  0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929  1.071804
2000-01-04  0.721555 -0.706771 -1.039575  0.271860
2000-01-05 -0.424972  0.567020  0.276232 -1.087401
2000-01-06 -0.673690  0.113648 -1.478427  0.524988
2000-01-07  0.404705  0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312  0.844885

In [4]: panel = Panel({'one' : df, 'two' : df - df.mean()})

In [5]: panel

<class 'pandas.core.panel.Panel'>
Dimensions: 2 (items) x 8 (major_axis) x 4 (minor_axis)
Items axis: one to two
Major_axis axis: 2000-01-01 00:00:00 to 2000-01-08 00:00:00
Minor_axis axis: A to D

Note

None of the indexing functionality is time series specific unless specifically stated.

Thus, as per above, we have the most basic indexing using []:

In [6]: s = df['A']

In [7]: s[dates[5]]
-0.67368970808837059

In [8]: panel['two']

                   A         B         C         D
2000-01-01  0.409571  0.113086 -0.610826 -0.936507
2000-01-02  1.152571  0.222735  1.017442 -0.845111
2000-01-03 -0.921390 -1.708620  0.403304  1.270929
2000-01-04  0.662014 -0.310822 -0.141342  0.470985
2000-01-05 -0.484513  0.962970  1.174465 -0.888276
2000-01-06 -0.733231  0.509598 -0.580194  0.724113
2000-01-07  0.345164  0.972995 -0.816769 -0.840143
2000-01-08 -0.430188 -0.761943 -0.446079  1.044010

You can pass a list of columns to [] to select columns in that order. If a column is not contained in the DataFrame, an exception will be raised. Multiple columns can also be set in this manner:

In [9]: df

                   A         B         C         D
2000-01-01  0.469112 -0.282863 -1.509059 -1.135632
2000-01-02  1.212112 -0.173215  0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929  1.071804
2000-01-04  0.721555 -0.706771 -1.039575  0.271860
2000-01-05 -0.424972  0.567020  0.276232 -1.087401
2000-01-06 -0.673690  0.113648 -1.478427  0.524988
2000-01-07  0.404705  0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312  0.844885

In [10]: df[['B', 'A']] = df[['A', 'B']]

In [11]: df

                   A         B         C         D
2000-01-01 -0.282863  0.469112 -1.509059 -1.135632
2000-01-02 -0.173215  1.212112  0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929  1.071804
2000-01-04 -0.706771  0.721555 -1.039575  0.271860
2000-01-05  0.567020 -0.424972  0.276232 -1.087401
2000-01-06  0.113648 -0.673690 -1.478427  0.524988
2000-01-07  0.577046  0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312  0.844885

You may find this useful for applying a transform (in-place) to a subset of the columns.
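
For instance, a sketch of such a transform applied only to columns A and B (standardization is just an illustrative choice, and a copy is used here so as not to alter df for the examples that follow):

# standardize columns A and B only, leaving C and D untouched
df_t = df.copy()
df_t[['A', 'B']] = (df_t[['A', 'B']] - df_t[['A', 'B']].mean()) / df_t[['A', 'B']].std()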

Attribute Access

You may access a column on a DataFrame, and an item on a Panel, directly as an attribute:

In [12]: df.A

2000-01-01   -0.282863
2000-01-02   -0.173215
2000-01-03   -2.104569
2000-01-04   -0.706771
2000-01-05    0.567020
2000-01-06    0.113648
2000-01-07    0.577046
2000-01-08   -1.157892
Freq: D, Name: A, dtype: float64

In [13]: panel.one

                   A         B         C         D
2000-01-01  0.469112 -0.282863 -1.509059 -1.135632
2000-01-02  1.212112 -0.173215  0.119209 -1.044236
2000-01-03 -0.861849 -2.104569 -0.494929  1.071804
2000-01-04  0.721555 -0.706771 -1.039575  0.271860
2000-01-05 -0.424972  0.567020  0.276232 -1.087401
2000-01-06 -0.673690  0.113648 -1.478427  0.524988
2000-01-07  0.404705  0.577046 -1.715002 -1.039268
2000-01-08 -0.370647 -1.157892 -1.344312  0.844885

If you are using the IPython environment, you may also use tab-completion to see these accessible attributes.
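
Keep in mind that attribute access only works when the column (or item) name is a valid Python identifier and does not collide with an existing attribute or method; otherwise you must use []. A small sketch (df_attr is a throwaway name for this example):

df_attr = DataFrame({'my col': [1, 2], 'mean': [3, 4]})
df_attr['my col']   # fine
# df_attr.my col    # invalid syntax -- the name is not an identifier
df_attr.mean        # this is the DataFrame.mean method, not the 'mean' column
df_attr['mean']     # use [] to get the column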

Slicing ranges

The most robust and consistent way of slicing ranges along arbitrary axes is described in the Selection by Position section detailing the .iloc method. For now, we explain the semantics of slicing using the [] operator.

With Series, the syntax works exactly as with an ndarray, returning a slice of the values and the corresponding labels:

In [14]: s[:5]

2000-01-01   -0.282863
2000-01-02   -0.173215
2000-01-03   -2.104569
2000-01-04   -0.706771
2000-01-05    0.567020
Freq: D, Name: A, dtype: float64

In [15]: s[::2]

2000-01-01   -0.282863
2000-01-03   -2.104569
2000-01-05    0.567020
2000-01-07    0.577046
Freq: 2D, Name: A, dtype: float64

In [16]: s[::-1]

2000-01-08   -1.157892
2000-01-07    0.577046
2000-01-06    0.113648
2000-01-05    0.567020
2000-01-04   -0.706771
2000-01-03   -2.104569
2000-01-02   -0.173215
2000-01-01   -0.282863
Freq: -1D, Name: A, dtype: float64

Note that setting works as well:

In [17]: s2 = s.copy()

In [18]: s2[:5] = 0

In [19]: s2

2000-01-01    0.000000
2000-01-02    0.000000
2000-01-03    0.000000
2000-01-04    0.000000
2000-01-05    0.000000
2000-01-06    0.113648
2000-01-07    0.577046
2000-01-08   -1.157892
Freq: D, Name: A, dtype: float64

With DataFrame, slicing inside of [] slices the rows. This is provided largely as a convenience since it is such a common operation.

In [20]: df[:3]

                   A         B         C         D
2000-01-01 -0.282863  0.469112 -1.509059 -1.135632
2000-01-02 -0.173215  1.212112  0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929  1.071804

In [21]: df[::-1]

                   A         B         C         D
2000-01-08 -1.157892 -0.370647 -1.344312  0.844885
2000-01-07  0.577046  0.404705 -1.715002 -1.039268
2000-01-06  0.113648 -0.673690 -1.478427  0.524988
2000-01-05  0.567020 -0.424972  0.276232 -1.087401
2000-01-04 -0.706771  0.721555 -1.039575  0.271860
2000-01-03 -2.104569 -0.861849 -0.494929  1.071804
2000-01-02 -0.173215  1.212112  0.119209 -1.044236
2000-01-01 -0.282863  0.469112 -1.509059 -1.135632

Selection By Label

Pandas provides a suite of methods in order to have purely label based indexing. This is a strict inclusion based protocol: ALL of the labels you ask for must be in the index, or a KeyError will be raised! When slicing, the start bound is included, AND the stop bound is included. Integers are valid labels, but they refer to the label and not the position.

The .loc attribute is the primary access method. The following are valid inputs:

  • A single label, e.g. 5 or 'a' (note that 5 is interpreted as a label of the index, not as an integer position along the index)
  • A list or array of labels ['a', 'b', 'c']
  • A slice object with labels 'a':'f' (note that contrary to usual python slices, both the start and the stop are included!)
  • A boolean array
In [22]: s1 = Series(np.random.randn(6),index=list('abcdef'))

In [23]: s1

a    1.075770
b   -0.109050
c    1.643563
d   -1.469388
e    0.357021
f   -0.674600
dtype: float64

In [24]: s1.loc['c':]

c    1.643563
d   -1.469388
e    0.357021
f   -0.674600
dtype: float64

In [25]: s1.loc['b']
-0.10904997528022223

Note that setting works as well:

In [26]: s1.loc['c':] = 0

In [27]: s1

a    1.07577
b   -0.10905
c    0.00000
d    0.00000
e    0.00000
f    0.00000
dtype: float64

With a DataFrame

In [28]: df1 = DataFrame(np.random.randn(6,4),
   ....:                 index=list('abcdef'),
   ....:                 columns=list('ABCD'))
   ....: 

In [29]: df1

          A         B         C         D
a -1.776904 -0.968914 -1.294524  0.413738
b  0.276662 -0.472035 -0.013960 -0.362543
c -0.006154 -0.923061  0.895717  0.805244
d -1.206412  2.565646  1.431256  1.340309
e -1.170299 -0.226169  0.410835  0.813850
f  0.132003 -0.827317 -0.076467 -1.187678

In [30]: df1.loc[['a','b','d'],:]

          A         B         C         D
a -1.776904 -0.968914 -1.294524  0.413738
b  0.276662 -0.472035 -0.013960 -0.362543
d -1.206412  2.565646  1.431256  1.340309

Accessing via label slices

In [31]: df1.loc['d':,'A':'C']

          A         B         C
d -1.206412  2.565646  1.431256
e -1.170299 -0.226169  0.410835
f  0.132003 -0.827317 -0.076467

For getting a cross section using a label (equiv to df.xs('a'))

In [32]: df1.loc['a']

A   -1.776904
B   -0.968914
C   -1.294524
D    0.413738
Name: a, dtype: float64

For getting values with a boolean array

In [33]: df1.loc['a']>0

A    False
B    False
C    False
D     True
Name: a, dtype: bool

In [34]: df1.loc[:,df1.loc['a']>0]

          D
a  0.413738
b -0.362543
c  0.805244
d  1.340309
e  0.813850
f -1.187678

For getting a value explicitly (equiv to deprecated df.get_value('a','A'))

# this is also equivalent to ``df1.at['a','A']``
In [35]: df1.loc['a','A']
-1.7769037169718671

Selection By Position

Pandas provides a suite of methods in order to get purely integer based indexing. The semantics closely follow Python and NumPy slicing: indexing is 0-based, and when slicing, the start bound is included while the upper bound is excluded. Trying to use a non-integer, even a valid label, will raise an IndexError.

The .iloc attribute is the primary access method. The following are valid inputs:

  • An integer e.g. 5
  • A list or array of integers [4, 3, 0]
  • A slice object with ints 1:7
In [36]: s1 = Series(np.random.randn(5),index=range(0,10,2))

In [37]: s1

0    1.130127
2   -1.436737
4   -1.413681
6    1.607920
8    1.024180
dtype: float64

In [38]: s1.iloc[:3]

0    1.130127
2   -1.436737
4   -1.413681
dtype: float64

In [39]: s1.iloc[3]
1.6079204745847746

Note that setting works as well:

In [40]: s1.iloc[:3] = 0

In [41]: s1

0    0.00000
2    0.00000
4    0.00000
6    1.60792
8    1.02418
dtype: float64

With a DataFrame

In [42]: df1 = DataFrame(np.random.randn(6,4),
   ....:                 index=range(0,12,2),
   ....:                 columns=range(0,8,2))
   ....: 

In [43]: df1

           0         2         4         6
0   0.569605  0.875906 -2.211372  0.974466
2  -2.006747 -0.410001 -0.078638  0.545952
4  -1.219217 -1.226825  0.769804 -1.281247
6  -0.727707 -0.121306 -0.097883  0.695775
8   0.341734  0.959726 -1.110336 -0.619976
10  0.149748 -0.732339  0.687738  0.176444

Select via integer slicing

In [44]: df1.iloc[:3]

          0         2         4         6
0  0.569605  0.875906 -2.211372  0.974466
2 -2.006747 -0.410001 -0.078638  0.545952
4 -1.219217 -1.226825  0.769804 -1.281247

In [45]: df1.iloc[1:5,2:4]

          4         6
2 -0.078638  0.545952
4  0.769804 -1.281247
6 -0.097883  0.695775
8 -1.110336 -0.619976

Select via integer list

In [46]: df1.iloc[[1,3,5],[1,3]]

           2         6
2  -0.410001  0.545952
6  -0.121306  0.695775
10 -0.732339  0.176444

For slicing rows explicitly (equiv to deprecated df.irow(slice(1,3))).

In [47]: df1.iloc[1:3,:]

          0         2         4         6
2 -2.006747 -0.410001 -0.078638  0.545952
4 -1.219217 -1.226825  0.769804 -1.281247

For slicing columns explicitly (equiv to deprecated df.icol(slice(1,3))).

In [48]: df1.iloc[:,1:3]

           2         4
0   0.875906 -2.211372
2  -0.410001 -0.078638
4  -1.226825  0.769804
6  -0.121306 -0.097883
8   0.959726 -1.110336
10 -0.732339  0.687738

For getting a scalar via integer position (equiv to deprecated df.get_value(1,1))

# this is also equivalent to ``df1.iat[1,1]``
In [49]: df1.iloc[1,1]
-0.41000056806065832

For getting a cross section using an integer position (equiv to df.xs(1))

In [50]: df1.iloc[1]

0   -2.006747
2   -0.410001
4   -0.078638
6    0.545952
Name: 2, dtype: float64

There is one significant departure from standard Python/NumPy slicing semantics: Python and NumPy allow slicing past the end of an array without an associated error.

# these are allowed in python/numpy.
In [51]: x = list('abcdef')

In [52]: x[4:10]
['e', 'f']

In [53]: x[8:10]
[]

Pandas will detect this and raise IndexError, rather than return an empty structure.

>>> df.iloc[:,3:6]
IndexError: out-of-bounds on slice (end)

Fast scalar value getting and setting

Since indexing with [] must handle a lot of cases (single-label access, slicing, boolean indexing, etc.), it has a bit of overhead in order to figure out what you’re asking for. If you only want to access a scalar value, the fastest way is to use the at and iat methods, which are implemented on all of the data structures.

Similarly to loc, at provides label based scalar lookups, while iat provides integer based lookups analogously to iloc.

In [54]: s.iat[5]
0.1136484096888855

In [55]: df.at[dates[5], 'A']
0.1136484096888855

In [56]: df.iat[3, 0]
-0.70677113363008448

You can also set using these same indexers. These have the additional capability of enlarging an object. This method always returns a reference to the object it modified, which in the case of enlargement, will be a new object:

In [57]: df.at[dates[5], 'E'] = 7

In [58]: df.iat[3, 0] = 7

Boolean indexing

Another common operation is the use of boolean vectors to filter the data. The operators are: | for or, & for and, and ~ for not. These must be grouped by using parentheses.

Using a boolean vector to index a Series works exactly as in a numpy ndarray:

In [59]: s[s > 0]

2000-01-04    7.000000
2000-01-05    0.567020
2000-01-06    0.113648
2000-01-07    0.577046
Freq: D, Name: A, dtype: float64

In [60]: s[(s < 0) & (s > -0.5)]

2000-01-01   -0.282863
2000-01-02   -0.173215
Freq: D, Name: A, dtype: float64

In [61]: s[(s < -1) | (s > 1 )]

2000-01-03   -2.104569
2000-01-04    7.000000
2000-01-08   -1.157892
Name: A, dtype: float64

In [62]: s[~(s < 0)]

2000-01-04    7.000000
2000-01-05    0.567020
2000-01-06    0.113648
2000-01-07    0.577046
Freq: D, Name: A, dtype: float64

You may select rows from a DataFrame using a boolean vector the same length as the DataFrame’s index (for example, something derived from one of the columns of the DataFrame):

In [63]: df[df['A'] > 0]

                   A         B         C         D
2000-01-04  7.000000  0.721555 -1.039575  0.271860
2000-01-05  0.567020 -0.424972  0.276232 -1.087401
2000-01-06  0.113648 -0.673690 -1.478427  0.524988
2000-01-07  0.577046  0.404705 -1.715002 -1.039268

Consider the isin method of Series, which returns a boolean vector that is true wherever the Series elements exist in the passed list. This allows you to select rows where one or more columns have values you want:

In [64]: df2 = DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
   ....:                  'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
   ....:                  'c' : randn(7)})
   ....: 

In [65]: df2[df2['a'].isin(['one', 'two'])]

     a  b         c
0  one  x  0.403310
1  one  y -0.154951
2  two  y  0.301624
4  two  y -1.369849
5  one  x -0.954208

List comprehensions and the map method of Series can also be used to produce more complex criteria:

# only want 'two' or 'three'
In [66]: criterion = df2['a'].map(lambda x: x.startswith('t'))

In [67]: df2[criterion]

       a  b         c
2    two  y  0.301624
3  three  x -2.179861
4    two  y -1.369849

# equivalent but slower
In [68]: df2[[x.startswith('t') for x in df2['a']]]

       a  b         c
2    two  y  0.301624
3  three  x -2.179861
4    two  y -1.369849

# Multiple criteria
In [69]: df2[criterion & (df2['b'] == 'x')]

       a  b         c
3  three  x -2.179861

Note that with the choice methods Selection by Label, Selection by Position, and Advanced Indexing you may select along more than one axis using boolean vectors combined with other indexing expressions.

In [70]: df2.loc[criterion & (df2['b'] == 'x'),'b':'c']

   b         c
3  x -2.179861

Where and Masking

Selecting values from a Series with a boolean vector generally returns a subset of the data. To guarantee that selection output has the same shape as the original data, you can use the where method in Series and DataFrame.

To return only the selected rows

In [71]: s[s > 0]

2000-01-04    7.000000
2000-01-05    0.567020
2000-01-06    0.113648
2000-01-07    0.577046
Freq: D, Name: A, dtype: float64

To return a Series of the same shape as the original

In [72]: s.where(s > 0)

2000-01-01         NaN
2000-01-02         NaN
2000-01-03         NaN
2000-01-04    7.000000
2000-01-05    0.567020
2000-01-06    0.113648
2000-01-07    0.577046
2000-01-08         NaN
Freq: D, Name: A, dtype: float64

Selecting values from a DataFrame with a boolean criterion now also preserves input data shape. where is used under the hood as the implementation. The equivalent is df.where(df < 0).

In [73]: df[df < 0]

                   A         B         C         D
2000-01-01 -0.282863       NaN -1.509059 -1.135632
2000-01-02 -0.173215       NaN       NaN -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929       NaN
2000-01-04       NaN       NaN -1.039575       NaN
2000-01-05       NaN -0.424972       NaN -1.087401
2000-01-06       NaN -0.673690 -1.478427       NaN
2000-01-07       NaN       NaN -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312       NaN

In addition, where takes an optional other argument for replacement of values where the condition is False, in the returned copy.

In [74]: df.where(df < 0, -df)

                   A         B         C         D
2000-01-01 -0.282863 -0.469112 -1.509059 -1.135632
2000-01-02 -0.173215 -1.212112 -0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929 -1.071804
2000-01-04 -7.000000 -0.721555 -1.039575 -0.271860
2000-01-05 -0.567020 -0.424972 -0.276232 -1.087401
2000-01-06 -0.113648 -0.673690 -1.478427 -0.524988
2000-01-07 -0.577046 -0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312 -0.844885

You may wish to set values based on some boolean criteria. This can be done intuitively like so:

In [75]: s2 = s.copy()

In [76]: s2[s2 < 0] = 0

In [77]: s2

2000-01-01    0.000000
2000-01-02    0.000000
2000-01-03    0.000000
2000-01-04    7.000000
2000-01-05    0.567020
2000-01-06    0.113648
2000-01-07    0.577046
2000-01-08    0.000000
Freq: D, Name: A, dtype: float64

In [78]: df2 = df.copy()

In [79]: df2[df2 < 0] = 0

In [80]: df2

                   A         B         C         D
2000-01-01  0.000000  0.469112  0.000000  0.000000
2000-01-02  0.000000  1.212112  0.119209  0.000000
2000-01-03  0.000000  0.000000  0.000000  1.071804
2000-01-04  7.000000  0.721555  0.000000  0.271860
2000-01-05  0.567020  0.000000  0.276232  0.000000
2000-01-06  0.113648  0.000000  0.000000  0.524988
2000-01-07  0.577046  0.404705  0.000000  0.000000
2000-01-08  0.000000  0.000000  0.000000  0.844885

Furthermore, where aligns the input boolean condition (ndarray or DataFrame), such that partial selection with setting is possible. This is analogous to partial setting via .ix (but on the contents rather than the axis labels).

In [81]: df2 = df.copy()

In [82]: df2[ df2[1:4] > 0 ] = 3

In [83]: df2

                   A         B         C         D
2000-01-01 -0.282863  0.469112 -1.509059 -1.135632
2000-01-02 -0.173215  3.000000  3.000000 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929  3.000000
2000-01-04  3.000000  3.000000 -1.039575  3.000000
2000-01-05  0.567020 -0.424972  0.276232 -1.087401
2000-01-06  0.113648 -0.673690 -1.478427  0.524988
2000-01-07  0.577046  0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312  0.844885

By default, where returns a modified copy of the data. There is an optional parameter inplace so that the original data can be modified without creating a copy:

In [84]: df_orig = df.copy()

In [85]: df_orig.where(df > 0, -df, inplace=True);
In [85]: df_orig

                   A         B         C         D
2000-01-01  0.282863  0.469112  1.509059  1.135632
2000-01-02  0.173215  1.212112  0.119209  1.044236
2000-01-03  2.104569  0.861849  0.494929  1.071804
2000-01-04  7.000000  0.721555  1.039575  0.271860
2000-01-05  0.567020  0.424972  0.276232  1.087401
2000-01-06  0.113648  0.673690  1.478427  0.524988
2000-01-07  0.577046  0.404705  1.715002  1.039268
2000-01-08  1.157892  0.370647  1.344312  0.844885

mask is the inverse boolean operation of where.

In [86]: s.mask(s >= 0)

2000-01-01   -0.282863
2000-01-02   -0.173215
2000-01-03   -2.104569
2000-01-04         NaN
2000-01-05         NaN
2000-01-06         NaN
2000-01-07         NaN
2000-01-08   -1.157892
Freq: D, Name: A, dtype: float64

In [87]: df.mask(df >= 0)

                   A         B         C         D
2000-01-01 -0.282863       NaN -1.509059 -1.135632
2000-01-02 -0.173215       NaN       NaN -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929       NaN
2000-01-04       NaN       NaN -1.039575       NaN
2000-01-05       NaN -0.424972       NaN -1.087401
2000-01-06       NaN -0.673690 -1.478427       NaN
2000-01-07       NaN       NaN -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312       NaN

Take Methods

Similar to numpy ndarrays, pandas Index, Series, and DataFrame also provide the take method that retrieves elements along a given axis at the given indices. The given indices must be either a list or an ndarray of integer index positions. take will also accept negative integers as relative positions to the end of the object.

In [88]: index = Index(randint(0, 1000, 10))

In [89]: index
Int64Index([350, 634, 637, 430, 270, 333, 264, 738, 801, 829], dtype=int64)

In [90]: positions = [0, 9, 3]

In [91]: index[positions]
Int64Index([350, 829, 430], dtype=int64)

In [92]: index.take(positions)
Int64Index([350, 829, 430], dtype=int64)

In [93]: ser = Series(randn(10))

In [94]: ser.ix[positions]

0    0.007207
9   -1.623033
3    2.395985
dtype: float64

In [95]: ser.take(positions)

0    0.007207
9   -1.623033
3    2.395985
dtype: float64

For DataFrames, the given indices should be a 1d list or ndarray that specifies row or column positions.

In [96]: frm = DataFrame(randn(5, 3))

In [97]: frm.take([1, 4, 3])

          0         1         2
1 -0.087302 -1.575170  1.771208
4  1.074803  0.173520  0.211027
3  1.586976  0.019234  0.264294

In [98]: frm.take([0, 2], axis=1)

          0         2
0  0.029399  0.282696
1 -0.087302  1.771208
2  0.816482 -0.612665
3  1.586976  0.264294
4  1.074803  0.211027

It is important to note that the take method on pandas objects is not intended to work on boolean indices and may return unexpected results.

In [99]: arr = randn(10)

In [100]: arr.take([False, False, True, True])
array([ 1.3571,  1.3571,  1.4188,  1.4188])

In [101]: arr[[0, 1]]
array([ 1.3571,  1.4188])

In [102]: ser = Series(randn(10))

In [103]: ser.take([False, False, True, True])

0   -0.773723
0   -0.773723
1   -1.170653
1   -1.170653
dtype: float64

In [104]: ser.ix[[0, 1]]

0   -0.773723
1   -1.170653
dtype: float64

Finally, as a small note on performance, because the take method handles a narrower range of inputs, it can offer performance that is a good deal faster than fancy indexing.
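
A rough way to see this for yourself is sketched below (not a rigorous benchmark; the sizes, names, and number argument are arbitrary, and timings will vary):

import timeit

ser_big = Series(randn(100000))
positions = np.random.randint(0, 100000, 1000)

timeit.timeit(lambda: ser_big.take(positions), number=100)   # typically the faster of the two
timeit.timeit(lambda: ser_big.ix[positions], number=100)     # fancy indexing on the same positions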

Duplicate Data

If you want to identify and remove duplicate rows in a DataFrame, there are two methods that will help: duplicated and drop_duplicates. Each takes as an argument the columns to use to identify duplicated rows.

  • duplicated returns a boolean vector whose length is the number of rows, and which indicates whether a row is duplicated.
  • drop_duplicates removes duplicate rows.

By default, the first observed row of a duplicate set is considered unique, but each method has a take_last parameter that indicates the last observed row should be taken instead.

In [105]: df2 = DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
   .....:                  'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
   .....:                  'c' : np.random.randn(7)})
   .....: 

In [106]: df2.duplicated(['a','b'])

0    False
1    False
2    False
3    False
4     True
5     True
6    False
dtype: bool

In [107]: df2.drop_duplicates(['a','b'])

       a  b         c
0    one  x  1.024098
1    one  y -0.106062
2    two  y  1.824375
3  three  x  0.595974
6    six  x -1.237881

In [108]: df2.drop_duplicates(['a','b'], take_last=True)

       a  b         c
1    one  y -0.106062
3  three  x  0.595974
4    two  y  1.167115
5    one  x  0.601544
6    six  x -1.237881

Dictionary-like get method

Each of Series, DataFrame, and Panel have a get method which can return a default value.

In [109]: s = Series([1,2,3], index=['a','b','c'])

In [110]: s.get('a')               # equivalent to s['a']
1

In [111]: s.get('x', default=-1)
-1

Advanced Indexing with .ix

Note

The recent addition of .loc and .iloc has enabled users to be quite explicit about indexing choices. .ix allows a great flexibility to specify indexing locations by label and/or integer position. Pandas will attempt to use any passed integer as a label location first (like what .loc would do), then fall back to positional indexing (like what .iloc would do). See Fallback Indexing for an example.

The syntax of using .ix is identical to .loc, in Selection by Label, and .iloc in Selection by Position.

The .ix attribute takes the following inputs:

  • An integer or single label, e.g. 5 or 'a'
  • A list or array of labels ['a', 'b', 'c'] or integers [4, 3, 0]
  • A slice object with ints 1:7 or labels 'a':'f'
  • A boolean array

We’ll illustrate all of these methods. First, note that this provides a concise way of reindexing on multiple axes at once:

In [112]: subindex = dates[[3,4,5]]

In [113]: df.reindex(index=subindex, columns=['C', 'B'])

                   C         B
2000-01-04 -1.039575  0.721555
2000-01-05  0.276232 -0.424972
2000-01-06 -1.478427 -0.673690

In [114]: df.ix[subindex, ['C', 'B']]

                   C         B
2000-01-04 -1.039575  0.721555
2000-01-05  0.276232 -0.424972
2000-01-06 -1.478427 -0.673690

Assignment / setting values is possible when using ix:

In [115]: df2 = df.copy()

In [116]: df2.ix[subindex, ['C', 'B']] = 0

In [117]: df2

                   A         B         C         D
2000-01-01 -0.282863  0.469112 -1.509059 -1.135632
2000-01-02 -0.173215  1.212112  0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929  1.071804
2000-01-04  7.000000  0.000000  0.000000  0.271860
2000-01-05  0.567020  0.000000  0.000000 -1.087401
2000-01-06  0.113648  0.000000  0.000000  0.524988
2000-01-07  0.577046  0.404705 -1.715002 -1.039268
2000-01-08 -1.157892 -0.370647 -1.344312  0.844885

Indexing with an array of integers can also be done:

In [118]: df.ix[[4,3,1]]

                   A         B         C         D
2000-01-05  0.567020 -0.424972  0.276232 -1.087401
2000-01-04  7.000000  0.721555 -1.039575  0.271860
2000-01-02 -0.173215  1.212112  0.119209 -1.044236

In [119]: df.ix[dates[[4,3,1]]]

                   A         B         C         D
2000-01-05  0.567020 -0.424972  0.276232 -1.087401
2000-01-04  7.000000  0.721555 -1.039575  0.271860
2000-01-02 -0.173215  1.212112  0.119209 -1.044236

Slicing has standard Python semantics for integer slices:

In [120]: df.ix[1:7, :2]

                   A         B
2000-01-02 -0.173215  1.212112
2000-01-03 -2.104569 -0.861849
2000-01-04  7.000000  0.721555
2000-01-05  0.567020 -0.424972
2000-01-06  0.113648 -0.673690
2000-01-07  0.577046  0.404705

Slicing with labels is semantically slightly different because the slice start and stop are inclusive in the label-based case:

In [121]: date1, date2 = dates[[2, 4]]

In [122]: print date1, date2
2000-01-03 00:00:00 2000-01-05 00:00:00

In [123]: df.ix[date1:date2]

                   A         B         C         D
2000-01-03 -2.104569 -0.861849 -0.494929  1.071804
2000-01-04  7.000000  0.721555 -1.039575  0.271860
2000-01-05  0.567020 -0.424972  0.276232 -1.087401

In [124]: df['A'].ix[date1:date2]

2000-01-03   -2.104569
2000-01-04    7.000000
2000-01-05    0.567020
Freq: D, Name: A, dtype: float64

Getting and setting rows in a DataFrame, especially by their location, is much easier:

In [125]: df2 = df[:5].copy()

In [126]: df2.ix[3]

A    7.000000
B    0.721555
C   -1.039575
D    0.271860
Name: 2000-01-04 00:00:00, dtype: float64

In [127]: df2.ix[3] = np.arange(len(df2.columns))

In [128]: df2

                   A         B         C         D
2000-01-01 -0.282863  0.469112 -1.509059 -1.135632
2000-01-02 -0.173215  1.212112  0.119209 -1.044236
2000-01-03 -2.104569 -0.861849 -0.494929  1.071804
2000-01-04  0.000000  1.000000  2.000000  3.000000
2000-01-05  0.567020 -0.424972  0.276232 -1.087401

Column or row selection can be combined as you would expect with arrays of labels or even boolean vectors:

In [129]: df.ix[df['A'] > 0, 'B']

2000-01-04    0.721555
2000-01-05   -0.424972
2000-01-06   -0.673690
2000-01-07    0.404705
Freq: D, Name: B, dtype: float64

In [130]: df.ix[date1:date2, 'B']

2000-01-03   -0.861849
2000-01-04    0.721555
2000-01-05   -0.424972
Freq: D, Name: B, dtype: float64

In [131]: df.ix[date1, 'B']
-0.86184896334779992

Slicing with labels is closely related to the truncate method which does precisely .ix[start:stop] but returns a copy (for legacy reasons).
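
A brief sketch of the correspondence, reusing the date1 and date2 labels defined above (both expressions select the rows labeled from date1 through date2 inclusive; truncate always returns a copy):

df.ix[date1:date2]
df.truncate(before=date1, after=date2)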

The select method

Another way to extract slices from an object is with the select method of Series, DataFrame, and Panel. This method should be used only when there is no more direct way. select takes a function which operates on labels along axis and returns a boolean. For instance:

In [132]: df.select(lambda x: x == 'A', axis=1)

                   A
2000-01-01 -0.282863
2000-01-02 -0.173215
2000-01-03 -2.104569
2000-01-04  7.000000
2000-01-05  0.567020
2000-01-06  0.113648
2000-01-07  0.577046
2000-01-08 -1.157892

The lookup method

Sometimes you want to extract a set of values given a sequence of row labels and column labels, and the lookup method allows for this and returns a numpy array. For instance,

In [133]: dflookup = DataFrame(np.random.rand(20,4), columns = ['A','B','C','D'])

In [134]: dflookup.lookup(xrange(0,10,2), ['B','C','A','B','D'])
array([ 0.5277,  0.4201,  0.2442,  0.1239,  0.5722])

Setting values in mixed-type DataFrame

Setting values on a mixed-type DataFrame or Panel is supported when using scalar values, though setting arbitrary vectors is not yet supported:

In [135]: df2 = df[:4]

In [136]: df2['foo'] = 'bar'

In [137]: print df2
                   A         B         C         D  foo
2000-01-01 -0.282863  0.469112 -1.509059 -1.135632  bar
2000-01-02 -0.173215  1.212112  0.119209 -1.044236  bar
2000-01-03 -2.104569 -0.861849 -0.494929  1.071804  bar
2000-01-04  7.000000  0.721555 -1.039575  0.271860  bar

In [138]: df2.ix[2] = np.nan

In [139]: print df2
                   A         B         C         D  foo
2000-01-01 -0.282863  0.469112 -1.509059 -1.135632  bar
2000-01-02 -0.173215  1.212112  0.119209 -1.044236  bar
2000-01-03       NaN       NaN       NaN       NaN  NaN
2000-01-04  7.000000  0.721555 -1.039575  0.271860  bar

In [140]: print df2.dtypes
A      float64
B      float64
C      float64
D      float64
foo     object
dtype: object

Returning a view versus a copy

The rules about when a view on the data is returned are entirely dependent on NumPy. Whenever an array of labels or a boolean vector are involved in the indexing operation, the result will be a copy. With single label / scalar indexing and slicing, e.g. df.ix[3:6] or df.ix[:, 'A'], a view will be returned.

In chained expressions, the order may determine whether a copy is returned or not:

In [141]: dfb = DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
   .....:                  'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
   .....:                  'c' : randn(7)})
   .....: 

In [142]: dfb[dfb.a.str.startswith('o')]['c'] = 42  # goes to copy (will be lost)

In [143]: dfb['c'][dfb.a.str.startswith('o')] = 42  # passed via reference (will stay)

Thus, when assigning values to subsets of your data, make sure either to use the pandas access methods or to explicitly handle the fact that the assignment may be operating on a copy.
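
The safest pattern is to do the selection and the assignment in a single indexing expression, so the assignment is applied to the object itself rather than to a temporary copy. A sketch using dfb from above:

# one expression selects the rows and the column and assigns in place
dfb.ix[dfb.a.str.startswith('o'), 'c'] = 42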

Fallback indexing

Float indexes should be used only with caution. If you have a float indexed DataFrame and try to select using an integer, the row that Pandas returns might not be what you expect. Pandas first attempts to use the integer as a label location, but fails to find a match (because the types are not equal). Pandas then falls back to positional indexing.

In [144]: df = pd.DataFrame(np.random.randn(4,4),
   .....:     columns=list('ABCD'), index=[1.0, 2.0, 3.0, 4.0])
   .....: 

In [145]: df

          A         B         C         D
1 -0.823761  0.535420 -1.032853  1.469725
2  1.304124  1.449735  0.203109 -1.032011
3  0.969818 -0.962723  1.382083 -0.938794
4  0.669142 -0.433567 -0.273610  0.680433

In [146]: df.ix[1]

A    1.304124
B    1.449735
C    0.203109
D   -1.032011
Name: 2.0, dtype: float64

To select the row you do expect, instead use a float label or use iloc.

In [147]: df.ix[1.0]

A   -0.823761
B    0.535420
C   -1.032853
D    1.469725
Name: 1.0, dtype: float64

In [148]: df.iloc[0]

A   -0.823761
B    0.535420
C   -1.032853
D    1.469725
Name: 1.0, dtype: float64

Instead of using a float index, it is often better to convert to an integer index:

In [149]: df_new = df.reset_index()

In [150]: df_new[df_new['index'] == 1.0]

   index         A        B         C         D
0      1 -0.823761  0.53542 -1.032853  1.469725

# now you can also do "float selection"
In [151]: df_new[(df_new['index'] >= 1.0) & (df_new['index'] < 2)]

   index         A        B         C         D
0      1 -0.823761  0.53542 -1.032853  1.469725

Index objects

The pandas Index class and its subclasses can be viewed as implementing an ordered set in addition to providing the support infrastructure necessary for lookups, data alignment, and reindexing. The easiest way to create one directly is to pass a list or other sequence to Index:

In [152]: index = Index(['e', 'd', 'a', 'b'])

In [153]: index
Index([u'e', u'd', u'a', u'b'], dtype=object)

In [154]: 'd' in index
True

You can also pass a name to be stored in the index:

In [155]: index = Index(['e', 'd', 'a', 'b'], name='something')

In [156]: index.name
'something'

Starting with pandas 0.5, the name, if set, will be shown in the console display:

In [157]: index = Index(range(5), name='rows')

In [158]: columns = Index(['A', 'B', 'C'], name='cols')

In [159]: df = DataFrame(np.random.randn(5, 3), index=index, columns=columns)

In [160]: df

cols         A         B         C
rows                              
0    -0.308450 -0.276099 -1.821168
1    -1.993606 -1.927385 -2.027924
2     1.624972  0.551135  3.059267
3     0.455264 -0.030740  0.935716
4     1.061192 -2.107852  0.199905

In [161]: df['A']

rows
0      -0.308450
1      -1.993606
2       1.624972
3       0.455264
4       1.061192
Name: A, dtype: float64

Set operations on Index objects

The three main operations are union (|), intersection (&), and diff (-). These can be directly called as instance methods or used via overloaded operators:

In [162]: a = Index(['c', 'b', 'a'])

In [163]: b = Index(['c', 'e', 'd'])

In [164]: a.union(b)
Index([u'a', u'b', u'c', u'd', u'e'], dtype=object)

In [165]: a | b
Index([u'a', u'b', u'c', u'd', u'e'], dtype=object)

In [166]: a & b
Index([u'c'], dtype=object)

In [167]: a - b
Index([u'a', u'b'], dtype=object)

isin method of Index objects

One additional operation is the isin method that works analogously to the Series.isin method found here.
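
For example, a minimal sketch (isin on an Index returns a boolean numpy array):

idx = Index(['a', 'b', 'c', 'd'])
idx.isin(['b', 'd'])   # array([False,  True, False,  True], dtype=bool)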

Hierarchical indexing (MultiIndex)

Hierarchical indexing (also referred to as “multi-level” indexing) is brand new in the pandas 0.4 release. It is very exciting as it opens the door to some quite sophisticated data analysis and manipulation, especially for working with higher dimensional data. In essence, it enables you to store and manipulate data with an arbitrary number of dimensions in lower dimensional data structures like Series (1d) and DataFrame (2d).

In this section, we will show what exactly we mean by “hierarchical” indexing and how it integrates with the all of the pandas indexing functionality described above and in prior sections. Later, when discussing group by and pivoting and reshaping data, we’ll show non-trivial applications to illustrate how it aids in structuring data for analysis.

See the cookbook for some advanced strategies.

Note

Given that hierarchical indexing is so new to the library, it is definitely “bleeding-edge” functionality but is certainly suitable for production. But, there may inevitably be some minor API changes as more use cases are explored and any weaknesses in the design / implementation are identified. pandas aims to be “eminently usable” so any feedback about new functionality like this is extremely helpful.

Creating a MultiIndex (hierarchical index) object

The MultiIndex object is the hierarchical analogue of the standard Index object which typically stores the axis labels in pandas objects. You can think of a MultiIndex as an array of tuples where each tuple is unique. A MultiIndex can be created from a list of arrays (using MultiIndex.from_arrays) or an array of tuples (using MultiIndex.from_tuples).

In [168]: arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
   .....:           ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']]
   .....: 

In [169]: tuples = zip(*arrays)

In [170]: tuples

[('bar', 'one'),
 ('bar', 'two'),
 ('baz', 'one'),
 ('baz', 'two'),
 ('foo', 'one'),
 ('foo', 'two'),
 ('qux', 'one'),
 ('qux', 'two')]

In [171]: index = MultiIndex.from_tuples(tuples, names=['first', 'second'])

In [172]: s = Series(randn(8), index=index)

In [173]: s

first  second
bar    one       0.323586
       two      -0.641630
baz    one      -0.587514
       two       0.053897
foo    one       0.194889
       two      -0.381994
qux    one       0.318587
       two       2.089075
dtype: float64

As a convenience, you can pass a list of arrays directly into Series or DataFrame to construct a MultiIndex automatically:

In [174]: arrays = [np.array(['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'])
   .....: ,
   .....:           np.array(['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'])
   .....:           ]
   .....: 

In [175]: s = Series(randn(8), index=arrays)

In [176]: s

bar  one   -0.728293
     two   -0.090255
baz  one   -0.748199
     two    1.318931
foo  one   -2.029766
     two    0.792652
qux  one    0.461007
     two   -0.542749
dtype: float64

In [177]: df = DataFrame(randn(8, 4), index=arrays)

In [178]: df

                0         1         2         3
bar one -0.305384 -0.479195  0.095031 -0.270099
    two -0.707140 -0.773882  0.229453  0.304418
baz one  0.736135 -0.859631 -0.424100 -0.776114
    two  1.279293  0.943798 -1.001859  0.306546
foo one  0.307453 -0.906534 -1.505397  1.392009
    two -0.027793 -0.631023 -0.662357  2.725042
qux one -1.847240 -0.529247  0.614656 -1.590742
    two -0.156479 -1.696377  0.819712 -2.107728

All of the MultiIndex constructors accept a names argument which stores string names for the levels themselves. If no names are provided, None will be assigned:

In [179]: df.index.names
[None, None]
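
If you do want named levels here, you can construct the MultiIndex yourself and pass names (a sketch reusing the arrays defined above), or assign to df.index.names after the fact:

named = MultiIndex.from_arrays(arrays, names=['first', 'second'])
named.names   # ['first', 'second']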

This index can back any axis of a pandas object, and the number of levels of the index is up to you:

In [180]: df = DataFrame(randn(3, 8), index=['A', 'B', 'C'], columns=index)

In [181]: df

first        bar                 baz                 foo                 qux  \
second       one       two       one       two       one       two       one   
A      -0.488326  0.851918 -1.242101 -0.654708 -1.647369  0.828258 -0.352362   
B       0.289685 -1.982371  0.840166 -0.411403 -2.049028  2.846612 -1.208049   
C       2.423905  0.121108  0.266916  0.843826 -0.222540  2.021981 -0.716789   
first             
second       two  
A      -0.814324  
B      -0.450392  
C      -2.224485  

In [182]: DataFrame(randn(6, 6), index=index[:6], columns=index[:6])

first              bar                 baz                 foo          
second             one       two       one       two       one       two
first second                                                            
bar   one    -1.061137 -0.232825  0.430793 -0.665478  1.829807 -1.406509
      two     1.078248  0.322774  0.200324  0.890024  0.194813  0.351633
baz   one     0.448881 -0.197915  0.965714 -1.522909 -0.116619  0.295575
      two    -1.047704  1.640556  1.905836  2.772115  0.088787 -1.144197
foo   one    -0.633372  0.925372 -0.006438 -0.820408 -0.600874 -1.039266
      two     0.824758 -0.824095 -0.337730 -0.927764 -0.840123  0.248505

We’ve “sparsified” the higher levels of the indexes to make the console output a bit easier on the eyes.

It’s worth keeping in mind that there’s nothing preventing you from using tuples as atomic labels on an axis:

In [183]: Series(randn(8), index=tuples)

(bar, one)   -0.109250
(bar, two)    0.431977
(baz, one)   -0.460710
(baz, two)    0.336505
(foo, one)   -3.207595
(foo, two)   -1.535854
(qux, one)    0.409769
(qux, two)   -0.673145
dtype: float64

The reason that the MultiIndex matters is that it can allow you to do grouping, selection, and reshaping operations as we will describe below and in subsequent areas of the documentation. As you will see in later sections, you can find yourself working with hierarchically-indexed data without creating a MultiIndex explicitly yourself. However, when loading data from a file, you may wish to generate your own MultiIndex when preparing the data set.

Note that how the index is displayed can be controlled using the multi_sparse option in pandas.set_printoptions:

In [184]: pd.set_option('display.multi_sparse', False)

In [185]: df

first        bar       bar       baz       baz       foo       foo       qux  \
second       one       two       one       two       one       two       one   
A      -0.488326  0.851918 -1.242101 -0.654708 -1.647369  0.828258 -0.352362   
B       0.289685 -1.982371  0.840166 -0.411403 -2.049028  2.846612 -1.208049   
C       2.423905  0.121108  0.266916  0.843826 -0.222540  2.021981 -0.716789   
first        qux  
second       two  
A      -0.814324  
B      -0.450392  
C      -2.224485  

In [186]: pd.set_option('display.multi_sparse', True)

Reconstructing the level labels

The method get_level_values will return a vector of the labels for each location at a particular level:

In [187]: index.get_level_values(0)
Index([u'bar', u'bar', u'baz', u'baz', u'foo', u'foo', u'qux', u'qux'], dtype=object)

In [188]: index.get_level_values('second')
Index([u'one', u'two', u'one', u'two', u'one', u'two', u'one', u'two'], dtype=object)

Basic indexing on axis with MultiIndex

One of the important features of hierarchical indexing is that you can select data by a “partial” label identifying a subgroup in the data. Partial selection “drops” levels of the hierarchical index in the result in a completely analogous way to selecting a column in a regular DataFrame:

In [189]: df['bar']

second       one       two
A      -0.488326  0.851918
B       0.289685 -1.982371
C       2.423905  0.121108

In [190]: df['bar', 'one']

A   -0.488326
B    0.289685
C    2.423905
Name: (bar, one), dtype: float64

In [191]: df['bar']['one']

A   -0.488326
B    0.289685
C    2.423905
Name: one, dtype: float64

In [192]: s['qux']

one    0.461007
two   -0.542749
dtype: float64

Data alignment and using reindex

Operations between differently-indexed objects having MultiIndex on the axes will work as you expect; data alignment will work the same as an Index of tuples:

In [193]: s + s[:-2]

bar  one   -1.456587
     two   -0.180509
baz  one   -1.496398
     two    2.637862
foo  one   -4.059533
     two    1.585304
qux  one         NaN
     two         NaN
dtype: float64

In [194]: s + s[::2]

bar  one   -1.456587
     two         NaN
baz  one   -1.496398
     two         NaN
foo  one   -4.059533
     two         NaN
qux  one    0.922013
     two         NaN
dtype: float64

reindex can be called with another MultiIndex or even a list or array of tuples:

In [195]: s.reindex(index[:3])

first  second
bar    one      -0.728293
       two      -0.090255
baz    one      -0.748199
dtype: float64

In [196]: s.reindex([('foo', 'two'), ('bar', 'one'), ('qux', 'one'), ('baz', 'one')])

foo  two    0.792652
bar  one   -0.728293
qux  one    0.461007
baz  one   -0.748199
dtype: float64

Advanced indexing with hierarchical index

Syntactically integrating MultiIndex in advanced indexing with .ix is a bit challenging, but we’ve made every effort to do so. For example, the following works as you would expect:

In [197]: df = df.T

In [198]: df

                     A         B         C
first second                              
bar   one    -0.488326  0.289685  2.423905
      two     0.851918 -1.982371  0.121108
baz   one    -1.242101  0.840166  0.266916
      two    -0.654708 -0.411403  0.843826
foo   one    -1.647369 -2.049028 -0.222540
      two     0.828258  2.846612  2.021981
qux   one    -0.352362 -1.208049 -0.716789
      two    -0.814324 -0.450392 -2.224485

In [199]: df.ix['bar']

               A         B         C
second                              
one    -0.488326  0.289685  2.423905
two     0.851918 -1.982371  0.121108

In [200]: df.ix['bar', 'two']

A    0.851918
B   -1.982371
C    0.121108
Name: (bar, two), dtype: float64

“Partial” slicing also works quite nicely:

In [201]: df.ix['baz':'foo']

                     A         B         C
first second                              
baz   one    -1.242101  0.840166  0.266916
      two    -0.654708 -0.411403  0.843826
foo   one    -1.647369 -2.049028 -0.222540
      two     0.828258  2.846612  2.021981

In [202]: df.ix[('baz', 'two'):('qux', 'one')]

                     A         B         C
first second                              
baz   two    -0.654708 -0.411403  0.843826
foo   one    -1.647369 -2.049028 -0.222540
      two     0.828258  2.846612  2.021981
qux   one    -0.352362 -1.208049 -0.716789

In [203]: df.ix[('baz', 'two'):'foo']

                     A         B         C
first second                              
baz   two    -0.654708 -0.411403  0.843826
foo   one    -1.647369 -2.049028 -0.222540
      two     0.828258  2.846612  2.021981

Passing a list of labels or tuples works similarly to reindexing:

In [204]: df.ix[[('bar', 'two'), ('qux', 'one')]]

                     A         B         C
first second                              
bar   two     0.851918 -1.982371  0.121108
qux   one    -0.352362 -1.208049 -0.716789

The following does not work, and it’s not clear if it should or not:

>>> df.ix[['bar', 'qux']]

The code for implementing .ix makes every attempt to “do the right thing” but as you use it you may uncover corner cases or unintuitive behavior. If you do find something like this, do not hesitate to report the issue or ask on the mailing list.

Cross-section with hierarchical index

The xs method of DataFrame additionally takes a level argument to make selecting data at a particular level of a MultiIndex easier.

In [205]: df.xs('one', level='second')

              A         B         C
first                              
bar   -0.488326  0.289685  2.423905
baz   -1.242101  0.840166  0.266916
foo   -1.647369 -2.049028 -0.222540
qux   -0.352362 -1.208049 -0.716789

Advanced reindexing and alignment with hierarchical index

The parameter level has been added to the reindex and align methods of pandas objects. This is useful to broadcast values across a level. For instance:

In [206]: midx = MultiIndex(levels=[['zero', 'one'], ['x','y']],
   .....:                   labels=[[1,1,0,0],[1,0,1,0]])
   .....: 

In [207]: df = DataFrame(randn(4,2), index=midx)

In [208]: print df
               0         1
one  y -0.741113 -0.110891
     x -2.672910  0.864492
zero y  0.060868  0.933092
     x  0.288841  1.324969

In [209]: df2 = df.mean(level=0)

In [210]: print df2
             0        1
zero  0.174854  1.12903
one  -1.707011  0.37680

In [211]: print df2.reindex(df.index, level=0)
               0        1
one  y -1.707011  0.37680
     x -1.707011  0.37680
zero y  0.174854  1.12903
     x  0.174854  1.12903

In [212]: df_aligned, df2_aligned = df.align(df2, level=0)

In [213]: print df_aligned
               0         1
one  y -0.741113 -0.110891
     x -2.672910  0.864492
zero y  0.060868  0.933092
     x  0.288841  1.324969

In [214]: print df2_aligned
               0        1
one  y -1.707011  0.37680
     x -1.707011  0.37680
zero y  0.174854  1.12903
     x  0.174854  1.12903

The need for sortedness

Caveat emptor: the present implementation of MultiIndex requires that the labels be sorted for some of the slicing / indexing routines to work correctly. You can think about breaking the axis into unique groups, where at the hierarchical level of interest, each distinct group shares a label, but no two have the same label. However, the MultiIndex does not enforce this: you are responsible for ensuring that things are properly sorted. There is an important new method sortlevel to sort an axis within a MultiIndex so that its labels are grouped and sorted by the original ordering of the associated factor at that level. Note that this does not necessarily mean the labels will be sorted lexicographically!

In [215]: import random; random.shuffle(tuples)

In [216]: s = Series(randn(8), index=MultiIndex.from_tuples(tuples))

In [217]: s

bar  one    0.589220
foo  two    0.531415
bar  two   -1.198747
foo  one   -0.236866
qux  one   -1.317798
baz  two    0.373766
     one   -0.675588
qux  two    0.981295
dtype: float64

In [218]: s.sortlevel(0)

bar  one    0.589220
     two   -1.198747
baz  one   -0.675588
     two    0.373766
foo  one   -0.236866
     two    0.531415
qux  one   -1.317798
     two    0.981295
dtype: float64

In [219]: s.sortlevel(1)

bar  one    0.589220
baz  one   -0.675588
foo  one   -0.236866
qux  one   -1.317798
bar  two   -1.198747
baz  two    0.373766
foo  two    0.531415
qux  two    0.981295
dtype: float64

Note that you may also pass a level name to sortlevel if the MultiIndex levels are named.

In [220]: s.index.names = ['L1', 'L2']

In [221]: s.sortlevel(level='L1')

L1   L2 
bar  one    0.589220
     two   -1.198747
baz  one   -0.675588
     two    0.373766
foo  one   -0.236866
     two    0.531415
qux  one   -1.317798
     two    0.981295
dtype: float64

In [222]: s.sortlevel(level='L2')

L1   L2 
bar  one    0.589220
baz  one   -0.675588
foo  one   -0.236866
qux  one   -1.317798
bar  two   -1.198747
baz  two    0.373766
foo  two    0.531415
qux  two    0.981295
dtype: float64

Some indexing will work even if the data are not sorted, but will be rather inefficient and will also return a copy of the data rather than a view:

In [223]: s['qux']

L2
one   -1.317798
two    0.981295
dtype: float64

In [224]: s.sortlevel(1)['qux']

L2
one   -1.317798
two    0.981295
dtype: float64

On higher dimensional objects, you can sort any of the other axes by level if they have a MultiIndex:

In [225]: df.T.sortlevel(1, axis=1)

       zero       one      zero       one
          x         x         y         y
0  0.288841 -2.672910  0.060868 -0.741113
1  1.324969  0.864492  0.933092 -0.110891

The MultiIndex object has code to explicitly check the sort depth. Thus, if you try to index at a depth at which the index is not sorted, it will raise an exception. Here is a concrete example to illustrate this:

In [226]: tuples = [('a', 'a'), ('a', 'b'), ('b', 'a'), ('b', 'b')]

In [227]: idx = MultiIndex.from_tuples(tuples)

In [228]: idx.lexsort_depth
2

In [229]: reordered = idx[[1, 0, 3, 2]]

In [230]: reordered.lexsort_depth
1

In [231]: s = Series(randn(4), index=reordered)

In [232]: s.ix['a':'a']

a  b   -0.100323
   a    0.935523
dtype: float64

However:

>>> s.ix[('a', 'b'):('b', 'a')]
Exception: MultiIndex lexsort depth 1, key was length 2

Swapping levels with swaplevel

The swaplevel function can switch the order of two levels:

In [233]: df[:5]

               0         1
one  y -0.741113 -0.110891
     x -2.672910  0.864492
zero y  0.060868  0.933092
     x  0.288841  1.324969

In [234]: df[:5].swaplevel(0, 1, axis=0)

               0         1
y one  -0.741113 -0.110891
x one  -2.672910  0.864492
y zero  0.060868  0.933092
x zero  0.288841  1.324969

Reordering levels with reorder_levels

The reorder_levels function generalizes the swaplevel function, allowing you to permute the hierarchical index levels in one step:

In [235]: df[:5].reorder_levels([1,0], axis=0)

               0         1
y one  -0.741113 -0.110891
x one  -2.672910  0.864492
y zero  0.060868  0.933092
x zero  0.288841  1.324969

Some gory internal details

Internally, the MultiIndex consists of a few things: the levels, the integer labels, and the level names:

In [236]: index

MultiIndex
[(u'bar', u'one'), (u'bar', u'two'), (u'baz', u'one'), (u'baz', u'two'), (u'foo', u'one'), (u'foo', u'two'), (u'qux', u'one'), (u'qux', u'two')]

In [237]: index.levels

[Index([u'bar', u'baz', u'foo', u'qux'], dtype=object),
 Index([u'one', u'two'], dtype=object)]

In [238]: index.labels
[array([0, 0, 1, 1, 2, 2, 3, 3]), array([0, 1, 0, 1, 0, 1, 0, 1])]

In [239]: index.names
['first', 'second']

You can probably guess that the labels determine which unique element is identified with that location at each layer of the index. It’s important to note that sortedness is determined solely from the integer labels and does not check (or care) whether the levels themselves are sorted. Fortunately, the constructors from_tuples and from_arrays ensure that this is true, but if you compute the levels and labels yourself, please be careful.
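
Concretely, each integer in labels is a position into the corresponding levels Index, so the values at a given level can be reconstructed directly (a sketch using the index object shown above; this reproduces index.get_level_values(0)):

index.levels[0][index.labels[0]]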

Adding an index to an existing DataFrame

Occasionally you will load or create a data set into a DataFrame and want to add an index after you’ve already done so. There are a couple of different ways.

Add an index using DataFrame columns

DataFrame has a set_index method which takes a column name (for a regular Index) or a list of column names (for a MultiIndex), to create a new, indexed DataFrame:

In [240]: data

     a    b  c  d
0  bar  one  z  1
1  bar  two  y  2
2  foo  one  x  3
3  foo  two  w  4

In [241]: indexed1 = data.set_index('c')

In [242]: indexed1

     a    b  d
c             
z  bar  one  1
y  bar  two  2
x  foo  one  3
w  foo  two  4

In [243]: indexed2 = data.set_index(['a', 'b'])

In [244]: indexed2

         c  d
a   b        
bar one  z  1
    two  y  2
foo one  x  3
    two  w  4

The append keyword option allows you to keep the existing index and append the given columns to a MultiIndex:

In [245]: frame = data.set_index('c', drop=False)

In [246]: frame = frame.set_index(['a', 'b'], append=True)

In [247]: frame

           c  d
c a   b        
z bar one  z  1
y bar two  y  2
x foo one  x  3
w foo two  w  4

Other options in set_index allow you to not drop the index columns or to add the index in-place (without creating a new object):

In [248]: data.set_index('c', drop=False)

     a    b  c  d
c                
z  bar  one  z  1
y  bar  two  y  2
x  foo  one  x  3
w  foo  two  w  4

In [249]: data.set_index(['a', 'b'], inplace=True)

In [250]: data

         c  d
a   b        
bar one  z  1
    two  y  2
foo one  x  3
    two  w  4

Remove / reset the index, reset_index

As a convenience, there is a new function on DataFrame called reset_index which transfers the index values into the DataFrame’s columns and sets a simple integer index. This is the inverse operation to set_index.

In [251]: data

         c  d
a   b        
bar one  z  1
    two  y  2
foo one  x  3
    two  w  4

In [252]: data.reset_index()

     a    b  c  d
0  bar  one  z  1
1  bar  two  y  2
2  foo  one  x  3
3  foo  two  w  4

The output is more similar to a SQL table or a record array. The names for the columns derived from the index are the ones stored in the names attribute.

You can use the level keyword to remove only a portion of the index:

In [253]: frame

           c  d
c a   b        
z bar one  z  1
y bar two  y  2
x foo one  x  3
w foo two  w  4

In [254]: frame.reset_index(level=1)

         a  c  d
c b             
z one  bar  z  1
y two  bar  y  2
x one  foo  x  3
w two  foo  w  4

reset_index takes an optional parameter drop which, if true, simply discards the index instead of putting the index values in the DataFrame’s columns.
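
For example, a sketch using the frame from above (all index values are simply discarded rather than moved into columns):

frame.reset_index(drop=True)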

Note

The reset_index method used to be called delevel which is now deprecated.

Adding an ad hoc index

If you create an index yourself, you can just assign it to the index field:

data.index = index

Indexing internal details

Note

The following is largely relevant for those actually working on the pandas codebase. And the source code is still the best place to look at the specifics of how things are implemented.

In pandas there are a few objects implemented which can serve as valid containers for the axis labels:

  • Index: the generic “ordered set” object, an ndarray of object dtype assuming nothing about its contents. The labels must be hashable (and likely immutable) and unique. Populates a dict of label to location in Cython to do O(1) lookups.
  • Int64Index: a version of Index highly optimized for 64-bit integer data, such as time stamps
  • MultiIndex: the standard hierarchical index object
  • date_range: fixed frequency date range generated from a time rule or DateOffset. An ndarray of Python datetime objects

The motivation for having an Index class in the first place was to enable different implementations of indexing. This means that it’s possible for you, the user, to implement a custom Index subclass that may be better suited to a particular application than the ones provided in pandas.

From an internal implementation point of view, the relevant methods that an Index must define are one or more of the following, depending on how incompatible the new object internals are with the Index functions (a few of these are illustrated in the short sketch after this list):

  • get_loc: returns an “indexer” (an integer, or in some cases a slice object) for a label
  • slice_locs: returns the “range” to slice between two labels
  • get_indexer: Computes the indexing vector for reindexing / data alignment purposes. See the source / docstrings for more on this
  • get_indexer_non_unique: Computes the indexing vector for reindexing / data alignment purposes when the index is non-unique. See the source / docstrings for more on this
  • reindex: Does any pre-conversion of the input index then calls get_indexer
  • union, intersection: computes the union or intersection of two Index objects
  • insert: Inserts a new label into an Index, yielding a new object
  • delete: Delete a label, yielding a new object
  • drop: Deletes a set of labels
  • take: Analogous to ndarray.take
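
As a rough illustration of a few of these on an ordinary Index (a sketch; the results noted in the comments follow from the labels used below):

idx = Index(['a', 'b', 'c', 'd'])
idx.get_loc('c')                  # 2
idx.slice_locs('b', 'c')          # (1, 3): the positional range covering labels 'b' through 'c'
idx.get_indexer(['c', 'a', 'x'])  # array([ 2,  0, -1]); -1 marks a label not found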