Working with Text Data¶
Series and Index are equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the str attribute and generally have names matching the equivalent (scalar) built-in string methods:
In [1]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [2]: s.str.lower()
Out[2]:
0 a
1 b
2 c
3 aaba
4 baca
5 NaN
6 caba
7 dog
8 cat
dtype: object
In [3]: s.str.upper()
Out[3]:
0 A
1 B
2 C
3 AABA
4 BACA
5 NaN
6 CABA
7 DOG
8 CAT
dtype: object
In [4]: s.str.len()
Out[4]:
0 1
1 1
2 1
3 4
4 4
5 NaN
6 4
7 3
8 3
dtype: float64
In [5]: idx = pd.Index([' jack', 'jill ', ' jesse ', 'frank'])
In [6]: idx.str.strip()
Out[6]: Index([u'jack', u'jill', u'jesse', u'frank'], dtype='object')
In [7]: idx.str.lstrip()
Out[7]: Index([u'jack', u'jill ', u'jesse ', u'frank'], dtype='object')
In [8]: idx.str.rstrip()
Out[8]: Index([u' jack', u'jill', u' jesse', u'frank'], dtype='object')
The string methods on Index are especially useful for cleaning up or transforming DataFrame columns. For instance, you may have columns with leading or trailing whitespace:
In [9]: df = pd.DataFrame(np.random.randn(3, 2), columns=[' Column A ', ' Column B '],
...: index=range(3))
...:
In [10]: df
Out[10]:
Column A Column B
0 0.017428 0.039049
1 -2.240248 0.847859
2 -1.342107 0.368828
Since df.columns is an Index object, we can use the .str accessor:
In [11]: df.columns.str.strip()
Out[11]: Index([u'Column A', u'Column B'], dtype='object')
In [12]: df.columns.str.lower()
Out[12]: Index([u' column a ', u' column b '], dtype='object')
These string methods can then be used to clean up the columns as needed. Here we are removing leading and trailing whitespace, lowercasing all names, and replacing any remaining whitespace with underscores:
In [13]: df.columns = df.columns.str.strip().str.lower().str.replace(' ', '_')
In [14]: df
Out[14]:
column_a column_b
0 0.017428 0.039049
1 -2.240248 0.847859
2 -1.342107 0.368828
Splitting and Replacing Strings¶
Methods like split return a Series of lists:
In [15]: s2 = pd.Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h'])
In [16]: s2.str.split('_')
Out[16]:
0 [a, b, c]
1 [c, d, e]
2 NaN
3 [f, g, h]
dtype: object
Elements in the split lists can be accessed using get or [] notation:
In [17]: s2.str.split('_').str.get(1)
Out[17]:
0 b
1 d
2 NaN
3 g
dtype: object
In [18]: s2.str.split('_').str[1]
Out[18]:
0 b
1 d
2 NaN
3 g
dtype: object
It is easy to expand this to return a DataFrame using expand:
In [19]: s2.str.split('_', expand=True)
Out[19]:
0 1 2
0 a b c
1 c d e
2 NaN None None
3 f g h
It is also possible to limit the number of splits:
In [20]: s2.str.split('_', expand=True, n=1)
Out[20]:
0 1
0 a b_c
1 c d_e
2 NaN None
3 f g_h
rsplit is similar to split except it works in the reverse direction, i.e., from the end of the string to the beginning of the string:
In [21]: s2.str.rsplit('_', expand=True, n=1)
Out[21]:
0 1
0 a_b c
1 c_d e
2 NaN None
3 f_g h
Methods like replace and findall take regular expressions, too:
In [22]: s3 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca',
....: '', np.nan, 'CABA', 'dog', 'cat'])
....:
In [23]: s3
Out[23]:
0 A
1 B
2 C
3 Aaba
4 Baca
5
6 NaN
7 CABA
8 dog
9 cat
dtype: object
In [24]: s3.str.replace('^.a|dog', 'XX-XX ', case=False)
Out[24]:
0 A
1 B
2 C
3 XX-XX ba
4 XX-XX ca
5
6 NaN
7 XX-XX BA
8 XX-XX
9 XX-XX t
dtype: object
Some caution must be taken when dealing with regular expressions! For example, the following code will cause trouble because of the regular expression meaning of $:
# Consider the following badly formatted financial data
In [25]: dollars = pd.Series(['12', '-$10', '$10,000'])
# This does what you'd naively expect:
In [26]: dollars.str.replace('$', '')
Out[26]:
0 12
1 -10
2 10,000
dtype: object
# But this doesn't:
In [27]: dollars.str.replace('-$', '-')
Out[27]:
0 12
1 -$10
2 $10,000
dtype: object
# We need to escape the special character (for >1 len patterns)
In [28]: dollars.str.replace(r'-\$', '-')
Out[28]:
0 12
1 -10
2 $10,000
dtype: object
Indexing with .str¶
You can use [] notation to directly index by position locations. If you index past the end of the string, the result will be a NaN.
In [29]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan,
....: 'CABA', 'dog', 'cat'])
....:
In [30]: s.str[0]
Out[30]:
0 A
1 B
2 C
3 A
4 B
5 NaN
6 C
7 d
8 c
dtype: object
In [31]: s.str[1]
Out[31]:
0 NaN
1 NaN
2 NaN
3 a
4 a
5 NaN
6 A
7 o
8 a
dtype: object
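Slice notation works through the .str accessor as well; a minimal sketch (not part of the original examples) grabbing the first two characters of each string, which is equivalent to str.slice:

# slices index into each element, with NaN passed through for missing values
s.str[0:2]
# expected:
# 0      A
# 1      B
# 2      C
# 3     Aa
# 4     Ba
# 5    NaN
# 6     CA
# 7     do
# 8     ca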
Extracting Substrings¶
The method extract (introduced in version 0.13) accepts regular expressions with match groups. Extracting a regular expression with one group returns a Series of strings.
In [32]: pd.Series(['a1', 'b2', 'c3']).str.extract('[ab](\d)')
Out[32]:
0 1
1 2
2 NaN
dtype: object
Elements that do not match return NaN. Extracting a regular expression with more than one group returns a DataFrame with one column per group.
In [33]: pd.Series(['a1', 'b2', 'c3']).str.extract('([ab])(\d)')
Out[33]:
0 1
0 a 1
1 b 2
2 NaN NaN
Elements that do not match return a row filled with NaN. Thus, a Series of messy strings can be “converted” into a like-indexed Series or DataFrame of cleaned-up or more useful strings, without necessitating get() to access tuples or re.match objects.
The dtype of the result is always object, even if no match is found and the result contains only NaN.
Named groups like
In [34]: pd.Series(['a1', 'b2', 'c3']).str.extract('(?P<letter>[ab])(?P<digit>\d)')
Out[34]:
letter digit
0 a 1
1 b 2
2 NaN NaN
and optional groups like
In [35]: pd.Series(['a1', 'b2', '3']).str.extract('(?P<letter>[ab])?(?P<digit>\d)')
Out[35]:
letter digit
0 a 1
1 b 2
2 NaN 3
can also be used.
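Because the extracted columns come back with object dtype, you may want to convert them afterwards. A hedged sketch, reusing the named-group pattern from above (the variable name extracted is just for illustration):

extracted = pd.Series(['a1', 'b2', 'c3']).str.extract('(?P<letter>[ab])(?P<digit>\d)')
# rows that did not match are NaN, so the digit column becomes float after conversion
extracted['digit'] = pd.to_numeric(extracted['digit'])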
Testing for Strings that Match or Contain a Pattern¶
You can check whether elements contain a pattern:
In [36]: pattern = r'[a-z][0-9]'
In [37]: pd.Series(['1', '2', '3a', '3b', '03c']).str.contains(pattern)
Out[37]:
0 False
1 False
2 False
3 False
4 False
dtype: bool
or match a pattern:
In [38]: pd.Series(['1', '2', '3a', '3b', '03c']).str.match(pattern, as_indexer=True)
Out[38]:
0 False
1 False
2 False
3 False
4 False
dtype: bool
The distinction between match and contains is strictness: match relies on strict re.match, while contains relies on re.search.
Warning
In previous versions, match was for extracting groups, returning a not-so-convenient Series of tuples. The new method extract (described in the previous section) is now preferred.
This old, deprecated behavior of match is still the default. As demonstrated above, use the new behavior by setting as_indexer=True. In this mode, match is analogous to contains, returning a boolean Series. The new behavior will become the default behavior in a future release.
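To make the distinction concrete, a small sketch (not from the original text) comparing the two methods on the same data; the Series name s5 is just for illustration, and as_indexer=True matches the behavior described in this version:

s5 = pd.Series(['a1x', 'xa1'])
s5.str.contains('[a-z][0-9]')                 # re.search: True for both elements
s5.str.match('[a-z][0-9]', as_indexer=True)   # re.match: True only where the pattern matches at the start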
Methods like match, contains, startswith, and endswith take an extra na argument so missing values can be considered True or False:
In [39]: s4 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
In [40]: s4.str.contains('A', na=False)
Out[40]:
0 True
1 False
2 False
3 True
4 False
5 False
6 True
7 False
8 False
dtype: bool
Creating Indicator Variables¶
You can extract dummy variables from string columns. For example if they are separated by a '|':
In [41]: s = pd.Series(['a', 'a|b', np.nan, 'a|c'])
In [42]: s.str.get_dummies(sep='|')
Out[42]:
a b c
0 1 0 0
1 1 1 0
2 0 0 0
3 1 0 1
See also get_dummies().
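For comparison, the top-level get_dummies() treats each full string as a single category rather than splitting on the separator; a hedged sketch on the same data:

pd.get_dummies(pd.Series(['a', 'a|b', np.nan, 'a|c']))
# expected columns: 'a', 'a|b', 'a|c' -- the '|' is not interpreted as a separator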
Method Summary¶
Method | Description |
---|---|
cat() | Concatenate strings |
split() | Split strings on delimiter |
rsplit() | Split strings on delimiter working from the end of the string |
get() | Index into each element (retrieve i-th element) |
join() | Join strings in each element of the Series with passed separator |
contains() | Return boolean array if each string contains pattern/regex |
replace() | Replace occurrences of pattern/regex with some other string |
repeat() | Duplicate values (s.str.repeat(3) equivalent to x * 3) |
pad() | Add whitespace to left, right, or both sides of strings |
center() | Equivalent to str.center |
ljust() | Equivalent to str.ljust |
rjust() | Equivalent to str.rjust |
zfill() | Equivalent to str.zfill |
wrap() | Split long strings into lines with length less than a given width |
slice() | Slice each string in the Series |
slice_replace() | Replace slice in each string with passed value |
count() | Count occurrences of pattern |
startswith() | Equivalent to str.startswith(pat) for each element |
endswith() | Equivalent to str.endswith(pat) for each element |
findall() | Compute list of all occurrences of pattern/regex for each string |
match() | Call re.match on each element, returning matched groups as list |
extract() | Call re.match on each element, as match does, but return matched groups as strings for convenience. |
len() | Compute string lengths |
strip() | Equivalent to str.strip |
rstrip() | Equivalent to str.rstrip |
lstrip() | Equivalent to str.lstrip |
partition() | Equivalent to str.partition |
rpartition() | Equivalent to str.rpartition |
lower() | Equivalent to str.lower |
upper() | Equivalent to str.upper |
find() | Equivalent to str.find |
rfind() | Equivalent to str.rfind |
index() | Equivalent to str.index |
rindex() | Equivalent to str.rindex |
capitalize() | Equivalent to str.capitalize |
swapcase() | Equivalent to str.swapcase |
normalize() | Return Unicode normal form. Equivalent to unicodedata.normalize |
translate() | Equivalent to str.translate |
isalnum() | Equivalent to str.isalnum |
isalpha() | Equivalent to str.isalpha |
isdigit() | Equivalent to str.isdigit |
isspace() | Equivalent to str.isspace |
islower() | Equivalent to str.islower |
isupper() | Equivalent to str.isupper |
istitle() | Equivalent to str.istitle |
isnumeric() | Equivalent to str.isnumeric |
isdecimal() | Equivalent to str.isdecimal |
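As a quick illustration of a few of the padding helpers listed above (a sketch, not from the original page):

nums = pd.Series(['1', '22', '333'])
nums.str.zfill(4)              # pads with zeros on the left: '0001', '0022', '0333'
nums.str.pad(5, side='left')   # pads with spaces on the left to width 5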