Categorical Data¶
New in version 0.15.
Note
While there was a pandas.Categorical class in earlier versions, the ability to use categorical data in a Series or DataFrame is new.
This is an introduction to the pandas categorical data type, including a short comparison with R’s factor.
Categoricals are a pandas data type which corresponds to categorical variables in statistics: a variable which can take on only a limited, and usually fixed, number of possible values (categories; levels in R). Examples are gender, social class, blood type, country affiliation, observation time or ratings via Likert scales.
In contrast to statistical categorical variables, categorical data might have an order (e.g. ‘strongly agree’ vs ‘agree’ or ‘first observation’ vs. ‘second observation’), but numerical operations (additions, divisions, ...) are not possible.
All values of categorical data are either in categories or np.nan. Order is defined by the order of categories, not lexical order of the values. Internally, the data structure consists of a categories array and an integer array of codes which point to the real value in the categories array.
The categorical data type is useful in the following cases:
- A string variable consisting of only a few different values. Converting such a string variable to a categorical variable will save some memory, see here.
- The lexical order of a variable is not the same as the logical order (“one”, “two”, “three”). By converting to a categorical and specifying an order on the categories, sorting and min/max will use the logical order instead of the lexical order, see here.
- As a signal to other python libraries that this column should be treated as a categorical variable (e.g. to use suitable statistical methods or plot types).
See also the API docs on categoricals.
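The logical-order use case above can be sketched as follows (a minimal sketch on a current pandas API; `set_categories` with `ordered=True` is assumed available):

```python
import pandas as pd

# Lexically "one" < "three" < "two"; a categorical can impose the logical order.
s = pd.Series(["one", "two", "three", "one"]).astype("category")
s = s.cat.set_categories(["one", "two", "three"], ordered=True)
print(s.min(), s.max())          # min/max now follow the logical order
print(s.sort_values().tolist())  # sorted logically, not lexically
```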
Object Creation¶
Categorical Series or columns in a DataFrame can be created in several ways:
By specifying dtype="category" when constructing a Series:
In [1]: s = Series(["a","b","c","a"], dtype="category")
In [2]: s
Out[2]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
By converting an existing Series or column to a category dtype:
In [3]: df = DataFrame({"A":["a","b","c","a"]})
In [4]: df["B"] = df["A"].astype('category')
In [5]: df
Out[5]:
A B
0 a a
1 b b
2 c c
3 a a
By using some special functions:
In [6]: df = DataFrame({'value': np.random.randint(0, 100, 20)})
In [7]: labels = [ "{0} - {1}".format(i, i + 9) for i in range(0, 100, 10) ]
In [8]: df['group'] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)
In [9]: df.head(10)
Out[9]:
value group
0 65 60 - 69
1 49 40 - 49
2 56 50 - 59
3 43 40 - 49
4 43 40 - 49
5 91 90 - 99
6 32 30 - 39
7 87 80 - 89
8 36 30 - 39
9 8 0 - 9
See documentation for cut().
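As a smaller, deterministic sketch of the binning above (three hand-picked values instead of random ones):

```python
import pandas as pd

# three example values; labels name the ten-point bins as in the block above
df = pd.DataFrame({"value": [65, 8, 43]})
labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]
df["group"] = pd.cut(df["value"], range(0, 105, 10), right=False, labels=labels)
print(df["group"].tolist())      # ['60 - 69', '0 - 9', '40 - 49']
print(df["group"].cat.ordered)   # cut returns an ordered categorical
```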
By passing a pandas.Categorical object to a Series or assigning it to a DataFrame.
In [10]: raw_cat = Categorical(["a","b","c","a"], categories=["b","c","d"],
....: ordered=False)
....:
In [11]: s = Series(raw_cat)
In [12]: s
Out[12]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): [b, c, d]
In [13]: df = DataFrame({"A":["a","b","c","a"]})
In [14]: df["B"] = raw_cat
In [15]: df
Out[15]:
A B
0 a NaN
1 b b
2 c c
3 a NaN
You can also specify differently ordered categories or make the resulting data ordered, by passing these arguments to astype():
In [16]: s = Series(["a","b","c","a"])
In [17]: s_cat = s.astype("category", categories=["b","c","d"], ordered=False)
In [18]: s_cat
Out[18]:
0 NaN
1 b
2 c
3 NaN
dtype: category
Categories (3, object): [b, c, d]
Categorical data has a specific category dtype:
In [19]: df.dtypes
Out[19]:
A object
B category
dtype: object
Note
In contrast to R’s factor function, categorical data does not convert input values to strings; categories will end up with the same data type as the original values.
Note
In contrast to R’s factor function, there is currently no way to assign/change labels at creation time. Use categories to change the categories after creation time.
To get back to the original Series or numpy array, use Series.astype(original_dtype) or np.asarray(categorical):
In [20]: s = Series(["a","b","c","a"])
In [21]: s
Out[21]:
0 a
1 b
2 c
3 a
dtype: object
In [22]: s2 = s.astype('category')
In [23]: s2
Out[23]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
In [24]: s3 = s2.astype('string')
In [25]: s3
Out[25]:
0 a
1 b
2 c
3 a
dtype: object
In [26]: np.asarray(s2)
Out[26]: array(['a', 'b', 'c', 'a'], dtype=object)
If you already have codes and categories, you can use the from_codes() constructor to save the factorize step during normal constructor mode:
In [27]: splitter = np.random.choice([0,1], 5, p=[0.5,0.5])
In [28]: s = Series(Categorical.from_codes(splitter, categories=["train", "test"]))
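With fixed codes instead of a random splitter, the round trip between codes and values looks like this (a sketch; -1 in the codes would stand for NaN):

```python
import pandas as pd

# codes index into the categories list
cat = pd.Categorical.from_codes([0, 1, 1, 0], categories=["train", "test"])
s = pd.Series(cat)
print(s.tolist())            # ['train', 'test', 'test', 'train']
print(s.cat.codes.tolist())  # [0, 1, 1, 0]
```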
Description¶
Using .describe() on categorical data will produce similar output to a Series or DataFrame of type string.
In [29]: cat = Categorical(["a","c","c",np.nan], categories=["b","a","c",np.nan] )
In [30]: df = DataFrame({"cat":cat, "s":["a","c","c",np.nan]})
In [31]: df.describe()
Out[31]:
cat s
count 3 3
unique 2 2
top c c
freq 2 2
In [32]: df["cat"].describe()
Out[32]:
count 3
unique 2
top c
freq 2
Name: cat, dtype: object
Working with categories¶
Categorical data has a categories and an ordered property, which list the possible values and whether the ordering matters. These properties are exposed as s.cat.categories and s.cat.ordered. If you don’t manually specify categories and ordering, they are inferred from the passed-in values.
In [33]: s = Series(["a","b","c","a"], dtype="category")
In [34]: s.cat.categories
Out[34]: Index([u'a', u'b', u'c'], dtype='object')
In [35]: s.cat.ordered
Out[35]: False
It’s also possible to pass in the categories in a specific order:
In [36]: s = Series(Categorical(["a","b","c","a"], categories=["c","b","a"]))
In [37]: s.cat.categories
Out[37]: Index([u'c', u'b', u'a'], dtype='object')
In [38]: s.cat.ordered
Out[38]: False
Note
New categorical data are NOT automatically ordered. You must explicitly pass ordered=True to indicate an ordered Categorical.
Renaming categories¶
Renaming categories is done by assigning new values to the Series.cat.categories property or by using the Categorical.rename_categories() method:
In [39]: s = Series(["a","b","c","a"], dtype="category")
In [40]: s
Out[40]:
0 a
1 b
2 c
3 a
dtype: category
Categories (3, object): [a, b, c]
In [41]: s.cat.categories = ["Group %s" % g for g in s.cat.categories]
In [42]: s
Out[42]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
In [43]: s.cat.rename_categories([1,2,3])
Out[43]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [1, 2, 3]
Note
In contrast to R’s factor, categorical data can have categories of other types than string.
Note
Be aware that assigning new categories is an inplace operation, while most other operations under Series.cat by default return a new Series of dtype category.
Categories must be unique or a ValueError is raised:
In [44]: try:
....: s.cat.categories = [1,1,1]
....: except ValueError as e:
....: print("ValueError: " + str(e))
....:
ValueError: Categorical categories must be unique
Appending new categories¶
Appending categories can be done by using the Categorical.add_categories() method:
In [45]: s = s.cat.add_categories([4])
In [46]: s.cat.categories
Out[46]: Index([u'Group a', u'Group b', u'Group c', 4], dtype='object')
In [47]: s
Out[47]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (4, object): [Group a, Group b, Group c, 4]
Removing categories¶
Removing categories can be done by using the Categorical.remove_categories() method. Values which are removed are replaced by np.nan:
In [48]: s = s.cat.remove_categories([4])
In [49]: s
Out[49]:
0 Group a
1 Group b
2 Group c
3 Group a
dtype: category
Categories (3, object): [Group a, Group b, Group c]
Removing unused categories¶
Removing unused categories can also be done:
In [50]: s = Series(Categorical(["a","b","a"], categories=["a","b","c","d"]))
In [51]: s
Out[51]:
0 a
1 b
2 a
dtype: category
Categories (4, object): [a, b, c, d]
In [52]: s.cat.remove_unused_categories()
Out[52]:
0 a
1 b
2 a
dtype: category
Categories (2, object): [a, b]
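The add/remove/remove-unused steps above compose naturally, since each `.cat` method returns a new Series (a minimal sketch):

```python
import pandas as pd

s = pd.Series(["a", "b", "a"], dtype="category")
s = s.cat.add_categories(["d"])        # categories are now a, b, d
s = s.cat.remove_categories(["b"])     # "b" values become NaN
print(s.isna().tolist())               # [False, True, False]
s = s.cat.remove_unused_categories()   # drops the never-used "d"
print(s.cat.categories.tolist())       # ['a']
```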
Setting categories¶
If you want to remove and add new categories in one step (which has some speed advantage), or simply set the categories to a predefined scale, use Categorical.set_categories().
In [53]: s = Series(["one","two","four", "-"], dtype="category")
In [54]: s
Out[54]:
0 one
1 two
2 four
3 -
dtype: category
Categories (4, object): [-, four, one, two]
In [55]: s = s.cat.set_categories(["one","two","three","four"])
In [56]: s
Out[56]:
0 one
1 two
2 four
3 NaN
dtype: category
Categories (4, object): [one, two, three, four]
Note
Be aware that Categorical.set_categories() cannot know whether some category is omitted intentionally or because it is misspelled or (under Python 3) due to a type difference (e.g., numpy’s S1 dtype and Python strings). This can result in surprising behaviour!
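The misspelling gotcha in the note can be made concrete (a sketch; "tvo" is a deliberate typo):

```python
import pandas as pd

s = pd.Series(["one", "two"], dtype="category")
# a typo in the new scale silently maps "two" to NaN:
s2 = s.cat.set_categories(["one", "tvo"])
print(s2.isna().tolist())   # [False, True]
```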
Sorting and Order¶
Warning
The default for construction has changed in v0.16.0 to ordered=False, from the prior implicit ordered=True.
If categorical data is ordered (s.cat.ordered == True), then the order of the categories has a meaning and certain operations are possible. If the categorical is unordered, .min()/.max() will raise a TypeError.
In [57]: s = Series(Categorical(["a","b","c","a"], ordered=False))
In [58]: s.sort()
In [59]: s = Series(["a","b","c","a"]).astype('category', ordered=True)
In [60]: s.sort()
In [61]: s
Out[61]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a < b < c]
In [62]: s.min(), s.max()
Out[62]: ('a', 'c')
You can set categorical data to be ordered by using as_ordered() or unordered by using as_unordered(). These will by default return a new object.
In [63]: s.cat.as_ordered()
Out[63]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a < b < c]
In [64]: s.cat.as_unordered()
Out[64]:
0 a
3 a
1 b
2 c
dtype: category
Categories (3, object): [a, b, c]
Sorting will use the order defined by categories, not any lexical order of the data type. This is true even for strings and numeric data:
In [65]: s = Series([1,2,3,1], dtype="category")
In [66]: s = s.cat.set_categories([2,3,1], ordered=True)
In [67]: s
Out[67]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [68]: s.sort()
In [69]: s
Out[69]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [70]: s.min(), s.max()
Out[70]: (2, 1)
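The custom-order sort above can be reproduced as follows (a sketch; `sort_values` is the spelling later pandas releases use for the `sort()` calls shown here):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 1], dtype="category")
s = s.cat.set_categories([2, 3, 1], ordered=True)
print(s.sort_values().tolist())   # [2, 3, 1, 1] -- category order, not numeric
print((s.min(), s.max()))         # (2, 1)
```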
Reordering¶
Reordering the categories is possible via the Categorical.reorder_categories() and the Categorical.set_categories() methods. For Categorical.reorder_categories(), all old categories must be included in the new categories and no new categories are allowed. This will necessarily make the sort order the same as the categories order.
In [71]: s = Series([1,2,3,1], dtype="category")
In [72]: s = s.cat.reorder_categories([2,3,1], ordered=True)
In [73]: s
Out[73]:
0 1
1 2
2 3
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [74]: s.sort()
In [75]: s
Out[75]:
1 2
2 3
0 1
3 1
dtype: category
Categories (3, int64): [2 < 3 < 1]
In [76]: s.min(), s.max()
Out[76]: (2, 1)
Note
Note the difference between assigning new categories and reordering the categories: the first renames categories and therefore the individual values in the Series, but if the first position was sorted last, the renamed value will still be sorted last. Reordering means that the way values are sorted is different afterwards, but not that individual values in the Series are changed.
Note
If the Categorical is not ordered, Series.min() and Series.max() will raise TypeError. Numeric operations like +, -, *, / and operations based on them (e.g. Series.median(), which would need to compute the mean between two values if the length of an array is even) do not work and raise a TypeError.
Multi Column Sorting¶
A categorical dtyped column will participate in a multi-column sort in a similar manner to other columns. The ordering of the categorical is determined by the categories of that column.
In [77]: dfs = DataFrame({'A' : Categorical(list('bbeebbaa'),categories=['e','a','b'],ordered=True),
....: 'B' : [1,2,1,2,2,1,2,1] })
....:
In [78]: dfs.sort(['A','B'])
Out[78]:
A B
2 e 1
3 e 2
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
Reordering the categories changes a future sort.
In [79]: dfs['A'] = dfs['A'].cat.reorder_categories(['a','b','e'])
In [80]: dfs.sort(['A','B'])
Out[80]:
A B
7 a 1
6 a 2
0 b 1
5 b 1
1 b 2
4 b 2
2 e 1
3 e 2
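The multi-column sort can be sketched like this (using `sort_values`, the spelling later pandas releases use for the `sort()` calls above):

```python
import pandas as pd

dfs = pd.DataFrame({
    "A": pd.Categorical(list("bbeebbaa"), categories=["e", "a", "b"], ordered=True),
    "B": [1, 2, 1, 2, 2, 1, 2, 1],
})
out = dfs.sort_values(by=["A", "B"])
# A sorts in category order e < a < b; B breaks ties within each group
print(out["A"].tolist())   # ['e', 'e', 'a', 'a', 'b', 'b', 'b', 'b']
```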
Comparisons¶
Comparing categorical data with other objects is possible in three cases:
- comparing equality (== and !=) to a list-like object (list, Series, array, ...) of the same length as the categorical data.
- all comparisons (==, !=, >, >=, <, and <=) of categorical data to another categorical Series, when ordered==True and the categories are the same.
- all comparisons of a categorical data to a scalar.
All other comparisons, especially “non-equality” comparisons of two categoricals with different categories or a categorical with any list-like object, will raise a TypeError.
Note
Any “non-equality” comparisons of categorical data with a Series, np.array, list or categorical data with different categories or ordering will raise a TypeError because custom categories ordering could be interpreted in two ways: one taking the ordering into account and one not.
In [81]: cat = Series([1,2,3]).astype("category", categories=[3,2,1], ordered=True)
In [82]: cat_base = Series([2,2,2]).astype("category", categories=[3,2,1], ordered=True)
In [83]: cat_base2 = Series([2,2,2]).astype("category", ordered=True)
In [84]: cat
Out[84]:
0 1
1 2
2 3
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [85]: cat_base
Out[85]:
0 2
1 2
2 2
dtype: category
Categories (3, int64): [3 < 2 < 1]
In [86]: cat_base2
Out[86]:
0 2
1 2
2 2
dtype: category
Categories (1, int64): [2]
Comparing to a categorical with the same categories and ordering or to a scalar works:
In [87]: cat > cat_base
Out[87]:
0 True
1 False
2 False
dtype: bool
In [88]: cat > 2
Out[88]:
0 False
1 False
2 True
dtype: bool
Equality comparisons work with any list-like object of the same length and with scalars:
In [89]: cat == cat_base
Out[89]:
0 False
1 True
2 False
dtype: bool
In [90]: cat == np.array([1,2,3])
Out[90]:
0 True
1 True
2 True
dtype: bool
In [91]: cat == 2
Out[91]:
0 False
1 True
2 False
dtype: bool
This doesn’t work because the categories are not the same:
In [92]: try:
....: cat > cat_base2
....: except TypeError as e:
....: print("TypeError: " + str(e))
....:
TypeError: Categoricals can only be compared if 'categories' are the same
If you want to do a “non-equality” comparison of a categorical series with a list-like object which is not categorical data, you need to be explicit and convert the categorical data back to the original values:
In [93]: base = np.array([1,2,3])
In [94]: try:
....: cat > base
....: except TypeError as e:
....: print("TypeError: " + str(e))
....:
TypeError: Cannot compare a Categorical for op __gt__ with type <type 'numpy.ndarray'>. If you want to
compare values, use 'np.asarray(cat) <op> other'.
In [95]: np.asarray(cat) > base
Out[95]: array([False, False, False], dtype=bool)
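The two working cases can be condensed into a short sketch: equality against a plain array is fine, while ordering comparisons need the raw values:

```python
import numpy as np
import pandas as pd

cat = pd.Series(pd.Categorical([1, 2, 3], categories=[3, 2, 1], ordered=True))
base = np.array([1, 2, 3])
# equality works against any same-length list-like:
print((cat == base).tolist())              # [True, True, True]
# ordering comparisons must go through the underlying values:
print((np.asarray(cat) > base).tolist())   # [False, False, False]
```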
Operations¶
Apart from Series.min(), Series.max() and Series.mode(), the following operations are possible with categorical data:
Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data:
In [96]: s = Series(Categorical(["a","b","c","c"], categories=["c","a","b","d"]))
In [97]: s.value_counts()
Out[97]:
c 2
b 1
a 1
d 0
dtype: int64
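The all-categories behaviour of value_counts can be checked directly (a minimal sketch):

```python
import pandas as pd

s = pd.Series(pd.Categorical(["a", "b", "c", "c"], categories=["c", "a", "b", "d"]))
counts = s.value_counts()
print(counts["d"])   # 0 -- the unused category "d" still appears
print(counts["c"])   # 2
```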
Groupby will also show “unused” categories:
In [98]: cats = Categorical(["a","b","b","b","c","c","c"], categories=["a","b","c","d"])
In [99]: df = DataFrame({"cats":cats,"values":[1,2,2,2,3,4,5]})
In [100]: df.groupby("cats").mean()
Out[100]:
values
cats
a 1
b 2
c 4
d NaN
In [101]: cats2 = Categorical(["a","a","b","b"], categories=["a","b","c"])
In [102]: df2 = DataFrame({"cats":cats2,"B":["c","d","c","d"], "values":[1,2,3,4]})
In [103]: df2.groupby(["cats","B"]).mean()
Out[103]:
values
cats B
a c 1
d 2
b c 3
d 4
c c NaN
d NaN
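A smaller groupby sketch showing the unused "c" group (note: the `observed` keyword is an addition from later pandas releases; `observed=False` reproduces the 0.15-era default of keeping unused categories):

```python
import pandas as pd

cats = pd.Categorical(["a", "b", "b"], categories=["a", "b", "c"])
df = pd.DataFrame({"cats": cats, "values": [1, 2, 4]})
out = df.groupby("cats", observed=False)["values"].mean()
print(out.to_dict())   # "c" appears with a NaN mean
```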
Pivot tables:
In [104]: raw_cat = Categorical(["a","a","b","b"], categories=["a","b","c"])
In [105]: df = DataFrame({"A":raw_cat,"B":["c","d","c","d"], "values":[1,2,3,4]})
In [106]: pd.pivot_table(df, values='values', index=['A', 'B'])
Out[106]:
A B
a c 1
d 2
b c 3
d 4
c c NaN
d NaN
Name: values, dtype: float64
Data munging¶
The optimized pandas data access methods .loc, .iloc, .ix, .at, and .iat work as normal; the only differences are the return type (for getting) and that only values already in categories can be assigned.
Getting¶
If the slicing operation returns either a DataFrame or a column of type Series, the category dtype is preserved.
In [107]: idx = Index(["h","i","j","k","l","m","n",])
In [108]: cats = Series(["a","b","b","b","c","c","c"], dtype="category", index=idx)
In [109]: values= [1,2,2,2,3,4,5]
In [110]: df = DataFrame({"cats":cats,"values":values}, index=idx)
In [111]: df.iloc[2:4,:]
Out[111]:
cats values
j b 2
k b 2
In [112]: df.iloc[2:4,:].dtypes
Out[112]:
cats category
values int64
dtype: object
In [113]: df.loc["h":"j","cats"]
Out[113]:
h a
i b
j b
Name: cats, dtype: category
Categories (3, object): [a, b, c]
In [114]: df.ix["h":"j",0:1]
Out[114]:
cats
h a
i b
j b
In [115]: df[df["cats"] == "b"]
Out[115]:
cats values
i b 2
j b 2
k b 2
An example where the category type is not preserved is if you take one single row: the resulting Series is of dtype object:
# get the complete "h" row as a Series
In [116]: df.loc["h", :]
Out[116]:
cats a
values 1
Name: h, dtype: object
Returning a single item from categorical data will also return the value, not a categorical of length “1”.
In [117]: df.iat[0,0]
Out[117]: 'a'
In [118]: df["cats"].cat.categories = ["x","y","z"]
In [119]: df.at["h","cats"] # returns a string
Out[119]: 'x'
Note
This is a difference to R’s factor function, where factor(c(1,2,3))[1] returns a single value factor.
To get a single-value Series of type category, pass in a list with a single value:
In [120]: df.loc[["h"],"cats"]
Out[120]:
h x
Name: cats, dtype: category
Categories (3, object): [x, y, z]
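Both access patterns can be compared side by side (a sketch with a two-row frame):

```python
import pandas as pd

df = pd.DataFrame({"cats": pd.Categorical(["a", "b"])}, index=["h", "i"])
print(df.at["h", "cats"])                 # 'a' -- a plain value
print(str(df.loc[["h"], "cats"].dtype))   # 'category' -- a length-1 Series
```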
Setting¶
Setting values in a categorical column (or Series) works as long as the value is included in the categories:
In [121]: idx = Index(["h","i","j","k","l","m","n"])
In [122]: cats = Categorical(["a","a","a","a","a","a","a"], categories=["a","b"])
In [123]: values = [1,1,1,1,1,1,1]
In [124]: df = DataFrame({"cats":cats,"values":values}, index=idx)
In [125]: df.iloc[2:4,:] = [["b",2],["b",2]]
In [126]: df
Out[126]:
cats values
h a 1
i a 1
j b 2
k b 2
l a 1
m a 1
n a 1
In [127]: try:
.....: df.iloc[2:4,:] = [["c",3],["c",3]]
.....: except ValueError as e:
.....: print("ValueError: " + str(e))
.....:
ValueError: cannot setitem on a Categorical with a new category, set the categories first
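A compact sketch of the same rule on a single Series (the exception class for the failing case has varied across pandas versions, so both are caught):

```python
import pandas as pd

s = pd.Series(pd.Categorical(["a", "a", "a"], categories=["a", "b"]))
s.iloc[1] = "b"            # fine: "b" is already a category
print(s.tolist())          # ['a', 'b', 'a']
try:
    s.iloc[2] = "c"        # not a category -> raises
except (ValueError, TypeError) as e:
    print(type(e).__name__)
```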
Setting values by assigning categorical data will also check that the categories match:
In [128]: df.loc["j":"k","cats"] = Categorical(["a","a"], categories=["a","b"])
In [129]: df
Out[129]:
cats values
h a 1
i a 1
j a 2
k a 2
l a 1
m a 1
n a 1
In [130]: try:
.....: df.loc["j":"k","cats"] = Categorical(["b","b"], categories=["a","b","c"])
.....: except ValueError as e:
.....: print("ValueError: " + str(e))
.....:
ValueError: Cannot set a Categorical with another, without identical categories
Assigning a Categorical to parts of a column of other types will use the values:
In [131]: df = DataFrame({"a":[1,1,1,1,1], "b":["a","a","a","a","a"]})
In [132]: df.loc[1:2,"a"] = Categorical(["b","b"], categories=["a","b"])
In [133]: df.loc[2:3,"b"] = Categorical(["b","b"], categories=["a","b"])
In [134]: df
Out[134]:
a b
0 1 a
1 b a
2 b b
3 1 b
4 1 a
In [135]: df.dtypes
Out[135]:
a object
b object
dtype: object
Merging¶
You can concat two DataFrames containing categorical data together, but the categories of these categoricals need to be the same:
In [136]: cat = Series(["a","b"], dtype="category")
In [137]: vals = [1,2]
In [138]: df = DataFrame({"cats":cat, "vals":vals})
In [139]: res = pd.concat([df,df])
In [140]: res
Out[140]:
cats vals
0 a 1
1 b 2
0 a 1
1 b 2
In [141]: res.dtypes
Out[141]:
cats category
vals int64
dtype: object
In this case the categories are not the same and so an error is raised:
In [142]: df_different = df.copy()
In [143]: df_different["cats"].cat.categories = ["c","d"]
In [144]: try:
.....: pd.concat([df,df_different])
.....: except ValueError as e:
.....: print("ValueError: " + str(e))
.....:
ValueError: incompatible categories in categorical concat
The same applies to df.append(df_different).
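The matching-categories case can be verified directly (a minimal sketch):

```python
import pandas as pd

df = pd.DataFrame({"cats": pd.Series(["a", "b"], dtype="category"),
                   "vals": [1, 2]})
res = pd.concat([df, df])
print(str(res["cats"].dtype))   # category -- preserved when categories match
print(len(res))                 # 4
```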
Getting Data In/Out¶
New in version 0.15.2.
Writing data (Series, Frames) to a HDF store that contains a category dtype was implemented in 0.15.2. See here for an example and caveats.
Writing data to and reading data from Stata format files was implemented in 0.15.2. See here for an example and caveats.
Writing to a CSV file will convert the data, effectively removing any information about the categorical (categories and ordering). So if you read back the CSV file you have to convert the relevant columns back to category and assign the right categories and category ordering.
In [145]: s = Series(Categorical(['a', 'b', 'b', 'a', 'a', 'd']))
# rename the categories
In [146]: s.cat.categories = ["very good", "good", "bad"]
# reorder the categories and add missing categories
In [147]: s = s.cat.set_categories(["very bad", "bad", "medium", "good", "very good"])
In [148]: df = DataFrame({"cats":s, "vals":[1,2,3,4,5,6]})
In [149]: csv = StringIO()
In [150]: df.to_csv(csv)
In [151]: df2 = pd.read_csv(StringIO(csv.getvalue()))
In [152]: df2.dtypes
Out[152]:
Unnamed: 0 int64
cats object
vals int64
dtype: object
In [153]: df2["cats"]
Out[153]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: object
# Redo the category
In [154]: df2["cats"] = df2["cats"].astype("category")
In [155]: df2["cats"].cat.set_categories(["very bad", "bad", "medium", "good", "very good"],
.....: inplace=True)
.....:
In [156]: df2.dtypes
Out[156]:
Unnamed: 0 int64
cats category
vals int64
dtype: object
In [157]: df2["cats"]
Out[157]:
0 very good
1 good
2 good
3 very good
4 very good
5 bad
Name: cats, dtype: category
Categories (5, object): [very bad, bad, medium, good, very good]
The same holds for writing to a SQL database with to_sql.
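The round trip can be condensed into a self-contained sketch (restoring categories and ordering by hand after read_csv):

```python
import io
import pandas as pd

s = pd.Series(pd.Categorical(["good", "bad", "good"],
                             categories=["bad", "medium", "good"], ordered=True))
df = pd.DataFrame({"cats": s})
buf = io.StringIO()
df.to_csv(buf, index=False)
df2 = pd.read_csv(io.StringIO(buf.getvalue()))
print(str(df2["cats"].dtype))   # object -- the categorical info is gone
# restore the dtype, categories and ordering by hand:
df2["cats"] = df2["cats"].astype("category").cat.set_categories(
    ["bad", "medium", "good"], ordered=True)
print(str(df2["cats"].dtype))   # category
```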
Missing Data¶
pandas primarily uses the value np.nan to represent missing data. It is by default not included in computations. See the Missing Data section.
There are two ways a np.nan can be represented in categorical data: either the value is not available (“missing value”) or np.nan is a valid category.
In [158]: s = Series(["a","b",np.nan,"a"], dtype="category")
# only two categories
In [159]: s
Out[159]:
0 a
1 b
2 NaN
3 a
dtype: category
Categories (2, object): [a, b]
In [160]: s2 = Series(["a","b","c","a"], dtype="category")
In [161]: s2.cat.categories = [1,2,np.nan]
# three categories, np.nan included
In [162]: s2
Out[162]:
0 1
1 2
2 NaN
3 1
dtype: category
Categories (3, object): [1, 2, NaN]
Note
As integer Series can’t include NaN, the categories were converted to object.
Note
Missing value methods like isnull and fillna will take both missing values as well as np.nan categories into account:
In [163]: c = Series(["a","b",np.nan], dtype="category")
In [164]: c.cat.set_categories(["a","b",np.nan], inplace=True)
# will be inserted as a NA category:
In [165]: c[0] = np.nan
In [166]: s = Series(c)
In [167]: s
Out[167]:
0 NaN
1 b
2 NaN
dtype: category
Categories (3, object): [a, b, NaN]
In [168]: pd.isnull(s)
Out[168]:
0 True
1 False
2 True
dtype: bool
In [169]: s.fillna("a")
Out[169]:
0 a
1 b
2 a
dtype: category
Categories (3, object): [a, b, NaN]
Differences to R’s factor¶
The following differences to R’s factor functions can be observed:
- R’s levels are named categories
- R’s levels are always of type string, while categories in pandas can be of any dtype.
- It’s not possible to specify labels at creation time. Use s.cat.rename_categories(new_labels) afterwards.
- In contrast to R’s factor function, using categorical data as the sole input to create a new categorical series will not remove unused categories but create a new categorical series which is equal to the passed in one!
Gotchas¶
Memory Usage¶
The memory usage of a Categorical is proportional to the number of categories times the length of the data. In contrast, an object dtype is a constant times the length of the data.
In [170]: s = Series(['foo','bar']*1000)
# object dtype
In [171]: s.nbytes
Out[171]: 8000
# category dtype
In [172]: s.astype('category').nbytes
Out[172]: 2008
Note
If the number of categories approaches the length of the data, the Categorical will use nearly the same amount of memory as (or more than) an equivalent object dtype representation.
In [173]: s = Series(['foo%04d' % i for i in range(2000)])
# object dtype
In [174]: s.nbytes
Out[174]: 8000
# category dtype
In [175]: s.astype('category').nbytes
Out[175]: 12000
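The few-categories savings can be checked without relying on exact byte counts, which vary by platform and pandas version (a sketch):

```python
import pandas as pd

s = pd.Series(["foo", "bar"] * 1000)
# 2 categories for 2000 rows: the small-integer codes array costs far less
# than one pointer per element under the object dtype
print(s.astype("category").nbytes < s.nbytes)   # True
```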
Old style constructor usage¶
In versions earlier than pandas 0.15, a Categorical could be constructed by passing in precomputed codes (then called labels) instead of values with categories. The codes were interpreted as pointers to the categories, with -1 as NaN. This type of constructor usage is replaced by the special constructor Categorical.from_codes().
Unfortunately, in some special cases code which assumes the old style constructor usage will still run under the current pandas version without error, resulting in subtle bugs:
>>> cat = Categorical([1,2], [1,2,3])
>>> # old version
>>> cat.get_values()
array([2, 3], dtype=int64)
>>> # new version
>>> cat.get_values()
array([1, 2], dtype=int64)
Warning
If you used Categoricals with older versions of pandas, please audit your code before upgrading and change your code to use the from_codes() constructor.
Categorical is not a numpy array¶
Currently, categorical data and the underlying Categorical is implemented as a Python object and not as a low-level numpy array dtype. This leads to some problems.
numpy itself doesn’t know about the new dtype:
In [176]: try:
.....: np.dtype("category")
.....: except TypeError as e:
.....: print("TypeError: " + str(e))
.....:
TypeError: data type "category" not understood
In [177]: dtype = Categorical(["a"]).dtype
In [178]: try:
.....: np.dtype(dtype)
.....: except TypeError as e:
.....: print("TypeError: " + str(e))
.....:
TypeError: data type not understood
Dtype comparisons work:
In [179]: dtype == np.str_
Out[179]: False
In [180]: np.str_ == dtype
Out[180]: False
To check if a Series contains Categorical data, with pandas 0.16 or later, use hasattr(s, 'cat'):
In [181]: hasattr(Series(['a'], dtype='category'), 'cat')
Out[181]: True
In [182]: hasattr(Series(['a']), 'cat')
Out[182]: False
Using numpy functions on a Series of type category should not work as Categoricals are not numeric data (even in the case that .categories is numeric).
In [183]: s = Series(Categorical([1,2,3,4]))
In [184]: try:
.....: np.sum(s)
.....: except TypeError as e:
.....: print("TypeError: " + str(e))
.....:
TypeError: Categorical cannot perform the operation sum
Note
If such a function works, please file a bug at https://github.com/pydata/pandas!
dtype in apply¶
pandas currently does not preserve the dtype in apply functions: if you apply along rows you get a Series of object dtype (same as getting a row: getting one element will return a basic type), and applying along columns will also convert to object.
In [185]: df = DataFrame({"a":[1,2,3,4],
.....: "b":["a","b","c","d"],
.....: "cats":Categorical([1,2,3,2])})
.....:
In [186]: df.apply(lambda row: type(row["cats"]), axis=1)
Out[186]:
0 <type 'long'>
1 <type 'long'>
2 <type 'long'>
3 <type 'long'>
dtype: object
In [187]: df.apply(lambda col: col.dtype, axis=0)
Out[187]:
a object
b object
cats object
dtype: object
No Categorical Index¶
There is currently no index of type category, so setting the index to a categorical column will convert the categorical data to a “normal” dtype first and therefore remove any custom ordering of the categories:
In [188]: cats = Categorical([1,2,3,4], categories=[4,2,3,1])
In [189]: strings = ["a","b","c","d"]
In [190]: values = [4,2,3,1]
In [191]: df = DataFrame({"strings":strings, "values":values}, index=cats)
In [192]: df.index
Out[192]: Int64Index([1, 2, 3, 4], dtype='int64')
# This should sort by categories but does not as there is no CategoricalIndex!
In [193]: df.sort_index()
Out[193]:
strings values
1 a 4
2 b 2
3 c 3
4 d 1
Note
This could change if a CategoricalIndex is implemented (see https://github.com/pydata/pandas/issues/7629)
Side Effects¶
Constructing a Series from a Categorical will not copy the input Categorical. This means that changes to the Series will in most cases change the original Categorical:
In [194]: cat = Categorical([1,2,3,10], categories=[1,2,3,4,10])
In [195]: s = Series(cat, name="cat")
In [196]: cat
Out[196]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [197]: s.iloc[0:2] = 10
In [198]: cat
Out[198]:
[10, 10, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [199]: df = DataFrame(s)
In [200]: df["cat"].cat.categories = [1,2,3,4,5]
In [201]: cat
Out[201]:
[5, 5, 3, 5]
Categories (5, int64): [1, 2, 3, 4, 5]
Use copy=True to prevent such behaviour, or simply don’t reuse Categoricals:
In [202]: cat = Categorical([1,2,3,10], categories=[1,2,3,4,10])
In [203]: s = Series(cat, name="cat", copy=True)
In [204]: cat
Out[204]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
In [205]: s.iloc[0:2] = 10
In [206]: cat
Out[206]:
[1, 2, 3, 10]
Categories (5, int64): [1, 2, 3, 4, 10]
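The defensive-copy case holds regardless of version and can be checked directly (a minimal sketch):

```python
import pandas as pd

cat = pd.Categorical([1, 2, 3, 10], categories=[1, 2, 3, 4, 10])
s = pd.Series(cat, name="cat", copy=True)   # defensive copy
s.iloc[0:2] = 10
print(list(cat))   # [1, 2, 3, 10] -- the original is untouched
print(s.tolist())  # [10, 10, 3, 10]
```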
Note
This also happens in some cases when you supply a numpy array instead of a Categorical: using an int array (e.g. np.array([1,2,3,4])) will exhibit the same behaviour, while using a string array (e.g. np.array(["a","b","c","a"])) will not.