pandas.core.groupby.SeriesGroupBy.value_counts#
- SeriesGroupBy.value_counts(normalize=False, sort=True, ascending=False, bins=None, dropna=True)[source]#
Return a Series or DataFrame containing counts of unique rows.
Added in version 1.4.0.
- Parameters:
- normalizebool, default False
Return proportions rather than frequencies.
- sortbool, default True
Sort by frequencies.
- ascendingbool, default False
Sort in ascending order; see the sketch below the Returns section.
- binsint or list of ints, optional
Rather than count values, group them into half-open bins. This is a convenience for pd.cut and only works with numeric data.
- dropnabool, default True
Don’t include counts of rows that contain NA values.
- Returns:
- Series or DataFrame
Series if the groupby as_index is True, otherwise DataFrame.
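As a minimal sketch of how sort and ascending interact (using a hypothetical series t, not one of the examples below): with the default sort=True, passing ascending=True orders each group from the least to the most frequent value.
>>> t = pd.Series([1, 2, 2, 2, 5, 5], index=["X"] * 6)  # hypothetical data: 1 occurs once, 5 twice, 2 three times
>>> t.groupby(t.index).value_counts(ascending=True)
X  1    1
   5    2
   2    3
Name: count, dtype: int64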
See also
Series.value_counts
Equivalent method on Series.
DataFrame.value_counts
Equivalent method on DataFrame.
DataFrameGroupBy.value_counts
Equivalent method on DataFrameGroupBy.
Notes
If the groupby as_index is True then the returned Series will have a MultiIndex with one level per input column.
If the groupby as_index is False then the returned DataFrame will have an additional column with the value_counts. The column is labelled ‘count’ or ‘proportion’, depending on the normalize parameter.
By default, rows that contain any NA values are omitted from the result.
By default, the result will be in descending order so that the first element of each group is the most frequently-occurring row.
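As a minimal sketch of the dropna behaviour, assuming a hypothetical series u with one missing value: passing dropna=False gives the NA value its own count, whereas the default dropna=True would omit it.
>>> u = pd.Series([1.0, 1.0, None], index=["A", "A", "A"])  # hypothetical data containing an NA value
>>> u.groupby(u.index).value_counts(dropna=False)
A  1.0    2
   NaN    1
Name: count, dtype: int64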
Examples
>>> s = pd.Series(
...     [1, 1, 2, 3, 2, 3, 3, 1, 1, 3, 3, 3],
...     index=["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
... )
>>> s
A    1
A    1
A    2
A    3
A    2
A    3
B    3
B    1
B    1
B    3
B    3
B    3
dtype: int64

>>> g1 = s.groupby(s.index)
>>> g1.value_counts(bins=2)
A  (0.997, 2.0]    4
   (2.0, 3.0]      2
B  (2.0, 3.0]      4
   (0.997, 2.0]    2
Name: count, dtype: int64

>>> g1.value_counts(normalize=True)
A  1    0.333333
   2    0.333333
   3    0.333333
B  3    0.666667
   1    0.333333
Name: proportion, dtype: float64