Comparison with R / R libraries¶
Since pandas aims to provide much of the data manipulation and analysis functionality for which people use R, this page provides a more detailed look at the R language and its many third-party libraries as they relate to pandas. In comparisons with R and CRAN libraries, we care about the following things:
- Functionality / flexibility: what can/cannot be done with each tool
- Performance: how fast are operations. Hard numbers/benchmarks are preferable
- Ease-of-use: Is one tool easier/harder to use (you may have to be the judge of this, given side-by-side code comparisons)
This page is also here to offer a bit of a translation guide for users of these R packages.
Base R¶
Slicing with R’s c¶
R makes it easy to access data.frame columns by name
df <- data.frame(a=rnorm(5), b=rnorm(5), c=rnorm(5), d=rnorm(5), e=rnorm(5))
df[, c("a", "c", "e")]
or by integer location
df <- data.frame(matrix(rnorm(1000), ncol=100))
df[, c(1:10, 25:30, 40, 50:100)]
Selecting multiple columns by name in pandas is straightforward
In [1]: df = DataFrame(np.random.randn(10, 3), columns=list('abc'))
In [2]: df[['a', 'c']]
Out[2]:
a c
0 0.026731 0.007160
1 -0.668032 0.454170
2 -0.287102 0.099903
3 2.486931 1.425037
4 -0.209642 -0.327804
5 0.713264 -1.285616
6 -0.483850 -0.838260
7 0.673194 0.443537
8 -0.017685 1.556201
9 -1.717420 1.201081
In [3]: df.loc[:, ['a', 'c']]
Out[3]:
a c
0 0.026731 0.007160
1 -0.668032 0.454170
2 -0.287102 0.099903
3 2.486931 1.425037
4 -0.209642 -0.327804
5 0.713264 -1.285616
6 -0.483850 -0.838260
7 0.673194 0.443537
8 -0.017685 1.556201
9 -1.717420 1.201081
Selecting multiple noncontiguous columns by integer location can be achieved with a combination of the iloc indexer attribute and numpy.r_.
In [4]: named = list('abcdefg')
In [5]: n = 30
In [6]: columns = named + np.arange(len(named), n).tolist()
In [7]: df = DataFrame(np.random.randn(n, n), columns=columns)
In [8]: df.iloc[:, np.r_[:10, 24:30]]
Out[8]:
a b c d e f g \
0 0.980077 -0.026234 -0.170669 0.013884 0.027505 0.703105 0.019201
1 -0.854995 0.514409 -1.659617 -0.430681 -0.158656 -0.085703 0.506384
2 0.169303 0.127031 0.272596 0.604549 -2.485841 1.008586 0.181870
3 2.493078 -1.483518 -2.641568 0.667512 1.527341 -1.347010 -2.132914
4 -1.527202 0.680366 -0.373935 -2.253353 -1.046336 -0.209946 0.655528
5 0.182293 0.166052 1.008944 0.979035 -0.525153 0.442851 0.081081
6 -0.620492 1.395629 0.306483 0.433323 -0.522057 -0.808639 -0.989135
.. ... ... ... ... ... ... ...
23 -0.759966 1.525081 0.251470 -0.137568 -0.556620 0.439958 1.006958
24 0.756512 -1.622032 -0.468504 -0.656569 1.187661 0.918965 -0.926170
25 -0.285695 -1.094918 0.782185 0.026271 -0.671403 -2.323504 0.923315
26 0.747723 -0.532274 -1.982213 0.189767 0.952486 -0.629776 -0.400384
27 1.788288 -0.425494 1.867720 0.456395 -0.673380 -0.866479 -0.195216
28 -0.564100 -1.675530 1.351386 -0.415076 1.449936 -1.204615 0.718270
29 -0.403887 -0.559663 -1.589124 1.188404 0.137041 0.623148 -1.060652
7 8 9 24 25 26 27 \
0 -0.986426 1.373747 -0.380912 -0.078156 -1.679855 -0.912033 0.895000
1 1.153899 0.286916 -0.177636 -0.119904 -0.242666 -0.153091 0.571129
2 0.267965 -0.418870 -0.671424 1.444209 0.442832 -1.344020 -0.497400
3 0.779800 0.972978 -0.198472 0.583219 0.939187 -2.739487 -0.573693
4 -0.481400 -0.491840 1.895771 1.346323 0.398306 -1.886066 -0.488051
5 -0.857154 0.885747 2.136484 0.923272 -1.621497 -1.229428 0.340857
6 0.612186 0.403547 -0.352382 -0.704276 -0.201909 -0.036832 -2.269440
.. ... ... ... ... ... ... ...
23 0.092489 -0.397967 -0.758999 -1.829344 1.093811 -1.589366 -0.402728
24 1.539389 -0.471634 0.178793 0.283814 -0.634240 0.286129 0.835316
25 -1.473366 1.437720 0.874341 -0.327307 -0.235821 -1.058125 -1.137497
26 0.692474 -0.471878 1.351723 -0.346066 -0.056739 -1.497562 0.697053
27 -0.848694 1.359850 -1.043894 0.520447 0.951542 1.199311 0.361042
28 -0.431669 -0.451198 0.724921 0.290519 -0.210387 -0.236078 1.614966
29 1.281611 1.356417 -1.105205 -1.151327 -0.869907 -0.904973 0.046126
28 29
0 0.759727 1.053398
1 1.049663 -0.200188
2 -1.255580 -0.000235
3 -1.233017 -0.803782
4 1.022238 -1.097735
5 -0.240188 -0.640714
6 0.358983 -0.582609
.. ... ...
23 0.333326 -1.036511
24 0.826629 -0.735065
25 1.768844 1.973471
26 1.246539 -0.369645
27 -1.004705 0.124648
28 0.897880 -0.840843
29 0.046003 1.314437
[30 rows x 16 columns]
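The `np.r_` helper used above concatenates slice notation into a single flat integer array, which `iloc` can then consume for positional selection. A minimal sketch (with a small hypothetical frame) of how it behaves on its own:

```python
import numpy as np
import pandas as pd

# np.r_ turns slice notation into one flat integer array ...
idx = np.r_[:3, 5:7]          # positions 0, 1, 2, 5, 6

# ... which .iloc can use for noncontiguous column selection
df = pd.DataFrame(np.arange(32).reshape(4, 8),
                  columns=list('abcdefgh'))
subset = df.iloc[:, idx]       # columns a, b, c, f, g
print(list(subset.columns))
```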
aggregate¶
In R you may want to split data into subsets and compute the mean for each subset. Using a data.frame called df and splitting it into groups by the columns by1 and by2:
df <- data.frame(
v1 = c(1,3,5,7,8,3,5,NA,4,5,7,9),
v2 = c(11,33,55,77,88,33,55,NA,44,55,77,99),
by1 = c("red", "blue", 1, 2, NA, "big", 1, 2, "red", 1, NA, 12),
by2 = c("wet", "dry", 99, 95, NA, "damp", 95, 99, "red", 99, NA, NA))
aggregate(x=df[, c("v1", "v2")], by=list(df$by1, df$by2), FUN = mean)
The groupby() method is similar to the base R aggregate function.
In [9]: from pandas import DataFrame
In [10]: df = DataFrame({
....: 'v1': [1,3,5,7,8,3,5,np.nan,4,5,7,9],
....: 'v2': [11,33,55,77,88,33,55,np.nan,44,55,77,99],
....: 'by1': ["red", "blue", 1, 2, np.nan, "big", 1, 2, "red", 1, np.nan, 12],
....: 'by2': ["wet", "dry", 99, 95, np.nan, "damp", 95, 99, "red", 99, np.nan,
....: np.nan]
....: })
....:
In [11]: g = df.groupby(['by1','by2'])
In [12]: g[['v1','v2']].mean()
Out[12]:
v1 v2
by1 by2
1 95 5 55
99 5 55
2 95 7 77
99 NaN NaN
big damp 3 33
blue dry 3 33
red red 4 44
wet 1 11
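Note that R's aggregate returns a flat data.frame, while the groupby result above is indexed by the grouping keys. A small sketch (with simplified, hypothetical data) showing how as_index=False keeps the keys as ordinary columns, closer to aggregate's output shape:

```python
import pandas as pd

df = pd.DataFrame({'v1': [1, 3, 5, 7],
                   'by1': ['red', 'blue', 'red', 'blue']})

# as_index=False keeps by1 as a column rather than the index,
# mirroring the flat shape aggregate() produces in R
flat = df.groupby('by1', as_index=False)['v1'].mean()
print(flat)
```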
For more details and examples see the groupby documentation.
match / %in%¶
A common way to select data in R is using %in% which is defined using the function match. The operator %in% is used to return a logical vector indicating if there is a match or not:
s <- 0:4
s %in% c(2,4)
The isin() method is similar to the R %in% operator:
In [13]: s = pd.Series(np.arange(5),dtype=np.float32)
In [14]: s.isin([2, 4])
Out[14]:
0 False
1 False
2 True
3 False
4 True
dtype: bool
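Because isin() returns a boolean Series, it can also be used directly as a mask to filter, mirroring the common R idiom s[s %in% c(2,4)]:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(5), dtype=np.float32)

# boolean mask from isin(), then filter -- like s[s %in% c(2,4)] in R
matched = s[s.isin([2, 4])]
print(matched.tolist())
```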
The match function returns a vector of the positions of matches of its first argument in its second:
s <- 0:4
match(s, c(2,4))
pd.match() can be used to replicate this, with NaN filling the positions that have no match:
In [15]: s = pd.Series(np.arange(5),dtype=np.float32)
In [16]: pd.Series(pd.match(s,[2,4],np.nan))
Out[16]:
0 NaN
1 NaN
2 0
3 NaN
4 1
dtype: float64
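Note that pd.match was removed in later pandas versions. A sketch of the same positional lookup using Index.get_indexer, which returns -1 (rather than NaN) for non-matches:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(5))

# for each element of s, its position in the target list;
# -1 marks "no match", where R's match() would give NA
positions = pd.Index([2, 4]).get_indexer(s)
print(positions.tolist())
```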
For more details and examples see the reshaping documentation.
tapply¶
tapply is similar to aggregate, but data can be in a ragged array, since the subclass sizes are possibly irregular. Using a data.frame called baseball, and retrieving information based on the team column:
baseball <-
data.frame(team = gl(5, 5,
labels = paste("Team", LETTERS[1:5])),
player = sample(letters, 25),
batting.average = runif(25, .200, .400))
tapply(baseball$batting.average, baseball$team,
       max)
In pandas we may use the pivot_table() method to handle this:
In [17]: import random
In [18]: import string
In [19]: baseball = DataFrame({
....: 'team': ["team %d" % (x+1) for x in range(5)]*5,
....: 'player': random.sample(list(string.ascii_lowercase),25),
....: 'batting avg': np.random.uniform(.200, .400, 25)
....: })
....:
In [20]: baseball.pivot_table(values='batting avg', columns='team', aggfunc=np.max)
Out[20]:
team
team 1 0.398633
team 2 0.387240
team 3 0.383183
team 4 0.358502
team 5 0.351390
Name: batting avg, dtype: float64
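The same per-team maximum can also be computed with groupby(), which is often the more direct translation of tapply. A sketch (the seed is a hypothetical addition for reproducibility):

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # hypothetical seed, only for reproducibility
baseball = pd.DataFrame({
    'team': ['team %d' % (x + 1) for x in range(5)] * 5,
    'batting avg': np.random.uniform(.200, .400, 25),
})

# max within each team group, like
# tapply(baseball$batting.average, baseball$team, max)
per_team_max = baseball.groupby('team')['batting avg'].max()
print(per_team_max)
```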
For more details and examples see the reshaping documentation.
subset¶
New in version 0.13.
The query() method is similar to the base R subset function. In R you might want to get the rows of a data.frame where one column’s values are less than another column’s values:
df <- data.frame(a=rnorm(10), b=rnorm(10))
subset(df, a <= b)
df[df$a <= df$b,] # note the comma
In pandas, there are a few ways to perform subsetting. You can use query() or pass an expression as if it were an index/slice as well as standard boolean indexing:
In [21]: df = DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})
In [22]: df.query('a <= b')
Out[22]:
a b
3 -0.945780 -0.571590
4 -0.761339 -0.073684
5 0.118969 0.368197
8 0.476629 0.497200
In [23]: df[df.a <= df.b]
Out[23]:
a b
3 -0.945780 -0.571590
4 -0.761339 -0.073684
5 0.118969 0.368197
8 0.476629 0.497200
In [24]: df.loc[df.a <= df.b]
Out[24]:
a b
3 -0.945780 -0.571590
4 -0.761339 -0.073684
5 0.118969 0.368197
8 0.476629 0.497200
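query() also accepts compound expressions, much like chaining conditions inside subset(). A minimal sketch with hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 4, 2, 5], 'b': [3, 3, 3, 3]})

# compound condition, equivalent to subset(df, a <= b & a > 1) in R
result = df.query('a <= b and a > 1')
print(result)
```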
For more details and examples see the query documentation.
with¶
New in version 0.13.
An expression using a data.frame called df in R with the columns a and b would be evaluated using with like so:
df <- data.frame(a=rnorm(10), b=rnorm(10))
with(df, a + b)
df$a + df$b # same as the previous expression
In pandas the equivalent expression, using the eval() method, would be:
In [25]: df = DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})
In [26]: df.eval('a + b')
Out[26]:
0 -0.445539
1 1.199896
2 -0.524619
3 -0.657901
4 2.069409
5 -1.131762
6 -0.609297
7 1.887267
8 -0.363708
9 0.607658
dtype: float64
In [27]: df.a + df.b # same as the previous expression
Out[27]:
0 -0.445539
1 1.199896
2 -0.524619
3 -0.657901
4 2.069409
5 -1.131762
6 -0.609297
7 1.887267
8 -0.363708
9 0.607658
dtype: float64
In certain cases eval() will be much faster than evaluation in pure Python. For more details and examples see the eval documentation.
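R's with is also commonly used for assignment, as in df$c <- with(df, a + b). A sketch of the analogous pattern with assign() (a hypothetical two-row frame; assign returns a new DataFrame rather than mutating in place):

```python
import pandas as pd

df = pd.DataFrame({'a': [1.0, 2.0], 'b': [3.0, 4.0]})

# assign() returns a new DataFrame with the extra column,
# analogous to df$c <- with(df, a + b) in R
df2 = df.assign(c=df['a'] + df['b'])
print(df2['c'].tolist())
```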
zoo¶
xts¶
plyr¶
plyr is an R library for the split-apply-combine strategy for data analysis. The functions revolve around three data structures in R, a for arrays, l for lists, and d for data.frame. The table below shows how these data structures could be mapped in Python.
R | Python
---|---
array | list
lists | dictionary or list of objects
data.frame | dataframe
ddply¶
An expression using a data.frame called df in R where you want to summarize x by month:
require(plyr)
df <- data.frame(
x = runif(120, 1, 168),
y = runif(120, 7, 334),
z = runif(120, 1.7, 20.7),
month = rep(c(5,6,7,8),30),
week = sample(1:4, 120, TRUE)
)
ddply(df, .(month, week), summarize,
mean = round(mean(x), 2),
sd = round(sd(x), 2))
In pandas the equivalent expression, using the groupby() method, would be:
In [28]: df = DataFrame({
....: 'x': np.random.uniform(1., 168., 120),
....: 'y': np.random.uniform(7., 334., 120),
....: 'z': np.random.uniform(1.7, 20.7, 120),
....: 'month': [5,6,7,8]*30,
....: 'week': np.random.randint(1,4, 120)
....: })
....:
In [29]: grouped = df.groupby(['month','week'])
In [30]: print(grouped['x'].agg([np.mean, np.std]))
mean std
month week
5 1 95.845523 45.508323
2 68.248245 53.985272
3 77.946656 61.288967
6 1 87.323361 60.608255
2 81.003018 45.152067
3 94.093937 44.719095
7 1 73.603956 50.449598
2 88.169158 51.700743
3 92.333076 42.456759
8 1 48.294433 52.056095
2 96.222677 44.712622
3 74.014139 59.402094
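To reproduce ddply's rounded summary exactly, the aggregated result can be rounded afterwards. A self-contained sketch (seed and week range are hypothetical choices for illustration):

```python
import numpy as np
import pandas as pd

np.random.seed(0)  # hypothetical seed, only for reproducibility
df = pd.DataFrame({
    'x': np.random.uniform(1., 168., 120),
    'month': [5, 6, 7, 8] * 30,
    'week': np.random.randint(1, 5, 120),  # 1..4, like sample(1:4, ...)
})

# mean and sd of x per (month, week), rounded to 2 places as ddply did
summary = df.groupby(['month', 'week'])['x'].agg(['mean', 'std']).round(2)
print(summary.head())
```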
For more details and examples see the groupby documentation.
reshape / reshape2¶
melt.array¶
An expression using a 3-dimensional array called a in R where you want to melt it into a data.frame:
a <- array(c(1:23, NA), c(2,3,4))
data.frame(melt(a))
In Python, a here is a NumPy array; iterating over it with np.ndenumerate in a list comprehension achieves the melt:
In [31]: a = np.array(list(range(1,24))+[np.NAN]).reshape(2,3,4)
In [32]: DataFrame([tuple(list(x)+[val]) for x, val in np.ndenumerate(a)])
Out[32]:
0 1 2 3
0 0 0 0 1
1 0 0 1 2
2 0 0 2 3
3 0 0 3 4
4 0 1 0 5
5 0 1 1 6
6 0 1 2 7
.. .. .. .. ..
17 1 1 1 18
18 1 1 2 19
19 1 1 3 20
20 1 2 0 21
21 1 2 1 22
22 1 2 2 23
23 1 2 3 NaN
[24 rows x 4 columns]
melt.list¶
An expression using a list called a in R where you want to melt it into a data.frame:
a <- as.list(c(1:4, NA))
data.frame(melt(a))
In Python, this list would be a list of tuples, so the DataFrame() constructor would convert it to a DataFrame as required.
In [33]: a = list(enumerate(list(range(1,5))+[np.NAN]))
In [34]: DataFrame(a)
Out[34]:
0 1
0 0 1
1 1 2
2 2 3
3 3 4
4 4 NaN
For more details and examples see the Intro to Data Structures documentation.
melt.data.frame¶
An expression using a data.frame called cheese in R where you want to reshape the data.frame:
cheese <- data.frame(
first = c('John', 'Mary'),
last = c('Doe', 'Bo'),
height = c(5.5, 6.0),
weight = c(130, 150)
)
melt(cheese, id=c("first", "last"))
In Python, the melt() method is the R equivalent:
In [35]: cheese = DataFrame({'first' : ['John', 'Mary'],
....: 'last' : ['Doe', 'Bo'],
....: 'height' : [5.5, 6.0],
....: 'weight' : [130, 150]})
....:
In [36]: pd.melt(cheese, id_vars=['first', 'last'])
Out[36]:
first last variable value
0 John Doe height 5.5
1 Mary Bo height 6.0
2 John Doe weight 130.0
3 Mary Bo weight 150.0
In [37]: cheese.set_index(['first', 'last']).stack() # alternative way
Out[37]:
first last
John Doe height 5.5
weight 130.0
Mary Bo height 6.0
weight 150.0
dtype: float64
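The stacked result above is a Series with a MultiIndex; calling reset_index() brings it back to the flat, melt()-like shape. A sketch:

```python
import pandas as pd

cheese = pd.DataFrame({'first': ['John', 'Mary'],
                       'last': ['Doe', 'Bo'],
                       'height': [5.5, 6.0],
                       'weight': [130, 150]})

# stack, then flatten the MultiIndex back into ordinary columns
stacked = cheese.set_index(['first', 'last']).stack()
flat = stacked.reset_index()
print(flat.shape)  # 4 rows: two people x two measured variables
```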
For more details and examples see the reshaping documentation.
cast¶
In R, acast casts a molten data.frame called df into a higher-dimensional array:
df <- data.frame(
x = runif(12, 1, 168),
y = runif(12, 7, 334),
z = runif(12, 1.7, 20.7),
month = rep(c(5,6,7),4),
week = rep(c(1,2), 6)
)
mdf <- melt(df, id=c("month", "week"))
acast(mdf, week ~ month ~ variable, mean)
In Python the best way is to make use of pivot_table():
In [38]: df = DataFrame({
....: 'x': np.random.uniform(1., 168., 12),
....: 'y': np.random.uniform(7., 334., 12),
....: 'z': np.random.uniform(1.7, 20.7, 12),
....: 'month': [5,6,7]*4,
....: 'week': [1,2]*6
....: })
....:
In [39]: mdf = pd.melt(df, id_vars=['month', 'week'])
In [40]: pd.pivot_table(mdf, values='value', index=['variable','week'],
....: columns=['month'], aggfunc=np.mean)
....:
Out[40]:
month 5 6 7
variable week
x 1 65.901002 82.932542 73.523573
2 90.191210 72.900592 134.696336
y 1 179.812810 67.028316 277.117714
2 141.825889 133.754967 255.766830
z 1 14.988996 5.378343 9.533254
2 11.207930 19.030246 13.981908
Similarly for dcast, which uses a data.frame called df in R to aggregate information based on Animal and FeedType:
df <- data.frame(
Animal = c('Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
'Animal2', 'Animal3'),
FeedType = c('A', 'B', 'A', 'A', 'B', 'B', 'A'),
Amount = c(10, 7, 4, 2, 5, 6, 2)
)
dcast(df, Animal ~ FeedType, sum, fill=NaN)
# Alternative method using base R
with(df, tapply(Amount, list(Animal, FeedType), sum))
Python can approach this in two different ways. First, similar to the above, using pivot_table():
In [41]: df = DataFrame({
....: 'Animal': ['Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
....: 'Animal2', 'Animal3'],
....: 'FeedType': ['A', 'B', 'A', 'A', 'B', 'B', 'A'],
....: 'Amount': [10, 7, 4, 2, 5, 6, 2],
....: })
....:
In [42]: df.pivot_table(values='Amount', index='Animal', columns='FeedType', aggfunc='sum')
Out[42]:
FeedType A B
Animal
Animal1 10 5
Animal2 2 13
Animal3 6 NaN
The second approach is to use the groupby() method:
In [43]: df.groupby(['Animal','FeedType'])['Amount'].sum()
Out[43]:
Animal FeedType
Animal1 A 10
B 5
Animal2 A 2
B 13
Animal3 A 6
Name: Amount, dtype: int64
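Adding unstack() to the groupby result reshapes it into the same wide layout that dcast (and pivot_table above) produce, with NaN filling missing combinations:

```python
import pandas as pd

df = pd.DataFrame({
    'Animal': ['Animal1', 'Animal2', 'Animal3', 'Animal2', 'Animal1',
               'Animal2', 'Animal3'],
    'FeedType': ['A', 'B', 'A', 'A', 'B', 'B', 'A'],
    'Amount': [10, 7, 4, 2, 5, 6, 2],
})

# sum per (Animal, FeedType), then pivot the FeedType level into columns
wide = df.groupby(['Animal', 'FeedType'])['Amount'].sum().unstack()
print(wide)
```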
For more details and examples see the reshaping documentation or the groupby documentation.