Copy-on-Write (CoW)#
Copy-on-Write was first introduced in version 1.5.0. Starting from version 2.0, most of the optimizations that become possible through CoW are implemented and supported. All possible optimizations are supported starting from pandas 2.1.
We expect that CoW will be enabled by default in version 3.0.
CoW will lead to more predictable behavior since it is no longer possible to update more than one object with a single statement, e.g. indexing operations or methods won't have side-effects. Additionally, by delaying copies as long as possible, average performance and memory usage will improve.
Previous behavior#
pandas indexing behavior is tricky to understand. Some operations return views while others return copies. Depending on the result of the operation, mutating one object might accidentally mutate another:
In [1]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
In [2]: subset = df["foo"]
In [3]: subset.iloc[0] = 100
In [4]: df
Out[4]:
   foo  bar
0  100    4
1    2    5
2    3    6
Mutating subset, e.g. updating its values, also updates df. The exact behavior is hard to predict. Copy-on-Write solves the problem of accidentally modifying more than one object by explicitly disallowing it. With CoW enabled, df is unchanged:
In [5]: pd.options.mode.copy_on_write = True
In [6]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
In [7]: subset = df["foo"]
In [8]: subset.iloc[0] = 100
In [9]: df
Out[9]:
   foo  bar
0    1    4
1    2    5
2    3    6
The following sections will explain what this means and how it impacts existing applications.
Description#
CoW means that any DataFrame or Series derived from another in any way always behaves as a copy. As a consequence, we can only change the values of an object through modifying the object itself. CoW disallows updating a DataFrame or a Series that shares data with another DataFrame or Series object inplace.
This avoids side-effects when modifying values; hence, most methods can avoid actually copying the data and only trigger a copy when necessary.
The following example will operate inplace with CoW:
In [10]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
In [11]: df.iloc[0, 0] = 100
In [12]: df
Out[12]:
   foo  bar
0  100    4
1    2    5
2    3    6
The object df does not share any data with any other object and hence no copy is triggered when updating the values. In contrast, the following operation triggers a copy of the data under CoW:
In [13]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
In [14]: df2 = df.reset_index(drop=True)
In [15]: df2.iloc[0, 0] = 100
In [16]: df
Out[16]:
   foo  bar
0    1    4
1    2    5
2    3    6
In [17]: df2
Out[17]:
   foo  bar
0  100    4
1    2    5
2    3    6
reset_index returns a lazy copy with CoW, while it copies the data without CoW. Since both objects, df and df2, share the same data, a copy is triggered when modifying df2. The object df still has the same values as initially, while df2 was modified.
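You can verify this sharing behavior yourself. The following is a minimal sketch using numpy.shares_memory; the exact sharing can depend on dtypes and the internal layout of the DataFrame:

import numpy as np
import pandas as pd

pd.options.mode.copy_on_write = True

df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
df2 = df.reset_index(drop=True)

# Right after reset_index both objects still reference the same buffer (lazy copy)
print(np.shares_memory(df["foo"].to_numpy(), df2["foo"].to_numpy()))  # True

df2.iloc[0, 0] = 100

# The setitem triggered a copy, so the data is no longer shared
print(np.shares_memory(df["foo"].to_numpy(), df2["foo"].to_numpy()))  # False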
If the object df isn't needed anymore after performing the reset_index operation, you can emulate an inplace-like operation by assigning the output of reset_index to the same variable:
In [18]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
In [19]: df = df.reset_index(drop=True)
In [20]: df.iloc[0, 0] = 100
In [21]: df
Out[21]:
   foo  bar
0  100    4
1    2    5
2    3    6
The initial object goes out of scope as soon as the result of reset_index is reassigned, and hence df does not share data with any other object. No copy is necessary when modifying the object. This is generally true for all methods listed in Copy-on-Write optimizations.
Previously, when operating on views, both the view and the parent object were modified:
In [22]: with pd.option_context("mode.copy_on_write", False):
   ....:     df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
   ....:     view = df[:]
   ....:     df.iloc[0, 0] = 100
   ....: 
In [23]: df
Out[23]:
   foo  bar
0  100    4
1    2    5
2    3    6
In [24]: view
Out[24]:
   foo  bar
0  100    4
1    2    5
2    3    6
CoW triggers a copy when df is changed to avoid mutating view as well:
In [25]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
In [26]: view = df[:]
In [27]: df.iloc[0, 0] = 100
In [28]: df
Out[28]:
   foo  bar
0  100    4
1    2    5
2    3    6
In [29]: view
Out[29]:
   foo  bar
0    1    4
1    2    5
2    3    6
Chained Assignment#
Chained assignment references a technique where an object is updated through two subsequent indexing operations, e.g.
In [30]: with pd.option_context("mode.copy_on_write", False):
   ....:     df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
   ....:     df["foo"][df["bar"] > 5] = 100
   ....:     df
   ....: 
The column foo is updated where the column bar is greater than 5. This violates the CoW principles, though, because it would have to modify the view df["foo"] and df in one step. Hence, chained assignment will consistently never work and will raise a ChainedAssignmentError warning with CoW enabled:
In [31]: df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
In [32]: df["foo"][df["bar"] > 5] = 100
With Copy-on-Write, this can be done by using loc.
In [33]: df.loc[df["bar"] > 5, "foo"] = 100
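With the data from this example, only the last row satisfies the condition (bar is 6), so df ends up looking roughly like this:

# df after the .loc assignment; only row 2 has bar > 5
#    foo  bar
# 0    1    4
# 1    2    5
# 2  100    6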
Read-only NumPy arrays#
Accessing the underlying NumPy array of a DataFrame will return a read-only array if the array shares data with the initial DataFrame.
The array is a copy if the initial DataFrame consists of more than one array:
In [34]: df = pd.DataFrame({"a": [1, 2], "b": [1.5, 2.5]})
In [35]: df.to_numpy()
Out[35]:
array([[1. , 1.5],
       [2. , 2.5]])
The array shares data with the DataFrame if the DataFrame consists of only one NumPy array:
In [36]: df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
In [37]: df.to_numpy()
Out[37]:
array([[1, 3],
       [2, 4]])
This array is read-only, which means that it can’t be modified inplace:
In [38]: arr = df.to_numpy()
In [39]: arr[0, 0] = 100
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[39], line 1
----> 1 arr[0, 0] = 100
ValueError: assignment destination is read-only
The same holds true for a Series, since a Series always consists of a single array.
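As a small illustration, a minimal sketch with a plain integer Series:

import pandas as pd

pd.options.mode.copy_on_write = True

ser = pd.Series([1, 2, 3])
arr = ser.to_numpy()

# The array is a view on the Series' data, so it is handed out read-only
print(arr.flags.writeable)  # False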
There are two potential solutions to this:
Trigger a copy manually if you want to avoid updating DataFrames that share memory with your array (see the sketch after the example below).
Make the array writeable. This is a more performant solution but circumvents Copy-on-Write rules, so it should be used with caution.
In [40]: arr = df.to_numpy()
In [41]: arr.flags.writeable = True
In [42]: arr[0, 0] = 100
In [43]: arr
Out[43]:
array([[100,   3],
       [  2,   4]])
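For the first solution, a minimal sketch: triggering the copy manually gives you a standalone, writeable array that can no longer affect the DataFrame.

import pandas as pd

pd.options.mode.copy_on_write = True

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Solution 1: work on an explicit copy so df cannot be affected
arr = df.to_numpy().copy()
arr[0, 0] = 100  # modifies only the standalone copy; df stays unchanged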
Patterns to avoid#
With Copy-on-Write, pandas does not perform a defensive copy up front when two objects share the same data; a copy is only triggered when one of the objects is modified inplace.
In [44]: df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
In [45]: df2 = df.reset_index()
In [46]: df2.iloc[0, 0] = 100
This creates two objects that share data and thus the setitem operation will trigger a copy. This is not necessary if the initial object df isn't needed anymore. Simply reassigning to the same variable will invalidate the reference that is held by the object.
In [47]: df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
In [48]: df = df.reset_index()
In [49]: df.iloc[0, 0] = 100
No copy is necessary in this example. Keeping unnecessary references alive hurts performance with Copy-on-Write, since every modification of shared data then has to trigger a copy.
Copy-on-Write optimizations#
A new lazy copy mechanism defers the copy until the object in question is modified, and only if this object shares data with another object. This mechanism was added to methods that don't require a copy of the underlying data. Popular examples are DataFrame.drop() for axis=1 and DataFrame.rename().
These methods return views when Copy-on-Write is enabled, which provides a significant performance improvement compared to the regular execution.
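As an illustration, a minimal sketch for DataFrame.rename(); the same pattern applies to the other optimized methods, and the exact sharing can depend on dtypes and the internal layout:

import numpy as np
import pandas as pd

pd.options.mode.copy_on_write = True

df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
renamed = df.rename(columns={"foo": "baz"})

# With CoW enabled, rename returns a lazy copy: no data has been copied yet
print(np.shares_memory(df["foo"].to_numpy(), renamed["baz"].to_numpy()))  # True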
How to enable CoW#
Copy-on-Write can be enabled through the configuration option copy_on_write. The option can be turned on globally through either of the following:
In [50]: pd.set_option("mode.copy_on_write", True)
In [51]: pd.options.mode.copy_on_write = True
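The option can also be set locally for a block of code, e.g. via the context manager already used in the examples above:

with pd.option_context("mode.copy_on_write", True):
    df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})
    subset = df["foo"]
    subset.iloc[0] = 100  # df is left unchanged within this block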