What’s new in 2.2.0 (January 19, 2024)#

These are the changes in pandas 2.2.0. See Release notes for a full changelog including other versions of pandas.

Upcoming changes in pandas 3.0#

pandas 3.0 will bring two bigger changes to the default behavior of pandas.

Copy-on-Write#

The currently optional mode Copy-on-Write will be enabled by default in pandas 3.0. There won’t be an option to keep the current behavior enabled. The new behavioral semantics are explained in the user guide about Copy-on-Write.

The new behavior has been available since pandas 2.0 and can be enabled with the following option:

pd.options.mode.copy_on_write = True
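
For example, with the option enabled, an object derived from a DataFrame behaves as a copy, so modifying it never updates the original. A minimal sketch of these semantics:

import pandas as pd

pd.options.mode.copy_on_write = True

df = pd.DataFrame({"a": [1, 2, 3]})
subset = df["a"]
subset.iloc[0] = 100    # under Copy-on-Write this modifies only `subset`
print(df["a"].iloc[0])  # still 1; `df` is left untouched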

This change brings a number of behavior changes to how pandas operates with respect to copies and views. Some of these changes allow a clear deprecation, like the changes in chained assignment. Other changes are more subtle; the warnings for these are hidden behind an option that can be enabled in pandas 2.2:

pd.options.mode.copy_on_write = "warn"

This mode will warn in many different scenarios that aren’t actually relevant to most queries. We recommend exploring this mode, but it is not necessary to get rid of all of these warnings. The migration guide explains the upgrade process in more detail.
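
As a rough illustration, the warning mode flags operations whose behavior will change under Copy-on-Write:

pd.options.mode.copy_on_write = "warn"

df = pd.DataFrame({"a": [1, 2, 3]})
subset = df["a"]
# Mutating `subset` emits a FutureWarning here, because under Copy-on-Write
# this assignment will no longer update `df`.
subset.iloc[0] = 100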

Dedicated string data type (backed by Arrow) by default#

Historically, pandas represented string columns with the NumPy object data type. This representation has numerous problems, including slow performance and a large memory footprint. This will change in pandas 3.0: pandas will start inferring string columns as a new string data type, backed by Arrow, which represents strings contiguously in memory. This brings a huge performance and memory improvement.

Old behavior:

In [1]: ser = pd.Series(["a", "b"])

In [2]: ser
Out[2]:
0    a
1    b
dtype: object

New behavior:

In [1]: ser = pd.Series(["a", "b"])

In [2]: ser
Out[2]:
0    a
1    b
dtype: string

The string data type that is used in these scenarios will mostly behave as NumPy object would, including missing value semantics and general operations on these columns.

This change includes a few additional changes across the API:

  • Currently, specifying dtype="string" creates a dtype that is backed by Python strings stored in a NumPy array. This will change in pandas 3.0: this dtype will instead create an Arrow-backed string column.

  • The column names and the Index will also be backed by Arrow strings.

  • PyArrow will become a required dependency with pandas 3.0 to accommodate this change.

This future dtype inference logic can be enabled with:

pd.options.future.infer_string = True
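
With the option set, newly created string columns are inferred as the new dtype; a minimal sketch:

pd.options.future.infer_string = True

ser = pd.Series(["a", "b"])
ser.dtype  # Arrow-backed string dtype instead of NumPy object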

Enhancements#

ADBC Driver support in to_sql and read_sql#

read_sql() and to_sql() now work with Apache Arrow ADBC drivers. Compared to traditional drivers used via SQLAlchemy, ADBC drivers should provide significant performance improvements, better type support and cleaner nullability handling.

import adbc_driver_postgresql.dbapi as pg_dbapi

df = pd.DataFrame(
    [
        [1, 2, 3],
        [4, 5, 6],
    ],
    columns=['a', 'b', 'c']
)
uri = "postgresql://postgres:postgres@localhost/postgres"
with pg_dbapi.connect(uri) as conn:
    df.to_sql("pandas_table", conn, index=False)

# for round-tripping
with pg_dbapi.connect(uri) as conn:
    df2 = pd.read_sql("pandas_table", conn)

The Arrow type system offers a wider array of types that can more closely match what databases like PostgreSQL can offer. To illustrate, note this (non-exhaustive) listing of types available in different databases and pandas backends:

| numpy/pandas      | arrow                   | postgres           | sqlite  |
|-------------------|-------------------------|--------------------|---------|
| int16/Int16       | int16                   | SMALLINT           | INTEGER |
| int32/Int32       | int32                   | INTEGER            | INTEGER |
| int64/Int64       | int64                   | BIGINT             | INTEGER |
| float32           | float32                 | REAL               | REAL    |
| float64           | float64                 | DOUBLE PRECISION   | REAL    |
| object            | string                  | TEXT               | TEXT    |
| bool              | bool_                   | BOOLEAN            |         |
| datetime64[ns]    | timestamp(us)           | TIMESTAMP          |         |
| datetime64[ns,tz] | timestamp(us,tz)        | TIMESTAMPTZ        |         |
|                   | date32                  | DATE               |         |
|                   | month_day_nano_interval | INTERVAL           |         |
|                   | binary                  | BINARY             | BLOB    |
|                   | decimal128              | DECIMAL [1]        |         |
|                   | list                    | ARRAY [1]          |         |
|                   | struct                  | COMPOSITE TYPE [1] |         |

Footnotes

[1] Not implemented as of writing, but theoretically possible.

If you are interested in preserving database types as accurately as possible throughout the lifecycle of your DataFrame, you are encouraged to leverage the dtype_backend="pyarrow" argument of read_sql():

# for round-tripping
with pg_dbapi.connect(uri) as conn:
    df2 = pd.read_sql("pandas_table", conn, dtype_backend="pyarrow")

This will prevent your data from being converted to the traditional pandas/NumPy type system, which often converts SQL types in ways that make them impossible to round-trip.
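
As a hypothetical illustration (the exact dtypes depend on the driver and table schema), reading the same table with both backends shows the difference:

# for comparison; the dtypes in the comments are indicative only
with pg_dbapi.connect(uri) as conn:
    df_numpy = pd.read_sql("pandas_table", conn)
    df_arrow = pd.read_sql("pandas_table", conn, dtype_backend="pyarrow")

df_numpy.dtypes  # NumPy-based dtypes, e.g. int64
df_arrow.dtypes  # PyArrow-backed dtypes, e.g. int64[pyarrow]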

For a full list of ADBC drivers and their development status, see the ADBC Driver Implementation Status documentation.

Create a pandas Series based on one or more conditions#

The Series.case_when() function has been added to create a Series object based on one or more conditions. (GH 39154)

In [1]: import pandas as pd

In [2]: df = pd.DataFrame(dict(a=[1, 2, 3], b=[4, 5, 6]))

In [3]: default = pd.Series('default', index=df.index)

In [4]: default.case_when(
   ...:      caselist=[
   ...:          (df.a == 1, 'first'),                 # condition, replacement
   ...:          (df.a.gt(1) & df.b.eq(5), 'second'),  # condition, replacement
   ...:      ],
   ...: )
   ...: 
Out[4]: 
0      first
1     second
2    default
dtype: object

to_numpy for NumPy nullable and Arrow types converts to suitable NumPy dtype#

to_numpy() will now convert to a suitable NumPy dtype instead of object dtype for NumPy nullable and PyArrow-backed extension dtypes.

Old behavior:

In [1]: ser = pd.Series([1, 2, 3], dtype="Int64")
In [2]: ser.to_numpy()
Out[2]: array([1, 2, 3], dtype=object)

New behavior:

In [5]: ser = pd.Series([1, 2, 3], dtype="Int64")

In [6]: ser.to_numpy()
Out[6]: array([1, 2, 3])

In [7]: ser = pd.Series([1, 2, 3], dtype="timestamp[ns][pyarrow]")

In [8]: ser.to_numpy()
Out[8]: 
array(['1970-01-01T00:00:00.000000001', '1970-01-01T00:00:00.000000002',
       '1970-01-01T00:00:00.000000003'], dtype='datetime64[ns]')

The default NumPy dtype (without any arguments) is determined as follows:

  • float dtypes are cast to NumPy floats

  • integer dtypes without missing values are cast to NumPy integer dtypes

  • integer dtypes with missing values are cast to NumPy float dtypes and NaN is used as missing value indicator

  • boolean dtypes without missing values are cast to NumPy bool dtype

  • boolean dtypes with missing values keep object dtype

  • datetime and timedelta types are cast to NumPy datetime64 and timedelta64 types respectively and NaT is used as the missing value indicator
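
A short sketch of the missing-value cases from the rules above:

ser = pd.Series([1, 2, None], dtype="Int64")
ser.to_numpy()  # array([ 1.,  2., nan]) -> float64, NaN marks the missing value

ser = pd.Series([True, None], dtype="boolean")
ser.to_numpy()  # keeps object dtype, since NumPy bool has no missing value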

Series.struct accessor for PyArrow structured data#

The Series.struct accessor provides attributes and methods for processing data with struct[pyarrow] dtype Series. For example, Series.struct.explode() converts PyArrow structured data to a pandas DataFrame. (GH 54938)

In [9]: import pyarrow as pa

In [10]: series = pd.Series(
   ....:     [
   ....:         {"project": "pandas", "version": "2.2.0"},
   ....:         {"project": "numpy", "version": "1.25.2"},
   ....:         {"project": "pyarrow", "version": "13.0.0"},
   ....:     ],
   ....:     dtype=pd.ArrowDtype(
   ....:         pa.struct([
   ....:             ("project", pa.string()),
   ....:             ("version", pa.string()),
   ....:         ])
   ....:     ),
   ....: )
   ....: 

In [11]: series.struct.explode()
Out[11]: 
   project version
0   pandas   2.2.0
1    numpy  1.25.2
2  pyarrow  13.0.0

Use Series.struct.field() to index into a (possibly nested) struct field.

In [12]: series.struct.field("project")
Out[12]: 
0     pandas
1      numpy
2    pyarrow
Name: project, dtype: string[pyarrow]
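
The accessor also exposes the field types through Series.struct.dtypes (output shown approximately):

series.struct.dtypes
# project    string[pyarrow]
# version    string[pyarrow]
# dtype: object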

Series.list accessor for PyArrow list data#

The Series.list accessor provides attributes and methods for processing data with list[pyarrow] dtype Series. For example, Series.list.__getitem__() allows indexing pyarrow lists in a Series. (GH 55323)

In [13]: import pyarrow as pa

In [14]: series = pd.Series(
   ....:     [
   ....:         [1, 2, 3],
   ....:         [4, 5],
   ....:         [6],
   ....:     ],
   ....:     dtype=pd.ArrowDtype(
   ....:         pa.list_(pa.int64())
   ....:     ),
   ....: )
   ....: 

In [15]: series.list[0]
Out[15]: 
0    1
1    4
2    6
dtype: int64[pyarrow]
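
The accessor also provides Series.list.len() to get the length of each row's list (a brief sketch):

series.list.len()  # 3, 2, 1 -> the length of each element's list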

Calamine engine for read_excel()#

The calamine engine was added to read_excel(). It uses python-calamine, which provides Python bindings for the Rust library calamine. This engine supports Excel files (.xlsx, .xlsm, .xls, .xlsb) and OpenDocument spreadsheets (.ods) (GH 50395).

There are two advantages of this engine:

  1. Calamine is often faster than the other engines: some benchmarks show it up to 5x faster than ‘openpyxl’, 20x faster than ‘odf’, 4x faster than ‘pyxlsb’, and 1.5x faster than ‘xlrd’. However, ‘openpyxl’ and ‘pyxlsb’ are faster at reading a few rows from large files because of their lazy iteration over rows.

  2. Calamine supports recognizing datetime values in .xlsb files, unlike ‘pyxlsb’, the only other engine in pandas that can read .xlsb files.

pd.read_excel("path_to_file.xlsb", engine="calamine")

For more, see Calamine (Excel and ODS files) in the user guide on IO tools.

Other enhancements#

Notable bug fixes#

These are bug fixes that might have notable behavior changes.

merge() and DataFrame.join() now consistently follow documented sort behavior#

In previous versions of pandas, merge() and DataFrame.join() did not always return a result that followed the documented sort behavior. pandas now follows the documented sort behavior in merge and join operations (GH 54611, GH 56426, GH 56443).

As documented, sort=True sorts the join keys lexicographically in the resulting DataFrame. With sort=False, the order of the join keys depends on the join type (how keyword):

  • how="left": preserve the order of the left keys

  • how="right": preserve the order of the right keys

  • how="inner": preserve the order of the left keys

  • how="outer": sort keys lexicographically

One example with changing behavior is inner joins with non-unique left join keys and sort=False:

In [16]: left = pd.DataFrame({"a": [1, 2, 1]})

In [17]: right = pd.DataFrame({"a": [1, 2]})

In [18]: result = pd.merge(left, right, how="inner", on="a", sort=False)

Old Behavior

In [5]: result
Out[5]:
   a
0  1
1  1
2  2

New Behavior

In [19]: result
Out[19]: 
   a
0  1
1  2
2  1

merge() and DataFrame.join() no longer reorder levels when levels differ#

In previous versions of pandas, merge() and DataFrame.join() would reorder index levels when joining on two indexes with different levels (GH 34133).

In [20]: left = pd.DataFrame({"left": 1}, index=pd.MultiIndex.from_tuples([("x", 1), ("x", 2)], names=["A", "B"]))

In [21]: right = pd.DataFrame({"right": 2}, index=pd.MultiIndex.from_tuples([(1, 1), (2, 2)], names=["B", "C"]))

In [22]: left
Out[22]: 
     left
A B      
x 1     1
  2     1

In [23]: right
Out[23]: 
     right
B C       
1 1      2
2 2      2

In [24]: result = left.join(right)

Old Behavior

In [5]: result
Out[5]:
       left  right
B A C
1 x 1     1      2
2 x 2     1      2

New Behavior

In [25]: result
Out[25]: 
       left  right
A B C             
x 1 1     1      2
  2 2     1      2

Increased minimum versions for dependencies#

For optional dependencies the general recommendation is to use the latest version. Optional dependencies below the lowest tested version may still work but are not considered supported. The following table lists the optional dependencies that have had their minimum tested version increased.

| Package        | New Minimum Version |
|----------------|---------------------|
| beautifulsoup4 | 4.11.2              |
| blosc          | 1.21.3              |
| bottleneck     | 1.3.6               |
| fastparquet    | 2022.12.0           |
| fsspec         | 2022.11.0           |
| gcsfs          | 2022.11.0           |
| lxml           | 4.9.2               |
| matplotlib     | 3.6.3               |
| numba          | 0.56.4              |
| numexpr        | 2.8.4               |
| qtpy           | 2.3.0               |
| openpyxl       | 3.1.0               |
| psycopg2       | 2.9.6               |
| pyreadstat     | 1.2.0               |
| pytables       | 3.8.0               |
| pyxlsb         | 1.0.10              |
| s3fs           | 2022.11.0           |
| scipy          | 1.10.0              |
| sqlalchemy     | 2.0.0               |
| tabulate       | 0.9.0               |
| xarray         | 2022.12.0           |
| xlsxwriter     | 3.0.5               |
| zstandard      | 0.19.0              |
| pyqt5          | 5.15.8              |
| tzdata         | 2022.7              |

See Dependencies and Optional dependencies for more.

Other API changes#

Deprecations#

Chained assignment#

In preparation for the larger upcoming changes to the copy / view behaviour in pandas 3.0 (Copy-on-Write (CoW), PDEP-7), we have started deprecating chained assignment.

Chained assignment occurs when you try to update a pandas DataFrame or Series through two subsequent indexing operations. Depending on the type and order of those operations, this currently may or may not work.

A typical example is as follows:

df = pd.DataFrame({"foo": [1, 2, 3], "bar": [4, 5, 6]})

# first selecting rows with a mask, then assigning values to a column
# -> this has never worked and raises a SettingWithCopyWarning
df[df["bar"] > 5]["foo"] = 100

# first selecting the column, and then assigning to a subset of that column
# -> this currently works
df["foo"][df["bar"] > 5] = 100

This second example of chained assignment currently works to update the original df. This will no longer work in pandas 3.0, and we have therefore started deprecating it:

>>> df["foo"][df["bar"] > 5] = 100
FutureWarning: ChainedAssignmentError: behaviour will change in pandas 3.0!
You are setting values through chained assignment. Currently this works in certain cases, but when using Copy-on-Write (which will become the default behaviour in pandas 3.0) this will never work to update the original DataFrame or Series, because the intermediate object on which we are setting values will behave as a copy.
A typical example is when you are setting values in a column of a DataFrame, like:

df["col"][row_indexer] = value

Use `df.loc[row_indexer, "col"] = values` instead, to perform the assignment in a single step and ensure this keeps updating the original `df`.

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

You can fix this warning and ensure your code is ready for pandas 3.0 by removing the usage of chained assignment. Typically, this can be done by performing the assignment in a single step using, for example, .loc. For the example above, we can do:

df.loc[df["bar"] > 5, "foo"] = 100

The same deprecation applies to inplace methods that are done in a chained manner, such as:

>>> df["foo"].fillna(0, inplace=True)
FutureWarning: A value is trying to be set on a copy of a DataFrame or Series through chained assignment using an inplace method.
The behavior will change in pandas 3.0. This inplace method will never work because the intermediate object on which we are setting values always behaves as a copy.

For example, when doing 'df[col].method(value, inplace=True)', try using 'df.method({col: value}, inplace=True)' or df[col] = df[col].method(value) instead, to perform the operation inplace on the original object.

When the goal is to update the column in the DataFrame df, the alternative here is to call the method on df itself, such as df.fillna({"foo": 0}, inplace=True).
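
A minimal sketch of the two recommended alternatives:

# instead of: df["foo"].fillna(0, inplace=True)
df.fillna({"foo": 0}, inplace=True)  # operate on df itself
# or reassign the column, which also works under Copy-on-Write:
df["foo"] = df["foo"].fillna(0)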

See more details in the migration guide.

Deprecate aliases M, Q, Y, etc. in favour of ME, QE, YE, etc. for offsets#

Deprecated the following frequency aliases (GH 9586):

| offsets                | deprecated aliases | new aliases |
|------------------------|--------------------|-------------|
| MonthEnd               | M                  | ME          |
| BusinessMonthEnd       | BM                 | BME         |
| SemiMonthEnd           | SM                 | SME         |
| CustomBusinessMonthEnd | CBM                | CBME        |
| QuarterEnd             | Q                  | QE          |
| BQuarterEnd            | BQ                 | BQE         |
| YearEnd                | Y                  | YE          |
| BYearEnd               | BY                 | BYE         |

For example:

Previous behavior:

In [8]: pd.date_range('2020-01-01', periods=3, freq='Q-NOV')
Out[8]:
DatetimeIndex(['2020-02-29', '2020-05-31', '2020-08-31'],
              dtype='datetime64[ns]', freq='Q-NOV')

Future behavior:

In [26]: pd.date_range('2020-01-01', periods=3, freq='QE-NOV')
Out[26]: DatetimeIndex(['2020-02-29', '2020-05-31', '2020-08-31'], dtype='datetime64[ns]', freq='QE-NOV')
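
The same renaming applies wherever frequency aliases are accepted, for example when resampling; a brief sketch using the month-end alias:

ser = pd.Series(range(3), index=pd.date_range("2020-01-01", periods=3, freq="D"))
ser.resample("ME").sum()  # use "ME" (month end) instead of the deprecated "M"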

Deprecated automatic downcasting#

Deprecated the automatic downcasting of object dtype results in a number of methods. These would silently change the dtype in a hard-to-predict manner since the behavior was value-dependent. Additionally, pandas is moving away from silent dtype changes (GH 54710, GH 54261).

These methods are:

  • DataFrame.where() and Series.where()

  • DataFrame.mask() and Series.mask()

  • DataFrame.clip() and Series.clip()

  • DataFrame.fillna() and Series.fillna()

  • DataFrame.ffill() and Series.ffill()

  • DataFrame.bfill() and Series.bfill()

  • DataFrame.replace() and Series.replace()

Explicitly call DataFrame.infer_objects() to replicate the current behavior in the future.

result = result.infer_objects(copy=False)

Alternatively, when all floats are round numbers, explicitly cast them to integers using astype.
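
For instance, a sketch of that explicit cast, assuming the values are known to be whole numbers:

result = pd.Series([1.0, 2.0, 3.0], dtype=object)
result = result.astype("int64")  # explicit cast instead of relying on silent downcasting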

Set the following option to opt into the future behavior:

In [9]: pd.set_option("future.no_silent_downcasting", True)

Other Deprecations#

Performance improvements#

Bug fixes#

Categorical#

  • Categorical.isin() raising InvalidIndexError for categorical containing overlapping Interval values (GH 34974)

  • Bug in CategoricalDtype.__eq__() returning False for unordered categorical data with mixed types (GH 55468)

  • Bug when casting pa.dictionary to CategoricalDtype using a pa.DictionaryArray as categories (GH 56672)

Datetimelike#

  • Bug in DatetimeIndex construction when passing both a tz and either dayfirst or yearfirst ignoring dayfirst/yearfirst (GH 55813)

  • Bug in DatetimeIndex when passing an object-dtype ndarray of float objects and a tz incorrectly localizing the result (GH 55780)

  • Bug in Series.isin() with DatetimeTZDtype dtype and comparison values that are all NaT incorrectly returning all-False even if the series contains NaT entries (GH 56427)

  • Bug in concat() raising AttributeError when concatenating all-NA DataFrame with DatetimeTZDtype dtype DataFrame (GH 52093)

  • Bug in testing.assert_extension_array_equal() that could use the wrong unit when comparing resolutions (GH 55730)

  • Bug in to_datetime() and DatetimeIndex when passing a list of mixed-string-and-numeric types incorrectly raising (GH 55780)

  • Bug in to_datetime() and DatetimeIndex when passing mixed-type objects with a mix of timezones or mix of timezone-awareness failing to raise ValueError (GH 55693)

  • Bug in Tick.delta() with very large ticks raising OverflowError instead of OutOfBoundsTimedelta (GH 55503)

  • Bug in DatetimeIndex.shift() with non-nanosecond resolution incorrectly returning with nanosecond resolution (GH 56117)

  • Bug in DatetimeIndex.union() returning object dtype for tz-aware indexes with the same timezone but different units (GH 55238)

  • Bug in Index.is_monotonic_increasing() and Index.is_monotonic_decreasing() always caching Index.is_unique() as True when first value in index is NaT (GH 55755)

  • Bug in Index.view() to a datetime64 dtype with non-supported resolution incorrectly raising (GH 55710)

  • Bug in Series.dt.round() with non-nanosecond resolution and NaT entries incorrectly raising OverflowError (GH 56158)

  • Bug in Series.fillna() with non-nanosecond resolution dtypes and higher-resolution vector values returning incorrect (internally-corrupted) results (GH 56410)

  • Bug in Timestamp.unit() being inferred incorrectly from an ISO8601 format string with minute or hour resolution and a timezone offset (GH 56208)

  • Bug in .astype converting from a higher-resolution datetime64 dtype to a lower-resolution datetime64 dtype (e.g. datetime64[us]->datetime64[ms]) silently overflowing with values near the lower implementation bound (GH 55979)

  • Bug in adding or subtracting a Week offset to a datetime64 Series, Index, or DataFrame column with non-nanosecond resolution returning incorrect results (GH 55583)

  • Bug in addition or subtraction of BusinessDay offset with offset attribute to non-nanosecond Index, Series, or DataFrame column giving incorrect results (GH 55608)

  • Bug in addition or subtraction of DateOffset objects with microsecond components to datetime64 Index, Series, or DataFrame columns with non-nanosecond resolution (GH 55595)

  • Bug in addition or subtraction of very large Tick objects with Timestamp or Timedelta objects raising OverflowError instead of OutOfBoundsTimedelta (GH 55503)

  • Bug in creating a Index, Series, or DataFrame with a non-nanosecond DatetimeTZDtype and inputs that would be out of bounds with nanosecond resolution incorrectly raising OutOfBoundsDatetime (GH 54620)

  • Bug in creating a Index, Series, or DataFrame with a non-nanosecond datetime64 (or DatetimeTZDtype) from mixed-numeric inputs treating those as nanoseconds instead of as multiples of the dtype’s unit (which would happen with non-mixed numeric inputs) (GH 56004)

  • Bug in creating a Index, Series, or DataFrame with a non-nanosecond datetime64 dtype and inputs that would be out of bounds for a datetime64[ns] incorrectly raising OutOfBoundsDatetime (GH 55756)

  • Bug in parsing datetime strings with nanosecond resolution with non-ISO8601 formats incorrectly truncating sub-microsecond components (GH 56051)

  • Bug in parsing datetime strings with sub-second resolution and trailing zeros incorrectly inferring second or millisecond resolution (GH 55737)

  • Bug in the results of to_datetime() with a floating-dtype argument with unit not matching the pointwise results of Timestamp (GH 56037)

  • Fixed regression where concat() would raise an error when concatenating datetime64 columns with differing resolutions (GH 53641)

Timedelta#

  • Bug in Timedelta construction raising OverflowError instead of OutOfBoundsTimedelta (GH 55503)

  • Bug in rendering (__repr__) of TimedeltaIndex and Series with timedelta64 values with non-nanosecond resolution entries that are all multiples of 24 hours failing to use the compact representation used in the nanosecond cases (GH 55405)

Timezones#

  • Bug in AbstractHolidayCalendar where timezone data was not propagated when computing holiday observances (GH 54580)

  • Bug in Timestamp construction with an ambiguous value and a pytz timezone failing to raise pytz.AmbiguousTimeError (GH 55657)

  • Bug in Timestamp.tz_localize() with nonexistent="shift_forward" around UTC+0 during DST (GH 51501)

Numeric#

Conversion#

  • Bug in DataFrame.astype() when called with str on an unpickled array - the array might change in-place (GH 54654)

  • Bug in DataFrame.astype() where errors="ignore" had no effect for extension types (GH 54654)

  • Bug in Series.convert_dtypes() not converting all NA column to null[pyarrow] (GH 55346)

  • Bug in DataFrame.loc that was not throwing the “incompatible dtype warning” (see PDEP6) when assigning a Series with a different dtype using a full column setter (e.g. df.loc[:, 'a'] = incompatible_value) (GH 39584)

Strings#

Interval#

Indexing#

Missing#

MultiIndex#

I/O#

Period#

  • Bug in PeriodIndex construction when more than one of data, ordinal and **fields are passed failing to raise ValueError (GH 55961)

  • Bug in Period addition silently wrapping around instead of raising OverflowError (GH 55503)

  • Bug in casting from PeriodDtype with astype to datetime64 or DatetimeTZDtype with non-nanosecond unit incorrectly returning with nanosecond unit (GH 55958)

Plotting#

Groupby/resample/rolling#

Reshaping#

Sparse#

  • Bug in arrays.SparseArray.take() when using a different fill value than the array’s fill value (GH 55181)

Other#

Contributors#

A total of 162 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.

  • AG

  • Aaron Rahman +

  • Abdullah Ihsan Secer +

  • Abhijit Deo +

  • Adrian D’Alessandro

  • Ahmad Mustafa Anis +

  • Amanda Bizzinotto

  • Amith KK +

  • Aniket Patil +

  • Antonio Fonseca +

  • Artur Barseghyan

  • Ben Greiner

  • Bill Blum +

  • Boyd Kane

  • Damian Kula

  • Dan King +

  • Daniel Weindl +

  • Daniele Nicolodi

  • David Poznik

  • David Toneian +

  • Dea María Léon

  • Deepak George +

  • Dmitriy +

  • Dominique Garmier +

  • Donald Thevalingam +

  • Doug Davis +

  • Dukastlik +

  • Elahe Sharifi +

  • Eric Han +

  • Fangchen Li

  • Francisco Alfaro +

  • Gadea Autric +

  • Guillaume Lemaitre

  • Hadi Abdi Khojasteh

  • Hedeer El Showk +

  • Huanghz2001 +

  • Isaac Virshup

  • Issam +

  • Itay Azolay +

  • Itayazolay +

  • Jaca +

  • Jack McIvor +

  • JackCollins91 +

  • James Spencer +

  • Jay

  • Jessica Greene

  • Jirka Borovec +

  • JohannaTrost +

  • John C +

  • Joris Van den Bossche

  • José Lucas Mayer +

  • José Lucas Silva Mayer +

  • João Andrade +

  • Kai Mühlbauer

  • Katharina Tielking, MD +

  • Kazuto Haruguchi +

  • Kevin

  • Lawrence Mitchell

  • Linus +

  • Linus Sommer +

  • Louis-Émile Robitaille +

  • Luke Manley

  • Lumberbot (aka Jack)

  • Maggie Liu +

  • MainHanzo +

  • Marc Garcia

  • Marco Edward Gorelli

  • MarcoGorelli

  • Martin Šícho +

  • Mateusz Sokół

  • Matheus Felipe +

  • Matthew Roeschke

  • Matthias Bussonnier

  • Maxwell Bileschi +

  • Michael Tiemann

  • Michał Górny

  • Molly Bowers +

  • Moritz Schubert +

  • NNLNR +

  • Natalia Mokeeva

  • Nils Müller-Wendt +

  • Omar Elbaz

  • Pandas Development Team

  • Paras Gupta +

  • Parthi

  • Patrick Hoefler

  • Paul Pellissier +

  • Paul Uhlenbruck +

  • Philip Meier

  • Philippe THOMY +

  • Quang Nguyễn

  • Raghav

  • Rajat Subhra Mukherjee

  • Ralf Gommers

  • Randolf Scholz +

  • Richard Shadrach

  • Rob +

  • Rohan Jain +

  • Ryan Gibson +

  • Sai-Suraj-27 +

  • Samuel Oranyeli +

  • Sara Bonati +

  • Sebastian Berg

  • Sergey Zakharov +

  • Shyamala Venkatakrishnan +

  • StEmGeo +

  • Stefanie Molin

  • Stijn de Gooijer +

  • Thiago Gariani +

  • Thomas A Caswell

  • Thomas Baumann +

  • Thomas Guillet +

  • Thomas Lazarus +

  • Thomas Li

  • Tim Hoffmann

  • Tim Swast

  • Tom Augspurger

  • Toro +

  • Torsten Wörtwein

  • Ville Aikas +

  • Vinita Parasrampuria +

  • Vyas Ramasubramani +

  • William Andrea

  • William Ayd

  • Willian Wang +

  • Xiao Yuan

  • Yao Xiao

  • Yves Delley

  • Zemux1613 +

  • Ziad Kermadi +

  • aaron-robeson-8451 +

  • aram-cinnamon +

  • caneff +

  • ccccjone +

  • chris-caballero +

  • cobalt

  • color455nm +

  • denisrei +

  • dependabot[bot]

  • jbrockmendel

  • jfadia +

  • johanna.trost +

  • kgmuzungu +

  • mecopur +

  • mhb143 +

  • morotti +

  • mvirts +

  • omar-elbaz

  • paulreece

  • pre-commit-ci[bot]

  • raj-thapa

  • rebecca-palmer

  • rmhowe425

  • rohanjain101

  • shiersansi +

  • smij720

  • srkds +

  • taytzehao

  • torext

  • vboxuser +

  • xzmeng +

  • yashb +