DataFrame.to_orc(path=None, *, engine='pyarrow', index=None, engine_kwargs=None)[source]#

Write a DataFrame to the ORC format.

New in version 1.5.0.

Parameters

path : str, file-like object or None, default None

If a string, it will be used as Root Directory path when writing a partitioned dataset. By file-like object, we refer to objects with a write() method, such as a file handle (e.g. via builtin open function). If path is None, a bytes object is returned.

engine : str, default ‘pyarrow’

ORC library to use. Pyarrow must be >= 7.0.0.

index : bool, optional

If True, include the dataframe’s index(es) in the file output. If False, they will not be written to the file. If None, similar to True the dataframe’s index(es) will be saved, except that a RangeIndex is stored as a range in the metadata rather than as values, so it requires little space and is faster. Other indexes are included as columns in the file output.

engine_kwargs : dict[str, Any] or None, default None

Additional keyword arguments passed to pyarrow.orc.write_table().

Returns

bytes if no path argument is provided else None

Raises

NotImplementedError

Dtype of one or more columns is category, unsigned integers, interval, period or sparse.

ValueError

engine is not pyarrow.

See also

read_orc

Read an ORC file.

DataFrame.to_parquet

Write a parquet file.

DataFrame.to_csv

Write a csv file.

DataFrame.to_sql

Write to a sql table.

DataFrame.to_hdf

Write to hdf.



Examples

>>> df = pd.DataFrame(data={'col1': [1, 2], 'col2': [4, 3]})
>>> df.to_orc('df.orc')  
>>> pd.read_orc('df.orc')  
   col1  col2
0     1     4
1     2     3

If you want to get a buffer to the orc content you can write it to io.BytesIO:

>>> import io
>>> b = io.BytesIO(df.to_orc())  # doctest: +SKIP
>>> b.seek(0)  # doctest: +SKIP
0
>>> content = b.read()  # doctest: +SKIP