Series.to_json
Convert the object to a JSON string.
Note: NaNs and None will be converted to null, and datetime objects will be converted to UNIX timestamps.
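For instance, with a small throwaway Series (a minimal sketch; the strings shown assume the default settings described below):

>>> import pandas as pd
>>> pd.Series([1.0, None]).to_json()
'{"0":1.0,"1":null}'
>>> pd.Series(pd.to_datetime(["2020-01-01"])).to_json()
'{"0":1577836800000}'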
Parameters
path_or_buf : str or file handle, optional
File path or object. If not specified, the result is returned as a string.
orient : str
Indication of expected JSON string format.
Series:
default is ‘index’; allowed values are: {‘split’, ‘records’, ‘index’, ‘table’}.
DataFrame:
default is ‘columns’; allowed values are: {‘split’, ‘records’, ‘index’, ‘columns’, ‘values’, ‘table’}.
The format of the JSON string:
‘split’ : dict like {‘index’ -> [index], ‘columns’ -> [columns], ‘data’ -> [values]}
‘records’ : list like [{column -> value}, … , {column -> value}]
‘index’ : dict like {index -> {column -> value}}
‘columns’ : dict like {column -> {index -> value}}
‘values’ : just the values array
‘table’ : dict like {‘schema’: {schema}, ‘data’: {data}}
Describing the data, where the data component is like orient='records'.
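For instance, the Series defaults listed above look like this (a minimal sketch using a throwaway Series):

>>> import pandas as pd
>>> s = pd.Series(["a", "b"], index=["row 1", "row 2"])
>>> s.to_json()
'{"row 1":"a","row 2":"b"}'
>>> s.to_json(orient="records")
'["a","b"]'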
date_format : {None, ‘epoch’, ‘iso’}
Type of date conversion. ‘epoch’ = epoch milliseconds, ‘iso’ = ISO8601. The default depends on the orient. For orient='table', the default is ‘iso’. For all other orients, the default is ‘epoch’.
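A minimal sketch of the two encodings, assuming a one-element datetime Series (the exact ISO string depends on date_unit and the pandas version):

>>> import pandas as pd
>>> ts = pd.Series(pd.to_datetime(["2020-01-01"]))
>>> ts.to_json(date_format="epoch")
'{"0":1577836800000}'
>>> ts.to_json(date_format="iso")    # ISO 8601 strings such as "2020-01-01T00:00:00.000Z"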
double_precision : int, default 10
The number of decimal places to use when encoding floating point values.
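For example (a minimal sketch):

>>> import pandas as pd
>>> pd.Series([3.141592653589793]).to_json(double_precision=3)
'{"0":3.142}'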
force_ascii : bool, default True
Force encoded string to be ASCII.
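For example, non-ASCII characters are escaped unless force_ascii is disabled (a minimal sketch):

>>> import pandas as pd
>>> pd.Series(["café"]).to_json()
'{"0":"caf\\u00e9"}'
>>> pd.Series(["café"]).to_json(force_ascii=False)
'{"0":"café"}'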
date_unit : str, default ‘ms’ (milliseconds)
The time unit to encode to, governs timestamp and ISO8601 precision. One of ‘s’, ‘ms’, ‘us’, ‘ns’ for second, millisecond, microsecond, and nanosecond respectively.
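For example, switching to second precision (a minimal sketch):

>>> import pandas as pd
>>> pd.Series(pd.to_datetime(["2020-01-01"])).to_json(date_unit="s")
'{"0":1577836800}'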
default_handler : callable, default None
Handler to call if an object cannot otherwise be converted to a suitable format for JSON. Should receive a single argument, which is the object to convert, and return a serialisable object.
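A rough sketch: values the encoder cannot handle natively (here a complex number) are passed to the handler.

>>> import pandas as pd
>>> s = pd.Series([complex(1, 2)])
>>> s.to_json(default_handler=str)    # each unsupported value is converted with str()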
lines : bool, default False
If ‘orient’ is ‘records’, write out line-delimited JSON format. Will throw ValueError if ‘orient’ is incorrect, since the other orients are not list-like.
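For example (a minimal sketch):

>>> import pandas as pd
>>> df = pd.DataFrame({"a": [1, 2]})
>>> df.to_json(orient="records", lines=True)    # one JSON object per line, newline-delimited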
compression : {‘infer’, ‘gzip’, ‘bz2’, ‘zip’, ‘xz’, None}, default ‘infer’
A string representing the compression to use in the output file, only used when the first argument is a filename. By default, the compression is inferred from the filename.
Changed in version 0.24.0: ‘infer’ option added and set to default
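For example (a minimal sketch; the file names are illustrative):

>>> import pandas as pd
>>> df = pd.DataFrame({"a": [1, 2]})
>>> df.to_json("data.json.gz")                  # gzip inferred from the ".gz" suffix
>>> df.to_json("data.json", compression=None)   # plain, uncompressed output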
index : bool, default True
Whether to include the index values in the JSON string. Not including the index (index=False) is only supported when orient is ‘split’ or ‘table’.
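For example (a minimal sketch):

>>> import pandas as pd
>>> df = pd.DataFrame({"a": [1, 2]}, index=["x", "y"])
>>> df.to_json(orient="split", index=False)
'{"columns":["a"],"data":[[1],[2]]}'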
indent : int, optional
Length of whitespace used to indent each record.
New in version 1.0.0.
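For example (a minimal sketch):

>>> import pandas as pd
>>> pd.Series([1, 2]).to_json(indent=4)    # returns a multi-line, pretty-printed string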
storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g. host, port, username, password, etc., if using a URL that will be parsed by fsspec, e.g., starting “s3://” or “gcs://”. An error will be raised if providing this argument with a non-fsspec URL. See the fsspec and backend storage implementation docs for the set of allowed keys and values.
New in version 1.2.0.
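A rough sketch, assuming the optional s3fs/fsspec backend is installed; the bucket name and option keys are illustrative only:

>>> import pandas as pd
>>> df = pd.DataFrame({"a": [1, 2]})
>>> df.to_json(
...     "s3://my-bucket/data.json",
...     storage_options={"anon": False},    # valid keys are defined by the backend, not by pandas
... )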
Returns
None or str
If path_or_buf is None, returns the resulting JSON format as a string. Otherwise returns None.
See also
read_json
Convert a JSON string to a pandas object.
Notes
The behavior of indent=0 varies from the stdlib, which does not indent the output but does insert newlines. Currently, indent=0 and the default indent=None are equivalent in pandas, though this may change in a future release.
orient='table' contains a ‘pandas_version’ field under ‘schema’. This stores the version of pandas used in the latest revision of the schema.
Examples
>>> import json
>>> import pandas as pd
>>> df = pd.DataFrame(
...     [["a", "b"], ["c", "d"]],
...     index=["row 1", "row 2"],
...     columns=["col 1", "col 2"],
... )
>>> result = df.to_json(orient="split") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) { "columns": [ "col 1", "col 2" ], "index": [ "row 1", "row 2" ], "data": [ [ "a", "b" ], [ "c", "d" ] ] }
Encoding/decoding a DataFrame using 'records' formatted JSON. Note that index labels are not preserved with this encoding.
>>> result = df.to_json(orient="records") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) [ { "col 1": "a", "col 2": "b" }, { "col 1": "c", "col 2": "d" } ]
Encoding/decoding a DataFrame using 'index' formatted JSON:
>>> result = df.to_json(orient="index") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) { "row 1": { "col 1": "a", "col 2": "b" }, "row 2": { "col 1": "c", "col 2": "d" } }
Encoding/decoding a DataFrame using 'columns' formatted JSON:
>>> result = df.to_json(orient="columns") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) { "col 1": { "row 1": "a", "row 2": "c" }, "col 2": { "row 1": "b", "row 2": "d" } }
Encoding/decoding a DataFrame using 'values' formatted JSON:
>>> result = df.to_json(orient="values") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) [ [ "a", "b" ], [ "c", "d" ] ]
Encoding with Table Schema:
>>> result = df.to_json(orient="table") >>> parsed = json.loads(result) >>> json.dumps(parsed, indent=4) { "schema": { "fields": [ { "name": "index", "type": "string" }, { "name": "col 1", "type": "string" }, { "name": "col 2", "type": "string" } ], "primaryKey": [ "index" ], "pandas_version": "0.20.0" }, "data": [ { "index": "row 1", "col 1": "a", "col 2": "b" }, { "index": "row 2", "col 1": "c", "col 2": "d" } ] }