HDFStore.append(key, value, format=None, axes=None, index=True, append=True, complib=None, complevel=None, columns=None, min_itemsize=None, nan_rep=None, chunksize=None, expectedrows=None, dropna=None, data_columns=None, encoding=None, errors='strict')

Append to Table in file.

Node must already exist and be Table format.


Parameters

key : str

Key of object to append.

value : {Series, DataFrame}

Value of object to append.

format : 'table' is the default

Format to use when storing object in HDFStore. Value can be one of:

'table'

Table format. Write as a PyTables Table structure, which may perform worse but allows more flexible operations like searching / selecting subsets of the data.

index : bool, default True

Write DataFrame index as a column.

append : bool, default True

Append the input data to the existing data.

data_columns : list of columns, or True, default None

List of columns to create as indexed data columns for on-disk queries, or True to use all columns. By default only the axes of the object are indexed.
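A minimal sketch of what indexing a data column enables (the file and column names here are illustrative, not from the original text): a column listed in data_columns can be used in a `where` clause passed to HDFStore.select, so only matching rows are read from disk.

```python
import pandas as pd

# Illustrative sketch: "B" is declared as a data column so it can be
# queried on disk; requires PyTables to be installed.
df = pd.DataFrame({"A": [1, 2, 3, 4], "B": ["x", "y", "x", "y"]})
with pd.HDFStore("query_store.h5", "w") as store:
    store.append("data", df, format="table", data_columns=["B"])
    # Only rows where B == "x" are read back from disk.
    subset = store.select("data", where='B == "x"')
```

Columns not listed in data_columns are stored but cannot appear in a `where` expression.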

min_itemsize : int, dict, or None

Dict of columns that specify minimum str sizes.


nan_rep : str

Str to use as str nan representation.

chunksize : int or None

Size to chunk the writing.


expectedrows : int

Expected TOTAL row size of this table.

encoding : default None

Provide an encoding for str.

dropna : bool, default False, optional

Do not write an all-NaN row to the store; settable by the option 'io.hdf.dropna_table'.
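For illustration (the store name is an assumption, not from the original text), a row that is NaN in every column is skipped when dropna=True:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1.0, np.nan], "B": [2.0, np.nan]})
with pd.HDFStore("dropna_store.h5", "w") as store:
    # The second row is NaN in every column, so dropna=True skips it.
    store.append("data", df, format="table", dropna=True)
    result = store["data"]
```

Rows that are only partially NaN are always written.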


Notes

Does not check if data being appended overlaps with existing data in the table, so be careful.


Examples

>>> df1 = pd.DataFrame([[1, 2], [3, 4]], columns=["A", "B"])
>>> store = pd.HDFStore("store.h5", "w")
>>> store.put("data", df1, format="table")
>>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=["A", "B"])
>>> store.append("data", df2)
>>> print(store["data"])
   A  B
0  1  2
1  3  4
0  5  6
1  7  8
>>> store.close()