Closed · hagenw closed this 4 weeks ago
As a side note, pyarrow will become a required dependency of pandas anyway: https://github.com/pandas-dev/pandas/blob/main/web/pandas/pdeps/0010-required-pyarrow-dependency.md
So it should be fine if we start integrating pyarrow-based approaches here as well, e.g., storing dependencies as parquet files.
Benchmark results comparing csv, pickle, and parquet files for storing the dependency table are now available at https://github.com/audeering/audb/tree/a8bb3367a37fae79601e189ccac76a1a12105bae/benchmarks#audbdependencies-loadingwriting-to-file.
We first focus on the results for reading as this will be performed more often than writing.
The best reading performance is achieved when using pyarrow dtypes for the internal dataframe representation of the dependency table, together with pyarrow.Table for reading from csv and parquet files. Writing performance is covered in the linked benchmark results as well.
With those results in mind, it seems reasonable to switch to storing the dependency table directly as parquet files, both on the server and in the cache.
Solved by #372.
For tables we support CSV to provide them in a human-readable format, but this is not necessary for the dependency table. In addition, the dependency table is frequently accessed to gather basic information about a database.
I think it would make sense to switch to another format when storing it for new databases. It should be fast to read, and maybe support reading only parts of it, like selected columns or rows, to make sure it always fits in memory.