Closed: hagenw closed this issue 4 months ago
Related to https://github.com/audeering/audb/issues/181
To answer the first question, we create a PARQUET file and a corresponding ZIP file, and compare their sizes.
NOTE: the following example requires the dev branch of audb at the moment.
import os

import audb
import audeer

# Load the dependency table of the dataset
deps = audb.dependencies("musan", version="1.0.0")

parquet_file = "deps.parquet"
zip_file = "deps.zip"

# Store the table as a PARQUET file, and additionally zip it
deps.save(parquet_file)
audeer.create_archive(".", parquet_file, zip_file)

# Compare file sizes
parquet_size = os.stat(parquet_file).st_size
zip_size = os.stat(zip_file).st_size
print(f"Parquet file size: {parquet_size >> 10:.0f} kB")
print(f"Zip file size: {zip_size >> 10:.0f} kB")
returns
Parquet file size: 175 kB
Zip file size: 130 kB
I repeated it with librispeech 3.1.0 from our internal repository to have an example of a bigger dataset:
Parquet file size: 21848 kB
Zip file size: 16163 kB
Regarding the second question, we would need to change the following code block in audb/core/publish.py:
There we could simply use put_file() instead of put_archive(), so that the file is no longer zipped.
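The difference can be sketched with local stand-ins for the two backend calls (the helper bodies and the "server" directory here are illustrative, not the actual audb backend implementation):

```python
import os
import shutil
import tempfile
import zipfile


def put_archive(src, dst_dir):
    # Current behavior: wrap the file in a ZIP archive before uploading
    dst = os.path.join(dst_dir, "deps.zip")
    with zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(src, arcname=os.path.basename(src))
    return dst


def put_file(src, dst_dir):
    # Proposed behavior: copy the PARQUET file to the server unchanged
    dst = os.path.join(dst_dir, os.path.basename(src))
    shutil.copy(src, dst)
    return dst


with tempfile.TemporaryDirectory() as tmp:
    server = os.path.join(tmp, "server")
    os.makedirs(server)
    src = os.path.join(tmp, "deps.parquet")
    with open(src, "wb") as f:
        f.write(b"parquet-bytes" * 1000)
    # No intermediate ZIP is created on this code path
    uploaded = put_file(src, server)
    with open(uploaded, "rb") as f:
        assert f.read() == b"parquet-bytes" * 1000
```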
Slightly more complicated is the case of loading the dependency table, as we might find either a ZIP file or a PARQUET file on the server, which is not ideal. The affected code block is in audb/core/api.py, in the definition of audb.dependencies():
There we could first try to load the PARQUET file (or check if it exists), and otherwise load the ZIP file. An alternative approach would be to keep using ZIP, but not compress the file, as proposed in https://github.com/audeering/audb/issues/181#issuecomment-1056854297
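The fallback logic could look roughly like this, sketched against local paths (the real code would go through the backend's exists/get calls instead; the file names are illustrative):

```python
import os
import tempfile


def dependency_file(root):
    # Prefer the new plain PARQUET file; fall back to the legacy ZIP
    for name in ("deps.parquet", "deps.zip"):
        path = os.path.join(root, name)
        if os.path.exists(path):
            return path
    raise FileNotFoundError("no dependency table found")


with tempfile.TemporaryDirectory() as tmp:
    # Only the legacy ZIP exists -> fall back to it
    open(os.path.join(tmp, "deps.zip"), "w").close()
    assert dependency_file(tmp).endswith("deps.zip")
    # Once a PARQUET file is present, it wins
    open(os.path.join(tmp, "deps.parquet"), "w").close()
    assert dependency_file(tmp).endswith("deps.parquet")
```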
Then there are also two affected parts inside remove_media() in audb/core/api.py:
To answer the third question, I created the benchmark script shown below, which tests different ways to store and load the dependency table for a dataset containing 292,381 files. Running the script returns:
parquet snappy
Writing time: 0.2501 s
Reading time: 0.1112 s
File size: 21848 kB
parquet snappy + zip no compression
Writing time: 0.2985 s
Reading time: 0.1290 s
File size: 21848 kB
parquet snappy + zip
Writing time: 1.1113 s
Reading time: 0.2630 s
File size: 16163 kB
parquet gzip
Writing time: 1.5897 s
Reading time: 0.1205 s
File size: 13524 kB
The zipped CSV file currently used to store the dependency table of the same dataset has a size of 14390 kB.
"zip no compression" is referring to the solution proposed in #181, to still be able to upload the files as ZIP files to the server. In #181 we discuss media files, for which it is important to store them in a ZIP file, as we also have to preserve the underlying folder structure. This is not the case for the dependency table, and also the file extension will always be the same for the dependency table.
Our current approach is "parquet snappy + zip". If we switch to any of the other approaches, reading time would be halved. We can choose between using GZIP directly when creating the PARQUET file, which increases writing time but reduces the file size, or switching to SNAPPY compression, which decreases writing time but results in a larger file. @ChristianGeng any preference?
In general I think that disk storage is normally quite cheap, so I would find it a good move to be able to read data faster. So I would be open to departing from "parquet snappy + zip" and optimizing for reading time by going in the snappy direction.
The SOV post here also suggests that heavy zipping is for cold data. I think we have something in between, lukewarm data, but CPU is normally more expensive. Apart from that, the post discusses "splittability". Concerning determinism, needed to be able to md5sum a file, I have not been able to find an answer.
I agree that compressing the PARQUET file with SNAPPY and storing it directly on the backend seems to be the best solution. I created #398, which implements this proposal.
We decided to no longer zip the dependency table, and instead store it directly on the server, as implemented in #398.
In https://github.com/audeering/audb/pull/372 we introduced storing the dependency table as a PARQUET file instead of a CSV file. When the file is uploaded to the server, a ZIP file is still created first. As PARQUET already comes with compression, we should check: