cboettig opened this issue 1 year ago
The discovery code is roughly:

```python
def discover_dataset(directory, unify_schemas=False):
    # List every file under the dataset root (a recursive walk).
    files = list_files_recursive(directory)
    if unify_schemas:
        # Read the schema of every file and unify them into one.
        return unify([get_schema(file) for file in files])
    else:
        # Otherwise only the first file's schema is read.
        return get_schema(files[0])
```
I think the problem is that all of this time is being spent in `list_files_recursive`. (Note that the listing happens before the schema branch above, so it runs even when a schema is supplied.) There may be two problems:

I'm going to investigate #1 a bit.
Ok, I've filed #34213 for #1. I suspect that, if that is solved, #2 won't be very noticeable until you have tens of thousands of files. Either way, we can leave this issue open for addressing #2.
Is it possible to somehow tell `ds.dataset(...)` not to run its discovery/schema-inference procedure? This procedure takes far too long when the dataset folder contains thousands of files. I tried passing `schema` and `exclude_invalid_files=False`, but neither helps.
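One possible way to sidestep discovery entirely (a sketch that is not from the original thread; it assumes the file paths are already known, e.g. cached from a previous listing, along with the schema) is to construct the dataset from an explicit path list with `FileSystemDataset.from_paths`:

```python
import pyarrow as pa
import pyarrow.dataset as ds
from pyarrow import fs

# Hypothetical filesystem, paths, and schema; adapt to your bucket layout.
s3 = fs.S3FileSystem(region="us-east-1")
paths = [
    "example-bucket/partitioned-parquet-db/year=2021/part-0.parquet",
    "example-bucket/partitioned-parquet-db/year=2022/part-0.parquet",
]
schema = pa.schema([("id", pa.int64()), ("value", pa.float64())])

# No directory listing or per-file schema inspection happens here,
# though partition columns are not inferred from the paths either.
dataset = ds.FileSystemDataset.from_paths(
    paths,
    schema=schema,
    format=ds.ParquetFileFormat(),
    filesystem=s3,
)
```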
### Describe the bug, including details regarding any error messages, version, and platform.
Consider the following reprex, in which we open a partitioned parquet dataset on a remote S3 bucket:
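The original snippet is not reproduced here; a minimal sketch of the call, assuming a hypothetical bucket and hive-style partitioning, would look like:

```python
import time
import pyarrow.dataset as ds

# Hypothetical S3 location standing in for the real bucket in the reprex.
uri = "s3://example-bucket/partitioned-parquet-db"

t0 = time.time()
dataset = ds.dataset(uri, format="parquet", partitioning="hive")
print(f"discovery took {time.time() - t0:.1f} s")
```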
This takes a whopping 102 seconds on my machine. I believe most of the time is spent checking the metadata of each parquet file, since this dataset has many individual partitions. That step should not be necessary, though, if we provide the schema ourselves:
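Again, the exact code is not shown; a sketch of the schema-passing variant, reusing `dataset.schema` from the call above:

```python
t0 = time.time()
dataset2 = ds.dataset(uri, format="parquet", partitioning="hive",
                      schema=dataset.schema)
print(f"discovery with explicit schema took {time.time() - t0:.1f} s")
```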
But observe that the execution time is once again 102 seconds. Note that if we manually specify a single partition, the process takes only 2.7 seconds:
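A sketch of the single-partition case (the partition key and value here are hypothetical):

```python
t0 = time.time()
one_part = ds.dataset(uri + "/year=2021", format="parquet")
print(f"single-partition discovery took {time.time() - t0:.1f} s")
```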
I would have expected similar performance between these two cases: `ds.dataset()` should just be establishing a connection and the schema, not reading any data. After all, part of the promise of partitioned data is that we can open the dataset at the root of the parquet db and rely on filtering operations to extract a specific subset, without the code ever touching the other parquet files.

My guess here is that arrow is trying to read all of the parquet file metadata to ensure it all matches the schema, even though that is not the expected behavior. I think this is the same issue as the one seen in R: https://github.com/apache/arrow/issues/33312. But maybe I'm not doing something correctly, or I misunderstand the expected behavior?
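For illustration, the partition-pruned workflow described above would look roughly like this (the column name is hypothetical):

```python
# Filtering on a partition column should prune files rather than scan them all.
table = dataset.to_table(filter=ds.field("year") == 2021)
```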
### Component(s)

Parquet, Python