-
When a Parquet file contains fields that are structs (objects) with just a few values, or even a collection like a list or set underneath, the browser should be able to show it.
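For reference, a file with exactly those shapes can be produced with pyarrow (a minimal sketch; column names are made up):
```python
import pyarrow as pa
import pyarrow.parquet as pq

# One struct column with a few values, one list column underneath
table = pa.table({
    "id": [1, 2],
    "point": [{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}],  # struct field
    "tags": [["a", "b"], ["c"]],                            # list field
})
pq.write_table(table, "nested.parquet")
```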
-
### Checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pypi.org/project/polars/) of Polars.
### Repro…
-
AWS has an option to export an RDS DB snapshot to Parquet files in an S3 bucket, but the resulting files are gzipped.
Is it possible to load them directly with the fdw, or do I need to run a batch job …
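If the exported objects are whole-file gzipped (`*.parquet.gz`) rather than Parquet with an internal gzip codec, one possible batch job is simply decompress-and-rewrite; a rough boto3 sketch (bucket and prefix are placeholders):
```python
import gzip
import boto3

s3 = boto3.client("s3")
bucket, prefix = "my-export-bucket", "snapshot-export/"  # placeholders

# Decompress every *.parquet.gz object and store the plain *.parquet next to it
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", []):
    key = obj["Key"]
    if key.endswith(".parquet.gz"):
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        s3.put_object(Bucket=bucket, Key=key[:-3], Body=gzip.decompress(body))
```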
-
### Feature Request / Improvement
Currently, if I am using pyiceberg to create/maintain my Iceberg tables and I use Trino (AWS Athena) to run compaction on them (using Spark), the files created vi…
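For reference, the pyiceberg side of that workflow looks roughly like this (a sketch; the catalog and table names are placeholders, and the exact API may vary by pyiceberg version):
```python
import pyarrow as pa
from pyiceberg.catalog import load_catalog

# Load a configured catalog and create/maintain a table from Python
catalog = load_catalog("default")  # placeholder catalog name
schema = pa.schema([("id", pa.int64()), ("payload", pa.string())])
table = catalog.create_table("db.events", schema=schema)

# Appends like this produce the data files that Trino/Spark later compact
table.append(pa.table({"id": [1, 2], "payload": ["a", "b"]}))
```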
-
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
As we work on various features of Parquet metadata, it is becoming clear that working with the…
-
To repro this, you have to introduce an artificial delay or use _a lot_ of rows.
-
Start it locally:
```bash
python -m buenavista.examples.duckdb_postgres
```
Connect via Postico:
```sql
SET s3_endpoint = 'xx.com';
select count(*) from 's3://bucket/a.parquet';
```
it connects t…
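For a scriptable version of the same repro, a psycopg2 client can stand in for Postico (the host, port, and credentials below are guesses for the example's defaults):
```python
import psycopg2

# Connect to the locally running buenavista proxy over the Postgres protocol
conn = psycopg2.connect(host="localhost", port=5433, user="postgres", dbname="main")
cur = conn.cursor()
cur.execute("SET s3_endpoint = 'xx.com'")
cur.execute("select count(*) from 's3://bucket/a.parquet'")
print(cur.fetchone())
```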
-
### Describe the bug
Consider a snippet like this:
```rust
df.write_parquet(
    "dir/data",
    // request a single output file instead of one file per partition
    DataFrameWriteOptions::new().with_single_file_output(true),
    // no custom writer properties
    None,
).await
```
Before v43 this w…
-
**Is your feature request related to a problem? Please describe.**
If we have a really large dataframe that exceeds memory, and we need to process each part of it, `parquet` supports `batch_size`. I'…
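For comparison, this is what batched reading looks like on the pyarrow side; a minimal sketch (the file name and the `process` handler are stand-ins):
```python
import pyarrow.parquet as pq

# Open the file lazily; nothing is read into memory yet
pf = pq.ParquetFile("big.parquet")

# Stream record batches of at most 65,536 rows and handle them one at a time
for batch in pf.iter_batches(batch_size=65536):
    process(batch)  # hypothetical per-batch handler, e.g. batch.to_pandas()
```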
-
### Checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pypi.org/project/polars/) of Polars.
### Re…