-
The scenario is working with very large (many GB) csv files and wanting to save them as parquet files after processing, without reading/collecting them entirely into memory.
I know how to do this with …
-
## Description
Data has been accidentally overwritten in the past after copy-pasting a catalog entry to derive a new one and forgetting to change the filepath. I feel it would be useful to prote…
-
It would be useful to be able to start an extraction job from a database table which contains a list of extraction identifiers (e.g., SeriesInstanceUIDs)
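A minimal sketch of the first half of such a feature, assuming for illustration that the identifiers live in a SQLite table (the table and column names here are hypothetical placeholders, not part of any real schema):

```python
import sqlite3

def fetch_extraction_ids(db_path, table="extraction_requests",
                         column="SeriesInstanceUID"):
    # Read every identifier from the given table/column.  Both names
    # are hypothetical; substitute the real schema's names.
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(f"SELECT {column} FROM {table}")
        return [row[0] for row in cur]
    finally:
        con.close()
```

Each returned identifier could then be handed to whatever per-UID extraction entry point already exists.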
-
Is there a way to export existing pages to csv/sql?
-
### What happens?
The SQL command:
```
CREATE MACRO read(fn) AS (SELECT * FROM read_csv_auto(fn));
```
results in a Binder Error:
```
Table function "read_csv_auto" does not support lateral j…
```
-
load-bulk-data-2024-05-07.sh is not working for me:
1. There are 80 tables in schema-2024-05-07.sql, but considerably fewer csv files than tables. Many tables do not have corresponding csv files to …
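To see which tables the script has no data for, a quick check like the following might help. It is a sketch under two assumptions: tables are declared as `CREATE TABLE <name>` (optionally quoted) in the schema file, and the data files are named `<table>.csv`:

```python
import pathlib
import re

def tables_missing_csvs(schema_sql_path, csv_dir):
    # Pull table names out of CREATE TABLE statements (a simple regex,
    # not a full SQL parser) and report tables with no <table>.csv.
    sql = pathlib.Path(schema_sql_path).read_text()
    pattern = r'CREATE TABLE(?: IF NOT EXISTS)?\s+"?(\w+)"?'
    tables = set(re.findall(pattern, sql, re.IGNORECASE))
    csvs = {p.stem for p in pathlib.Path(csv_dir).glob("*.csv")}
    return sorted(tables - csvs)
```

Running this against schema-2024-05-07.sql and the csv directory would list exactly which of the 80 tables the loader will find nothing for.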
-
I have some fairly complex SQL SELECT statements stored in SQL files, used to export data to CSV files. These files are shared with other team members using a VCS and are run/fixed/improved by any team me…
-
### What happens?
I'm having another look at the [1.1B taxi rides benchmark](https://tech.marksblogg.com/duckdb-1b-taxi-rides.html) with v1.0.0 that was just released on Windows 11. Since that post…
-
Hello! Sorry for the newbie question. How do I use dag-factory with the Astro Python SDK? Is this possible?
I'm trying to adapt the titanic example. There's a piece of code that loads the dataset:
```py…
-
I attended a posit::conf 2024 workshop and ran into this issue with `duckplyr_df_from_csv`:
### Create example data
using the first three rows of the IMDB data
```
myData = readr::read_tsv(
"tconst…