ddotta/parquetize
R package to convert files of various formats to the parquet format
https://ddotta.github.io/parquetize/
68 stars · 4 forks
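Several issues below reference the package's conversion functions (`csv_to_parquet()`, `table_to_parquet()`, `fst_to_parquet()`, and similar). A minimal usage sketch, assuming the argument names `path_to_file` and `path_to_parquet` from the package documentation and placeholder file paths:

```r
library(parquetize)

# Convert a CSV file to a parquet file
# (input and output paths below are placeholders)
csv_to_parquet(
  path_to_file = "data/mydata.csv",
  path_to_parquet = "data/mydata.parquet"
)

# Convert a SAS/SPSS/Stata table to parquet
table_to_parquet(
  path_to_file = "data/mydata.sas7bdat",
  path_to_parquet = "data/mydata.parquet"
)
```

Each `*_to_parquet()` function follows the same pattern: a path to the source file in, a path to the parquet output, plus format-specific options.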
Issues
#56 Replace read_delim by read_delim_arrow · ddotta · open · 6 months ago · 1 comment
#55 add get_parquet_info · nbc · closed · 6 months ago · 1 comment
#54 table_to_parquet() can now convert files with uppercase extensions · ddotta · closed · 9 months ago · 2 comments
#53 Move arrow to Suggests · thisisnic · closed · 9 months ago · 4 comments
#52 Fix error on fedora-clang OS · ddotta · closed · 6 months ago · 2 comments
#51 rds gzfile cannot open connection · ChristosMichaliaslis · closed · 11 months ago · 5 comments
#50 Conversions from SAS tables with uppercase extension names don't work · ddotta · closed · 9 months ago · 0 comments
#49 Adds argument `read_delim_args` to `csv_to_parquet` · nikostr · closed · 1 year ago · 1 comment
#48 Improves documentation of `csv_to_parquet()` for txt files · ddotta · closed · 1 year ago · 1 comment
#47 Specify minimal version for haven · ddotta · closed · 1 year ago · 1 comment
#46 Specify minimal version for haven · ddotta · closed · 1 year ago · 0 comments
#45 fix: remove single quotes in SQL statement · leungi · closed · 1 year ago · 2 comments
#44 Add user_na argument in table_to_parquet function · ddotta · closed · 1 year ago · 2 comments
#43 test: work on download_extract tests to limit the need to download files · nbc · closed · 1 year ago · 2 comments
#42 fix: 503 errors in download_extract tests · nbc · closed · 1 year ago · 1 comment
#41 evol: add an option to check arguments passed by user · nbc · open · 1 year ago · 3 comments
#40 table_to_parquet: SPSS file is not correctly converted to .parquet when it has user-defined missings · Schakel17 · closed · 1 year ago · 8 comments
#39 Compression and inheritParams · ddotta · closed · 1 year ago · 1 comment
#38 Rely more on `@inheritParams` to simplify documentation of function arguments · ddotta · closed · 1 year ago · 0 comments
#37 Group `@importFrom` in a single file to ease maintenance · ddotta · closed · 1 year ago · 0 comments
#36 Arguments `compression` and `compression_level` are never passed to `write_parquet_at_once` · ddotta · closed · 1 year ago · 0 comments
#35 feat: add fst_to_parquet function · ddotta · closed · 1 year ago · 4 comments
#34 Feature/dbi and refactor · nbc · closed · 1 year ago · 4 comments
#33 Add a `duckdb_to_parquet` using low-level arrow functions · ddotta · open · 1 year ago · 3 comments
#32 Feature/dbi to parquet · nbc · closed · 1 year ago · 1 comment
#31 Feature/deprecate chunk size · nbc · closed · 1 year ago · 1 comment
#30 Feature/snapshots · nbc · closed · 1 year ago · 1 comment
#29 Add `fst_to_parquet()` function to convert fst files to parquet format · ddotta · closed · 1 year ago · 0 comments
#28 Add the chunking functionality proposed in `table_to_parquet()` to the other parquetize functions · ddotta · open · 1 year ago · 1 comment
#27 Feature/dbi · nbc · closed · 1 year ago · 14 comments
#26 Feature/refactor · nbc · closed · 1 year ago · 4 comments
#25 There are warnings in the unit-test snapshots that should be deleted · ddotta · closed · 1 year ago · 0 comments
#24 Update vignette once PR #23 is merged · ddotta · closed · 1 year ago · 0 comments
#23 Feature/chunk by memory · nbc · closed · 1 year ago · 4 comments
#22 Feature/allow chunk compression · nbc · closed · 1 year ago · 2 comments
#21 fix: bug in bychunk logic · nbc · closed · 1 year ago · 1 comment
#20 Added column selection to `table_to_parquet()` and `csv_to_parquet()` functions · ddotta · closed · 1 year ago · 1 comment
#19 Add the ability to select only a subset of columns · ddotta · closed · 1 year ago · 0 comments
#18 Variable conversion error caused by encoding · PtiGourou26 · closed · 1 year ago · 1 comment
#17 Solve problems reported by Brian Ripley (CRAN) for Linux · ddotta · closed · 1 year ago · 1 comment
#16 Add metrics · ddotta · open · 1 year ago · 0 comments
#15 Use a callback function in read_by_chunk()? · ddotta · closed · 1 year ago · 0 comments
#14 Add the feature to convert duckdb files · ddotta · closed · 2 years ago · 1 comment
#13 Add the feature to convert sqlite files · ddotta · closed · 2 years ago · 0 comments
#12 Add the feature to convert json files · ddotta · closed · 2 years ago · 0 comments
#11 Add the feature to convert txt files · ddotta · closed · 1 year ago · 5 comments
#10 Add the feature to convert pickle files · ddotta · closed · 2 years ago · 1 comment
#9 Improve code coverage with utility functions · ddotta · closed · 2 years ago · 0 comments
#8 Check if `path_to_parquet` exists · py-b · closed · 2 years ago · 0 comments
#7 Add function to convert to parquet by partitioning with `write_dataset()` · ddotta · closed · 2 years ago · 0 comments