Open jornfranke opened 9 months ago
Relevant point. For now, it is hardcoded with a simple configuration, which is admittedly not satisfactory.
Defining the Map in the VTL script seems to be impossible.
I imagine two ways to enable custom configuration:
I suggest we discuss this during our future call.
Hi @jornfranke,
We discussed the possibilities with @hadrienk:

1. Rely on VTL user-defined operators, with default parameter values, on top of the `loadCSV` Java function defined and provided in the Kernel:

```
define operator read(url string, sep string default ";", delimiter string default "'", header boolean default true)
  returns dataset is
    readCSV(url, sep, delimiter, header)
end operator;
```

And instantiate it with `read("path", _, _, false);`, for instance. At this point, parameters are only positional in VTL, which would give an ugly syntax if we wanted to expose many options as parameters (see this issue posted on the VTL TF repo).

2. Pass options as query parameters in the path, e.g. `loadCSV("path?header=false");`, and handle them directly in the `loadCSV` Java function provided in the Jupyter Kernel.

What do you think?
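The second option (options as query parameters in the path) could be handled with a small parser on the Java side. This is only a sketch under that assumption: the `PathOptions` class and its method names are hypothetical, not the kernel's actual code.

```java
import java.util.HashMap;
import java.util.Map;

public class PathOptions {

    /** Extracts "key=value" pairs after the '?' into an option map. */
    public static Map<String, String> parseOptions(String url) {
        Map<String, String> options = new HashMap<>();
        int q = url.indexOf('?');
        if (q < 0) return options; // no query string: no options
        for (String pair : url.substring(q + 1).split("&")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                options.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return options;
    }

    /** Strips the query string, keeping only the file path. */
    public static String barePath(String url) {
        int q = url.indexOf('?');
        return q < 0 ? url : url.substring(0, q);
    }
}
```

The kernel's `loadCSV` could then call `barePath` to locate the file and look up keys such as `header` or `sep` in the returned map, falling back to the current defaults when a key is absent.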
Thanks for the feedback. We implemented a workaround internally and will look into the second option you propose. The first option would probably make sense once Trevas supports it.
Currently, one can load/write data only with specific default options, without being able to customize them.
For instance, in loadCsv or writeCsv I cannot specify a separator other than ";". There are many other options relevant for reading/writing CSV files.
loadParquet/writeParquet do not allow specifying options such as compression (cf. here).
I propose adding functions, e.g. loadCsvWithParameters, that take as input the path and a Map<String,String>, which allows specifying arbitrary options. I am not exactly sure how one can pass a Map<String,String> initialized with data in VTL. Alternatively, one could simply provide a "config" Dataset.class, i.e. a dataset with two columns (key, value).
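The proposed loadCsvWithParameters could look like the following plain-Java sketch. The method name comes from this proposal; the recognized option keys ("sep", "header") and the in-memory line-based parsing are assumptions for illustration only, since the real kernel would forward the map to its underlying CSV reader.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

public class CsvLoader {

    /**
     * Hypothetical loadCsvWithParameters: splits CSV lines into rows,
     * driven by an option map. Keys assumed here:
     *   "sep"    - field separator (default ";")
     *   "header" - whether the first line is a header row (default "true")
     * Unknown keys are ignored, so callers can pass any options through.
     */
    public static List<String[]> loadCsvWithParameters(List<String> lines, Map<String, String> options) {
        String sep = options.getOrDefault("sep", ";");
        boolean header = Boolean.parseBoolean(options.getOrDefault("header", "true"));
        List<String[]> rows = new ArrayList<>();
        int start = header ? 1 : 0; // skip the header row when present
        for (int i = start; i < lines.size(); i++) {
            // -1 keeps trailing empty fields instead of dropping them
            rows.add(lines.get(i).split(Pattern.quote(sep), -1));
        }
        return rows;
    }
}
```

A writeCsvWithParameters counterpart could consume the same map, which keeps both directions symmetric and leaves room for later options (quoting, encoding, compression) without changing the signature.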