cawoodm opened 1 year ago
Hey @cawoodm, is anyone working on this one? I'll take it.
While I am not against the idea, I can't say I fully support it either. However, if you come up with a clean `delimiter_auto` option, I'll probably merge it.
@wdavidw I looked on GitHub to see what other people were doing.
The node-csv-string `detect` function basically looks for the first occurrence of one of the delimiters, which I guess is fine for most cases.
A more advanced implementation is the detect-csv `determineMost` function, which looks at a sample and returns the delimiter with the highest occurrence count.
What do you think?
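For illustration, the first strategy can be sketched in a few lines (this is a sketch of the idea, not node-csv-string's actual code; the candidate list is an assumption):

```javascript
// First-occurrence strategy: scan the text and return the first candidate
// delimiter encountered. Simple, and usually fine when the header line
// contains no stray punctuation.
const CANDIDATES = [',', ';', '\t', '|'];

function detectFirstOccurrence(text) {
  for (const char of text) {
    if (CANDIDATES.includes(char)) return char;
  }
  return ','; // no candidate found: fall back to the default delimiter
}
```

For example, `detectFirstOccurrence('a;b;c')` yields `';'`. The weakness is obvious: a comma inside the first quoted field would win over the real delimiter.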
I would tend to discover the character, as in the second method, after filtering out any character already used in the options (e.g. quotes, record delimiters, ...) and general ASCII characters (`[a-zA-Z0-9]`, as well as accented characters).
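The counting-with-filtering idea described above could look roughly like this (a hedged sketch, not node-csv internals; the function name, option handling, and the omission of accented-character filtering are simplifications):

```javascript
// Count candidate characters in a sample, skipping quoted sections and any
// character already reserved by the options (quote, record delimiter) or
// matching [a-zA-Z0-9]. The most frequent survivor wins.
function discoverDelimiter(sample, { quote = '"', record_delimiter = '\n' } = {}) {
  const counts = new Map();
  let inQuotes = false;
  for (const char of sample) {
    if (char === quote) { inQuotes = !inQuotes; continue; }
    if (inQuotes) continue; // ignore delimiters inside quoted fields
    if (char === record_delimiter || /[a-zA-Z0-9]/.test(char)) continue;
    counts.set(char, (counts.get(char) || 0) + 1);
  }
  let best = ',', bestCount = 0;
  for (const [char, count] of counts) {
    if (count > bestCount) { best = char; bestCount = count; }
  }
  return best;
}
```

Note the filtering matters: on `a,b,"c;d"`, the semicolon is never counted because it sits inside quotes, so the comma is correctly chosen.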
@wdavidw I created a small proof of concept for the `auto_delimiter` option:
https://github.com/adaltas/node-csv/compare/master...hadyrashwan:node-csv:patch-1
When running the tests, `packages/csv-parse/test/option.encoding.coffee` fails and I am not sure why; when running it on its own, it works.
Question: we are committing the `dist` files, is this expected?
Missing parts: documenting `auto_delimiter` in the docs.
Appreciate your feedback :)
I'll take some time to review later. In the meantime, what do you mean by "We are committing the dist files"?
A few notes for now:
- `delimiter_auto` and not `auto_delimiter`
- `__discoverDelimiterAuto` and not `__autoDiscoverDelimiter`
- the default value is `false`
- `record_delimiter` (all those rules require tests)
- in `__needMoreData`, if `delimiter_auto` is activated, and only in the first line, you shall allocate a safe buffer size dedicated to discovery
- run discovery (the `__autoDiscoverDelimiter` function) just after BOM handling and before actual parsing (https://github.com/adaltas/node-csv/blob/master/packages/csv-parse/lib/api/index.js#L109)
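To make the placement rules concrete, here is a hypothetical sketch of where discovery would sit in a parse flow. None of this is node-csv's real code: `parse`, `__discoverDelimiterAuto`, the 1024-byte sample size, and the naive row splitting are all illustrative assumptions.

```javascript
// Simplified discovery: pick the most frequent candidate in the first line.
function __discoverDelimiterAuto(line) {
  const candidates = [',', ';', '\t', '|'];
  let best = ',', bestCount = 0;
  for (const c of candidates) {
    const count = line.split(c).length - 1;
    if (count > bestCount) { best = c; bestCount = count; }
  }
  return best;
}

// Hypothetical parse flow: BOM handling first, then discovery on the first
// line only (bounded by a safe sample size), then the actual parsing.
function parse(text, options = {}) {
  if (text.charCodeAt(0) === 0xfeff) text = text.slice(1); // strip BOM
  if (options.delimiter_auto) {
    const firstLine = text.slice(0, 1024).split('\n')[0];
    options.delimiter = __discoverDelimiterAuto(firstLine);
  }
  const delimiter = options.delimiter || ',';
  return text.split('\n').filter(Boolean).map((row) => row.split(delimiter));
}
```

The point of the ordering is that a BOM would otherwise pollute the first sampled character, and running discovery before the parsing loop means the rest of the parser never needs to know the delimiter was inferred.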
When I'm working, I always see the build files in the `dist` folders added to git rather than ignored. Some projects add those build files to their `.gitignore`. I just want to make sure I'm not adding those files by mistake.
A couple of comments on the method of detecting delimiters:
Python has a great example of handling this in its implementation. The pandas library uses this implementation but only reads the first line of the file (here).
Are there any plans to open a PR for that? As far as I can see, the current changes are only present on a branch.
I would definitely love to see that feature.
Here's another algorithm for detecting the delimiter that seems like a good idea: https://stackoverflow.com/a/19070276/2180570
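The idea in that direction can be sketched as a consistency check: a good delimiter splits every sampled line into the same number of fields, and that number is greater than one. This is an illustration in the spirit of the linked answer, with an assumed candidate list:

```javascript
// Consistency-based detection: try each candidate and keep the first one
// that produces the same field count (> 1) on every sampled line.
function detectByConsistency(lines, candidates = [',', ';', '\t', '|']) {
  for (const delimiter of candidates) {
    const counts = lines.map((line) => line.split(delimiter).length);
    const consistent = counts.every((n) => n === counts[0]);
    if (consistent && counts[0] > 1) return delimiter;
  }
  return null; // no candidate produced consistent rows
}
```

Unlike pure frequency counting, this copes with data where the wrong character happens to be frequent on one line but not on the others.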
I haven't had the time yet. The solution needs to deal with the streaming nature of the parser: extract a limited number of bytes from the stream, apply a detection algorithm such as the one proposed above, then replay the bytes stored on the side with the detected delimiter. I need some time to do it correctly.
Also highly interested in that. My current workaround is to clone the original stream, read the first chunk on a duplicate, detect the delimiter, then get back and handle the first stream.
So basically:
async function detectDelimiter(stream) {
  // Duplicate the web ReadableStream: one branch for sniffing, one for parsing
  const [mainStream, delimiterStream] = stream.tee();
  const reader = delimiterStream.getReader();
  // Read just the first chunk as a detection sample
  const { value } = await reader.read();
  const delimiter = discoverDelimiter(value); // your own detection logic
  // Drop the sniffing branch; mainStream remains fully readable from the start
  reader.cancel();
  return { delimiter, stream: mainStream };
}
Looks somewhat messy, but it works. I have to dynamically handle different types of files, avoiding static configuration as much as possible.
Summary
The library should detect common delimiters such as `,` or `;`.
Motivation
Many international users will be working with commas or semicolons on a daily basis, depending on who generated their CSV. Currently we need to manually parse the first line and see which delimiter is used.
Alternative
Draft
Additional context
Since it may be difficult to detect "just any" delimiter, the developer can pass an array of common delimiters which they expect.
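Until such an option exists, the "array of expected delimiters" idea can be approximated outside the parser: detect against the caller's candidate list on the first line, then pass the winner to csv-parse as the regular `delimiter` option. A minimal sketch (the function name and default candidates are assumptions, not library API):

```javascript
// The caller supplies the delimiters they expect; the most frequent one in
// the first line wins. The result can then be handed to csv-parse as the
// ordinary `delimiter` option.
function pickDelimiter(text, expected = [',', ';']) {
  const firstLine = text.split(/\r?\n/, 1)[0];
  let best = expected[0], bestCount = -1;
  for (const candidate of expected) {
    const count = firstLine.split(candidate).length - 1;
    if (count > bestCount) { best = candidate; bestCount = count; }
  }
  return best;
}
```

Usage would then be something like `parse(input, { delimiter: pickDelimiter(input) })`, keeping the library itself unchanged.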