tobiasmuehl closed this issue 2 years ago
Am I just supposed to take your word for it? :-) Could you please show an example? `xsv` should never crash on any input. It might return errors in some cases, and what to do in those cases depends on what you're trying to do. But you didn't give any examples, describe what you're trying to do, or show the output of `xsv` when it crashed.

(If it were possible, I would love to have a `--please-no-bugs` flag that one could pass to any program to prevent it from crashing. :P)
My apologies, I picked the wrong word. `xsv` indeed doesn't crash; it exits with an error, which is correct, as my input CSV file is malformed (too many columns). While looking into this problem I stumbled upon `csvclean` from csvkit, which can be used to fix or remove bad lines from a CSV file. Given that `xsv` already has the `fixlengths` command, perhaps adding a flag to skip lines with errors wouldn't be too much trouble?
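The requested behavior — skipping any row whose field count doesn't match the header — could be sketched outside of `xsv` in a few lines of Python. (The function name and the "must match the header" policy are my own assumptions for illustration; `xsv` itself offers no such flag.)

```python
import csv
import io

def skip_bad_rows(text):
    """Yield the header row, then only the rows whose field count
    matches the header. A hypothetical sketch of a "skip lines with
    errors" flag, not anything xsv actually provides."""
    rows = csv.reader(io.StringIO(text))
    header = next(rows)
    yield header
    for row in rows:
        if len(row) == len(header):
            yield row

# Rows with 2 and 4 fields are silently dropped:
cleaned = list(skip_bad_rows("a,b,c\n1,2,3\n4,5\n6,7,8,9\n"))
print(cleaned)  # [['a', 'b', 'c'], ['1', '2', '3']]
```

Note that "silently dropped" is exactly the policy question the maintainer raises: whether a short row is garbage or merely missing trailing fields is not something the tool can decide on its own.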
OK, so to be clear here: you're running into an error because your CSV data has rows with varying numbers of columns. The `xsv fixlengths` command is what you should be using to resolve that. If you read the `xsv fixlengths --help` docs, you'll notice that there is a bit of work that needs to be done to "fix" the CSV data.

So yes, it was very much intentional that `xsv` has no way to "skip" bad records. In particular, there is no obvious way to know which records are bad and which aren't; there is a fundamental ambiguity here. This isn't necessarily about your CSV data being malformed so much as `xsv` having no obvious way to interpret it. It might be obvious to you, but that's why `xsv fixlengths` exists. Use that to make it obvious to `xsv`.
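To make the ambiguity concrete: given a row with too many fields, there is no single right fix — you could truncate it, pad every other row out to match, or drop it entirely, and only the data's owner knows which is correct. The general approach `xsv fixlengths` describes in its `--help` — pad short rows and truncate long ones to a uniform length — can be sketched in Python roughly like this (a sketch of the idea, not xsv's actual implementation):

```python
import csv
import io

def fix_lengths(text, length=None):
    """Pad short rows with empty fields and truncate long ones so
    every record has the same field count. If `length` is None, use
    the longest row seen. This mirrors the behavior described by
    `xsv fixlengths --help`; it is an illustrative sketch, not a
    port of xsv's code."""
    rows = list(csv.reader(io.StringIO(text)))
    target = length if length is not None else max(len(r) for r in rows)
    return [row[:target] + [""] * (target - len(row)) for row in rows]

malformed = "a,b,c\n1,2\n3,4,5,6\n"
print(fix_lengths(malformed))
# [['a', 'b', 'c', ''], ['1', '2', '', ''], ['3', '4', '5', '6']]
```

Note the consequence: the tool never has to guess which rows are "bad" — every row is normalized by the same rule, and deciding whether that rule produces sensible data is left to the user.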
When a CSV file is formatted incorrectly, `xsv` will crash. It would be very handy if there were a parameter that excludes those lines from processing but continues processing the rest of the file.