Unfortunately, loading csv files with `pyarrow.csv.read_csv()`, as introduced in https://github.com/audeering/audformat/pull/419, is not as tolerant of malformed csv files as `pandas.read_csv()`. So far I have identified three cases in which loading a csv file might fail (two of them are listed in #449):
1. Loading a csv file can fail if it contains more columns than mentioned in the header of the database.
2. Loading can also fail if the csv file is very long and contains a lot of special characters, like `"`, `""`, `,`. I did not add a test for this case, because it turns out that the syntax of the csv file is correct, and loading works when splitting the file into smaller ones.
3. Loading can fail if the csv file contains offsets in its date values, e.g. `+00:00`, which is the case for some of our older datasets.
As `pyarrow.csv.read_csv()` cannot easily be extended to handle those cases, I now use a try-except statement that falls back to loading the file with `pandas.read_csv()`. This is very unfortunate, as it means that when implementing a new feature (e.g. streaming), it needs to be implemented for both code paths. But I don't know a better solution at the moment.
In principle, we could solve 1. and 3. by updating the databases, but then old versions of those databases could no longer be loaded, which is not acceptable. I don't know how we could otherwise solve 2.
Closes #449