teocci closed this issue 3 years ago
oh, yes. I had a similar problem. please save me...
Did you @amissu89 find a solution?
No, this is not a use case for csvutil. Your CSV input needs to have the same number of fields in each row; otherwise mapping gets really complicated/ambiguous. If your rows were identical in size, we could solve your header problem.
csvutil works on top of https://pkg.go.dev/encoding/csv, so you will probably be better off using it directly. You will need to set FieldsPerRecord to -1, otherwise it will give you the same error (wrong number of fields).
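For reference, a minimal sketch of that approach with encoding/csv alone (the file name and error handling are placeholders, not code from this thread):

package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	f, err := os.Open("input.csv") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	r := csv.NewReader(f)
	// Allow a variable number of fields per row. The default (0) makes the
	// reader require every record to match the length of the first one,
	// which is what produces the "wrong number of fields" error.
	r.FieldsPerRecord = -1

	for {
		record, err := r.Read()
		if err == io.EOF {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(len(record), record)
	}
}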
I parsed it line by line using a regex:
const regexRawNormalizer = `(?P<header>^[0-9].*\s{4}(?P<id>[[:xdigit:]]+),mavlink_[a-z_0-9]*),(?P<data>.*),,(?P<suffix>sig\s.*)`
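For completeness, here is a rough sketch of pulling the named groups out of that pattern with the standard regexp package. The sample line is based on the one in the report below, written with 4 spaces before the message ID as the report describes them (the pattern's \s{4} depends on them):

package main

import (
	"fmt"
	"regexp"
)

const regexRawNormalizer = `(?P<header>^[0-9].*\s{4}(?P<id>[[:xdigit:]]+),mavlink_[a-z_0-9]*),(?P<data>.*),,(?P<suffix>sig\s.*)`

func main() {
	re := regexp.MustCompile(regexRawNormalizer)

	// Sample line with 4 spaces before the message ID, as described in the
	// original report.
	line := `2021-07-14T17:49:48.883,FE, 3, 0, 0, 4, 1, 1,    A5,mavlink_hwstatus_t,Vcc,5111,I2Cerr,0,,sig ,Len,11,crc16,54392`

	m := re.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	// Print each named capture group.
	for _, name := range []string{"header", "id", "data", "suffix"} {
		fmt.Printf("%s = %q\n", name, m[re.SubexpIndex(name)])
	}
}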
There is no other way to parse this
I don't think that's true - see https://play.golang.org/p/SzXidR00Q2T. You can easily process exactly what you need with it.
I believe I can close this issue now
I have a file like this one:
The structure of the file has a clear pattern, for example:
2021-07-14T17:49:48.883,FE, 3, 0, 0, 4, 1, 1, A5,mavlink_hwstatus_t,Vcc,5111,I2Cerr,0,,sig ,Len,11,crc16,54392
In this line we can see that we have the date, which we will call GPSTime. Then we have seven hex digits that I don't know what they are or represent, and I don't want to capture them; let's name them Str02 until Str08. Then we have the MessageID; it is a hex digit, but it has 4 or 5 spaces in front. Then we have the MessageName, and finally the MessageData, where this should be parsed: Vcc,5111,I2Cerr,0,,sig ,Len,11,crc16,54392
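To make that layout concrete, here is a minimal sketch of pulling those fields out with encoding/csv and FieldsPerRecord = -1; the struct, field names, and column indexes are my reading of the description above, not code from this issue:

package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// Field layout assumed from the description above:
// column 0 = GPSTime, columns 1-7 = Str02..Str08 (ignored),
// column 8 = MessageID, column 9 = MessageName, the rest = MessageData.
type Record struct {
	GPSTime     string
	MessageID   string
	MessageName string
	MessageData []string
}

func main() {
	const line = `2021-07-14T17:49:48.883,FE, 3, 0, 0, 4, 1, 1, A5,mavlink_hwstatus_t,Vcc,5111,I2Cerr,0,,sig ,Len,11,crc16,54392`

	r := csv.NewReader(strings.NewReader(line))
	r.FieldsPerRecord = -1 // rows may have different lengths

	fields, err := r.Read()
	if err != nil {
		panic(err)
	}

	rec := Record{
		GPSTime:     strings.TrimSpace(fields[0]),
		MessageID:   strings.TrimSpace(fields[8]),
		MessageName: strings.TrimSpace(fields[9]),
		MessageData: fields[10:],
	}
	fmt.Printf("%+v\n", rec)
}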
This is the struct I made, following your example. When I run this code I got this error:
Is there any way to process this kind of CSV file using this library? There are three features that I need to process this file:
- the RawCSVData struct: this means the first 10 fields or columns,
- dec.Unused(), so we can process them as we need (a usage sketch follows the error below),
- omit{7}.

In this case it is a simple struct of 15 fields. But imagine you have a file with 100 columns and you want to omit 90 of them. This means you need to make a struct with 100 fields and use csv:"-" in 90 of them. I think that is insane. It would also be nice to drop data that is not defined in the struct, for example if you have a file with rows like this:

I also got:
2021/10/20 14:09:52 wrong number of fields in record
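For what it's worth, dec.Unused() already exists on csvutil's Decoder, but it only helps once every row parses with a consistent field count. A rough sketch with made-up header and data (not the file from this issue):

package main

import (
	"encoding/csv"
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/jszwec/csvutil"
)

type Row struct {
	GPSTime     string `csv:"GPSTime"`
	MessageName string `csv:"MessageName"`
}

func main() {
	// Made-up input with a header and extra columns not present in Row.
	data := "GPSTime,MessageName,Extra1,Extra2\n" +
		"2021-07-14T17:49:48.883,mavlink_hwstatus_t,foo,bar\n"

	csvReader := csv.NewReader(strings.NewReader(data))
	dec, err := csvutil.NewDecoder(csvReader)
	if err != nil {
		log.Fatal(err)
	}

	for {
		var row Row
		if err := dec.Decode(&row); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Unused returns the header indexes that were not mapped to Row;
		// Record returns the most recently read record.
		for _, i := range dec.Unused() {
			fmt.Printf("unused column %d = %q\n", i, dec.Record()[i])
		}
		fmt.Printf("%+v\n", row)
	}
}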
In the end I created a new file and deleted all the ,,,,,,,,,,,,, but this is not good practice. That normalization might compromise the data.
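If that normalization is done anyway, it does not need a second file. A rough sketch of trimming the extra commas in memory before parsing, assuming they are trailing empty fields (the file name is a placeholder), with the same caveat that it may alter the data:

package main

import (
	"bufio"
	"encoding/csv"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("input.csv") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Strip trailing commas from each line so every row ends at its last
	// real field, then feed the cleaned text to encoding/csv.
	var b strings.Builder
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		b.WriteString(strings.TrimRight(sc.Text(), ","))
		b.WriteByte('\n')
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	r := csv.NewReader(strings.NewReader(b.String()))
	r.FieldsPerRecord = -1 // rows may still differ in length
	records, err := r.ReadAll()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(records), "records")
}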