jangorecki closed this issue 5 years ago
Writing fails too:
Feather.write("./data/G1_1e9_1e2_0_0_jl.fea", x)
ERROR: InexactError: trunc(Int32, 2147483652)
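That value is just past typemax(Int32) = 2147483647, so whatever Feather.write converts to a 32-bit integer here overflows. A quick REPL check, nothing Feather-specific:

julia> typemax(Int32)
2147483647

julia> trunc(Int32, 2147483652)   # the offset in the error is 5 past the Int32 maximum
ERROR: InexactError: trunc(Int32, 2147483652)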
Sorry for the delayed response. Unfortunately, the Feather format itself does not support saving individual files larger than 4GB. I know, this really sucks. As far as I know, the long-term plan was that eventually there would be some accommodation in the metadata for chaining 4GB files together.
My take on the situation is that Feather was developed before the Arrow format was mature, and what we wound up with is a format with rather messy metadata that doesn't really conform to the Arrow standard (which is, I think, why this has never been addressed). I've been toying with the idea of creating a new format whose metadata is fully compatible with the Arrow IPC metadata, but since we haven't implemented the Arrow IPC metadata yet, that would take a lot of work and I just haven't had the time to get into it.
Anyway, for the foreseeable future the only option is to break your data into 4GB chunks. I'm completely open to putting some "hack" into Feather.jl that makes this easier (and, most importantly, computes the 4GB boundaries for the user, which is really the hardest part). I'll work on this myself if I ever really need it; if anyone else is interested in implementing it, we'd welcome a PR.
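For anyone who needs this in the meantime, here is a rough sketch of what such a helper could look like. The write_chunked name and the size heuristic are mine, not anything in Feather.jl: Base.summarysize measures in-memory rather than serialized size, so a 0.5 safety factor is applied, which also keeps chunks under the ~2GB Int32 offset limit seen in the error above.

using DataFrames, Feather

# Write `df` as numbered Feather files, each aiming to stay well under `limit` bytes.
# Rows per chunk are estimated from the in-memory size of the frame, which is only
# a proxy for the on-disk Feather size, hence the 0.5 safety factor.
function write_chunked(prefix::AbstractString, df::DataFrame; limit::Integer = 4 * 1024^3)
    bytes_per_row = Base.summarysize(df) / max(nrow(df), 1)
    rows_per_chunk = max(1, floor(Int, 0.5 * limit / bytes_per_row))
    for (i, lo) in enumerate(1:rows_per_chunk:nrow(df))
        hi = min(lo + rows_per_chunk - 1, nrow(df))
        Feather.write("$(prefix)_$(i).feather", df[lo:hi, :])
    end
end

write_chunked("./data/G1_1e9_1e2_0_0_jl", x)   # produces ..._1.feather, ..._2.feather, ...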
I would suggest giving up on Feather and working out some alternative for serializing and deserializing data frames. R's feather package is not even able to load 380MB files, raising an Error: C stack usage 7971012 is too close to the limit exception. In Python it segfaults on 19GB data. Trying JLD2 now. Closing, as this 4GB hard limit is an issue with the Feather format itself, not with Feather.jl.
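For reference, the JLD2 route looks roughly like this (it stores the DataFrame as a Julia object, so unlike Feather the file is not meant to be read from R or Python):

using DataFrames, JLD2

df = DataFrame(a = 1:3, b = ["x", "y", "z"])
@save "data.jld2" df    # serialize the whole DataFrame to disk
@load "data.jld2" df    # later, or in a fresh session: restore it back into `df`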
I am trying to read a Feather file generated from a CSV of size 40-50GB. There are 1e9 rows and 9 columns. Feather is freshly installed from the default repository.