djsutherland opened this issue 4 years ago
Seems like in this case:
So parquet seems like the way to go. Unfortunately, it doesn't seem to really be appendable (since it's columnar). We could write it in chunks and then do a (probably quick) rewrite at the end, or look into dask for everything (#23).
For now, manually converting hdf5 => parquet post-sorting and letting `featurize` support either.
With the new two-pass scheme with the merge at the end, the state merger is fast, but the PUMA merger is quite slow. Not sure whether this is due to casting categorical dtypes or just I/O.
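If the categorical casting is the culprit: naively concatenating frames whose categorical columns have different category sets makes pandas fall back to object dtype, which is slow and memory-hungry. `union_categoricals` avoids that. A sketch with made-up values:

```python
import pandas as pd
from pandas.api.types import union_categoricals

a = pd.Series(["NY", "NJ"], dtype="category")
b = pd.Series(["NJ", "CT"], dtype="category")

# pd.concat([a, b]) here would densify to object dtype, since the
# two category sets differ. union_categoricals merges the category
# sets and keeps the result categorical:
merged = pd.Series(union_categoricals([a, b]))
```

Whether this is actually where the PUMA merger's time goes would need profiling; it's just the cheapest categorical-merge path pandas offers.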
Could merge in a separate thread as we go? Or again, maybe dask (#23) solves this better.
Could also multiprocess the merging.
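One shape the merge-as-we-go idea could take: hand each finished chunk to a background worker pool so merging overlaps with the main loop's I/O. A sketch with `ThreadPoolExecutor` and a stand-in `merge_chunk` function (both hypothetical, not from this codebase):

```python
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

def merge_chunk(chunk):
    # Stand-in for the real per-PUMA merge work (dtype casting, concat, ...).
    return chunk.assign(val=chunk["val"] * 2)

chunks = [pd.DataFrame({"puma": ["01"], "val": [i]}) for i in range(4)]

with ThreadPoolExecutor(max_workers=2) as pool:
    # Submit each chunk as soon as it's produced; merging runs in the background.
    futures = [pool.submit(merge_chunk, c) for c in chunks]
    merged = pd.concat([f.result() for f in futures], ignore_index=True)
```

Threads only pay off if the merge work releases the GIL (I/O, some pandas internals); swapping in `ProcessPoolExecutor` gives the multiprocessing variant at the cost of pickling the chunks.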
Seems like maybe pandas/pytables append is a lot slower than writing into a new file. (Or else the rewrite-when-strings-get-longer code path is being hit a lot.)
The sort step should probably pre-count lines per PUMA in `stats`, and maybe max string lengths for the columns that need that. Then we can preallocate file sizes and write into them, instead of appending.

Probably should (also?) consider using feather or parquet instead of hdf5.
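The pre-counting pass might look something like this (column names are made up); the per-column max lengths are the kind of thing that could be fed to pytables as `min_itemsize` so appended strings never trigger a rewrite:

```python
import pandas as pd

df = pd.DataFrame({
    "puma": ["01", "01", "02"],
    "name": ["short", "a much longer string", "mid-length"],
})

# Rows per PUMA, to preallocate each output file's size up front:
rows_per_puma = df.groupby("puma").size()

# Max string length per string column, e.g. to pass as min_itemsize
# when appending to an HDF5 table so strings never need widening:
max_lens = {col: int(df[col].str.len().max()) for col in ["name"]}
```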