dredozubov opened 7 years ago
Also, I should note it's a breaking change. It will force some new constraints on the user, but I consider it the lesser evil.
I actually have a use case for this. I am working on a DataFrames library and will probably choose superrecord as a backend for some parts. Having CSVs with > 128 columns is not uncommon.
We will want to support this, but at the moment there are some problems with larger records that cause massive compile-time slowdowns. We have to investigate those first before we can drill down on this.
Okay, no rush. It's not even close to being finished.
I've created an issue to dig into the compilation-time blow-up; in a sense it blocks this PR (but maybe it doesn't?): https://github.com/agrafix/superrecord/issues/12
The exact number of fields is statically known, so there's no reason other than performance to choose `SmallArray#` over `Array#`. The issue with `SmallArray#` is an implicit upper bound on the number of fields (<= 128). Happily, it's possible to use multiple backends for records of different sizes (see the sketch below). Tests pass, but I want to add some new ones specifically for huge records. Benchmarking won't hurt either.