Of course it's pretty silly to even attempt this, but I was trying to compress every column in a table, one of which happened to be very sparse, and got this error:
```
[2022-10-13T16:28:08Z INFO sqlite_zstd::transparent] prediction_messages.eventEndedByType: Total 248696 rows (69.04kB) to potentially compress (split in 5 groups).
[2022-10-13T16:28:08Z DEBUG sqlite_zstd::transparent] looking at group=c.14371185, has 71626 rows with 0B average size (16.42kB total)
[2022-10-13T16:28:08Z DEBUG sqlite_zstd::transparent] Found existing dictionary id=1 for key=c.14371185
thread '<unnamed>' panicked at 'attempt to divide by zero', src/transparent.rs:804:26
```
16.42kB across 71626 rows is an average of about 0.2 bytes per row, which seems to be getting rounded down to 0.
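A minimal sketch of what I think is happening (the variable names are mine, not the ones in `transparent.rs`): with integer arithmetic, dividing the group's total size by its row count truncates 0.2 down to 0, and any later division by that average panics.

```rust
fn main() {
    // Figures from the log above: 16.42kB ≈ 16420 bytes over 71626 rows.
    let total_bytes: u64 = 16420;
    let rows: u64 = 71626;

    // Integer division truncates 0.229... down to 0.
    let avg_size = total_bytes / rows;
    assert_eq!(avg_size, 0);

    // Dividing by this average then panics ('attempt to divide by zero').
    // A hypothetical guard, e.g. clamping to a minimum of 1, would avoid it:
    let safe_avg = avg_size.max(1);
    let rows_per_chunk = total_bytes / safe_avg;
    println!("rows_per_chunk = {rows_per_chunk}");
}
```

Alternatively, `checked_div` would turn the panic into an explicit `None` that the grouping code could handle.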