Felixoid opened 1 month ago
DoubleDelta stores the second-order difference (the delta of deltas) of the values. E.g., when we have a column with values 60, 120, 180, 240, ..., the values stored in the file will be 60, 60, 0, 0, ..., and then compressed. It's the best codec for monotonically increasing values with a near-constant step, such as timestamps.
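For illustration, declaring the codec on a column looks roughly like this (a minimal sketch; the table and column names are illustrative, not the exact schema from this PR, and since DoubleDelta is a specialized codec it is usually chained with a general-purpose one such as LZ4):

```sql
-- Illustrative schema: DoubleDelta on the monotonic time columns.
-- Names are examples only, not the schema changed by this PR.
CREATE TABLE graphite_data_example
(
    Path      String,
    Value     Float64,
    Time      UInt32 CODEC(DoubleDelta, LZ4),
    Date      Date   CODEC(DoubleDelta, LZ4),
    Timestamp UInt32 CODEC(DoubleDelta, LZ4)
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(Date)
ORDER BY (Path, Time);

-- An existing column can be switched in place; already-written parts
-- are only recompressed when they are merged or rewritten.
ALTER TABLE graphite_data_example
    MODIFY COLUMN Time UInt32 CODEC(DoubleDelta, LZ4);
```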
I still haven't tested the performance of these changes. I'm a bit stuck on https://github.com/go-graphite/graphite-clickhouse-tldr/pull/6, which I need in order to have the tags and all the other stuff in the DB.
Thanks for the heads-up. I was going to apply this to PROD, but I'll wait. It is doing fine on a small DEV server:
┌─table─────────────────┬─column────┬─type────┬──────rows─┬─disk───────┬─avg_size─┬─compressed─┬─uncompressed─┬─────compress_ratio─┐
1. │ default.data_backup │ Path │ String │ 133438903 │ 56.81 MiB │ 0.45 B │ 56.78 MiB │ 12.43 GiB │ 224.1160238907609 │
2. │ default.data_backup │ Value │ Float64 │ 133438903 │ 86.55 MiB │ 0.68 B │ 86.51 MiB │ 1018.06 MiB │ 11.767609582355778 │
3. │ default.data_backup │ Time │ UInt32 │ 133438903 │ 231.96 MiB │ 1.82 B │ 231.92 MiB │ 509.03 MiB │ 2.194811239521079 │
4. │ default.data_backup │ Date │ Date │ 133438903 │ 2.32 MiB │ 0.02 B │ 2.29 MiB │ 254.51 MiB │ 111.14184778172574 │
5. │ default.data_backup │ Timestamp │ UInt32 │ 133438903 │ 24.22 KiB │ 0.00 B │ 2.90 KiB │ 145.41 KiB │ 50.21787521079258 │
6. │ default.graphite_data │ Path │ String │ 148363663 │ 7.51 MiB │ 0.05 B │ 7.47 MiB │ 13.70 GiB │ 1877.6107315995705 │
7. │ default.graphite_data │ Value │ Float64 │ 148363663 │ 61.22 MiB │ 0.43 B │ 61.19 MiB │ 1.11 GiB │ 18.498867365596805 │
8. │ default.graphite_data │ Time │ UInt32 │ 148363663 │ 15.66 MiB │ 0.11 B │ 15.63 MiB │ 565.95 MiB │ 36.213080019697834 │
9. │ default.graphite_data │ Date │ Date │ 148363663 │ 752.65 KiB │ 0.01 B │ 725.17 KiB │ 282.98 MiB │ 399.58597313402686 │
10. │ default.graphite_data │ Timestamp │ UInt32 │ 129609900 │ 20.64 KiB │ 0.00 B │ 2.04 KiB │ 141.25 KiB │ 69.27155172413794 │
└───────────────────────┴───────────┴─────────┴───────────┴────────────┴──────────┴────────────┴──────────────┴────────────────────┘
I also need to test all the tables for performance before merging this.
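For reference, a per-column report like the one above can be produced with a query along these lines (a sketch against system.parts_columns; the exact query used for the output above isn't shown in this thread):

```sql
-- Sketch: per-column on-disk size and compression ratio over active parts.
SELECT
    concat(database, '.', table) AS table_name,
    column,
    type,
    sum(rows) AS total_rows,
    formatReadableSize(sum(column_bytes_on_disk)) AS disk,
    formatReadableSize(sum(column_data_compressed_bytes) / sum(rows)) AS avg_size,
    formatReadableSize(sum(column_data_compressed_bytes)) AS compressed,
    formatReadableSize(sum(column_data_uncompressed_bytes)) AS uncompressed,
    sum(column_data_uncompressed_bytes) / sum(column_data_compressed_bytes) AS compress_ratio
FROM system.parts_columns
WHERE active AND database = 'default'
GROUP BY database, table, column, type
ORDER BY table_name, column;
```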