has improved a lot by avoiding allocating new byte slices while reading and writing a Batch, by reusing buffers from a Pool instead (#235); a minimal sketch of the idea is shown below.
DONE
Index Usage (using IRadix right now; maybe some other data structures?)
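Here is a minimal sketch of the buffer-reuse idea behind #235, assuming "Pool" means Go's `sync.Pool`; the `encodeRecord` helper and the 4 KB initial capacity are made up for illustration and are not lotusdb's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool reuses byte slices across Batch reads/writes instead of
// allocating a new slice with make([]byte, ...) on every call.
var bufPool = sync.Pool{
	New: func() interface{} {
		buf := make([]byte, 0, 4096) // initial capacity is a guess, not lotusdb's value
		return &buf
	},
}

// encodeRecord is a hypothetical write path: it borrows a buffer from
// the pool, encodes into it, and returns the buffer when done.
func encodeRecord(key, value []byte) {
	bufPtr := bufPool.Get().(*[]byte)
	buf := (*bufPtr)[:0] // reset length, keep capacity

	buf = append(buf, key...)
	buf = append(buf, value...)
	// ... write buf to the WAL / value log here ...
	fmt.Printf("encoded %d bytes without a fresh allocation\n", len(buf))

	*bufPtr = buf
	bufPool.Put(bufPtr)
}

func main() {
	encodeRecord([]byte("k1"), []byte("v1"))
	encodeRecord([]byte("k2"), []byte("v2"))
}
```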
In conclusion, I have checked and optimized all of this memory-costly code except the index part.
In my tests the IRadix index takes too much memory, so we could switch to a more memory-efficient data structure such as a BTree or an Adaptive Radix Tree; we can do that if users give us feedback asking for it.
I am also exploring some on-disk indexes, such as a hash table; if we achieve this, lotusdb will have more index options to choose from.
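One possible direction for the index work is to hide the index behind a small interface so that IRadix could later be swapped for a BTree, an Adaptive Radix Tree, or an on-disk hash table. The `Index` interface, `KeyPosition` struct, and map-backed stand-in below are hypothetical names for illustration, not lotusdb's real types:

```go
package main

import "fmt"

// Index is a hypothetical abstraction over the key index, so the IRadix
// implementation could later be replaced by a BTree, an Adaptive Radix
// Tree, or an on-disk hash table without touching callers.
type Index interface {
	Put(key []byte, pos *KeyPosition) *KeyPosition // returns the old value, if any
	Get(key []byte) *KeyPosition
	Delete(key []byte) bool
}

// KeyPosition is a placeholder for whatever the index stores per key,
// e.g. the location of the value in the value log.
type KeyPosition struct {
	SegmentID uint32
	Offset    int64
}

// mapIndex is a trivial stand-in implementation backed by a Go map,
// just to show the interface in use.
type mapIndex struct {
	m map[string]*KeyPosition
}

func newMapIndex() *mapIndex { return &mapIndex{m: make(map[string]*KeyPosition)} }

func (i *mapIndex) Put(key []byte, pos *KeyPosition) *KeyPosition {
	old := i.m[string(key)]
	i.m[string(key)] = pos
	return old
}

func (i *mapIndex) Get(key []byte) *KeyPosition { return i.m[string(key)] }

func (i *mapIndex) Delete(key []byte) bool {
	_, ok := i.m[string(key)]
	delete(i.m, string(key))
	return ok
}

func main() {
	var idx Index = newMapIndex()
	idx.Put([]byte("user:1"), &KeyPosition{SegmentID: 3, Offset: 128})
	fmt.Printf("%+v\n", idx.Get([]byte("user:1")))
}
```

With an abstraction like this, the BTree / ART / on-disk candidates could be benchmarked for memory usage against IRadix without changing the rest of the code.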