As part of the ongoing effort to improve the performance of our persistence layer, the underlying storage engine for chunk data is being pulled out into a separate component, outlined in the sharky-pkg branch in the bee repo.
sharky handles chunk persistence in shards, allowing IO to be load-balanced across the different shards. This yields far better performance on insertions and reads from the DB, and writes to leveldb for localstore state transitions become far less expensive, since the index entries no longer carry the chunk data itself, only a small location record.
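The sharding idea in a minimal sketch: a fixed number of shard files, with each chunk write dispatched to one of them so IO is spread out. The package name, types and round-robin dispatch below are illustrative assumptions, not sharky's actual API, which manages per-shard slots and a free slot list (see the DRP step further down).

```go
package sketch

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// shardedStore spreads chunk writes across several files so that IO load is
// balanced between them. Illustrative only; not safe for concurrent use.
type shardedStore struct {
	shards []*os.File
	next   int // simple round-robin counter
}

func newShardedStore(dir string, shardCount int) (*shardedStore, error) {
	s := &shardedStore{}
	for i := 0; i < shardCount; i++ {
		f, err := os.OpenFile(filepath.Join(dir, fmt.Sprintf("shard_%03d", i)), os.O_CREATE|os.O_RDWR, 0o644)
		if err != nil {
			return nil, err
		}
		s.shards = append(s.shards, f)
	}
	return s, nil
}

// put appends the chunk data to the next shard in round-robin order and
// returns the shard index and offset needed to read the data back later.
func (s *shardedStore) put(data []byte) (shard int, offset int64, err error) {
	shard = s.next % len(s.shards)
	s.next++
	f := s.shards[shard]
	if offset, err = f.Seek(0, io.SeekEnd); err != nil {
		return 0, 0, err
	}
	_, err = f.Write(data)
	return shard, offset, err
}
```

Reads then go straight to the recorded shard and offset, which is why the retrieval index only needs to remember a small location record rather than the data itself; that is the subject of the first integration step below.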
While looking at the integration path, we've identified three key steps to fully integrate sharky into bee:
- wire in sharky (tell localstore to write and read chunk data from the new persistence layer); see the first sketch after this list
  - instead of writing the chunk data into the retrieval data index, we write the sharky.Location
  - add a CLI flag to define the upper bound of chunks stored in a shard
- DRP, disaster recovery protection (what happens when you pull the plug?); see the second sketch after this list
  - rebuild the free slot list (missing in sharky)
- migration (move the existing data into sharky); see the third sketch after this list
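A minimal sketch of the index change in the first step: assuming a sharky.Location that identifies a shard, a slot and a data length (the exact layout is defined in the sharky-pkg branch, so the fields and encoding below are assumptions), the retrieval data index value becomes a few bytes of location instead of the full chunk data.

```go
package sketch

import "encoding/binary"

// location identifies where a chunk's data lives in the sharded store. The
// field layout and encoding here are assumptions for illustration; the
// authoritative definition is sharky.Location in the sharky-pkg branch.
type location struct {
	Shard  uint8  // which shard file holds the chunk
	Slot   uint32 // slot index within that shard
	Length uint16 // length of the chunk data in the slot
}

const locationSize = 1 + 4 + 2

// MarshalBinary encodes the location so it can be stored as the value of a
// retrieval data index entry in place of the chunk data itself.
func (l location) MarshalBinary() ([]byte, error) {
	b := make([]byte, locationSize)
	b[0] = l.Shard
	binary.LittleEndian.PutUint32(b[1:5], l.Slot)
	binary.LittleEndian.PutUint16(b[5:7], l.Length)
	return b, nil
}

// UnmarshalBinary is used on the read path: the index entry yields a
// location, and the chunk data is then read from the corresponding shard.
func (l *location) UnmarshalBinary(b []byte) error {
	l.Shard = b[0]
	l.Slot = binary.LittleEndian.Uint32(b[1:5])
	l.Length = binary.LittleEndian.Uint16(b[5:7])
	return nil
}
```

The per-shard chunk limit from the second sub-item would be surfaced through a new bee CLI flag; its exact name and default are still to be settled.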
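For the DRP step, a sketch of one way the free slot list could be rebuilt after an unclean shutdown, reusing the location type from the sketch above: walk every location still referenced by the retrieval data index and treat every other slot as free. The function and its parameters are assumptions for illustration, not code from the branch.

```go
package sketch

// rebuildFreeSlots reconstructs the per-shard free slot lists by walking
// every location still referenced from the retrieval data index; any slot
// no live entry points to is free again.
func rebuildFreeSlots(referenced []location, shardCount int, slotsPerShard uint32) [][]uint32 {
	used := make([]map[uint32]struct{}, shardCount)
	for i := range used {
		used[i] = make(map[uint32]struct{})
	}
	for _, loc := range referenced {
		used[loc.Shard][loc.Slot] = struct{}{}
	}
	free := make([][]uint32, shardCount)
	for shard := 0; shard < shardCount; shard++ {
		for slot := uint32(0); slot < slotsPerShard; slot++ {
			if _, ok := used[shard][slot]; !ok {
				free[shard] = append(free[shard], slot)
			}
		}
	}
	return free
}
```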
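For the migration step, a sketch of the data move, again reusing the types above: iterate the existing retrieval data index entries that still hold raw chunk data, write each chunk into sharky and rewrite the entry with the returned location. The interfaces are placeholders, not the API in the branch; batching, resumability and progress reporting are left out.

```go
package sketch

import "context"

// chunkWriter abstracts the sharky write path for this migration sketch.
type chunkWriter interface {
	Write(ctx context.Context, data []byte) (location, error)
}

// indexEntry pairs a retrieval data index key with its current value,
// which before migration is the raw chunk data.
type indexEntry struct {
	key  []byte
	data []byte
}

// migrate writes each chunk's data into the sharded store and rewrites the
// index entry to hold the returned location instead of the data.
func migrate(ctx context.Context, entries []indexEntry, w chunkWriter, update func(key, value []byte) error) error {
	for _, e := range entries {
		loc, err := w.Write(ctx, e.data)
		if err != nil {
			return err
		}
		v, err := loc.MarshalBinary()
		if err != nil {
			return err
		}
		if err := update(e.key, v); err != nil {
			return err
		}
	}
	return nil
}
```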