Closed · flying-pan closed this issue 1 year ago
Hi, @flying-pan
Thanks for your interest in Fleetbench! Fleetbench is still under active development; its data set is representative of Google's workload, and we will update it regularly. Unfortunately, HyperProtoBench is no longer updated or maintained (link), and its data set is no longer representative. Please use Fleetbench as the single source of truth for protobuf. Also, we currently have no plans to micro-benchmark individual functions.
I have been using HyperProtoBench and its data set (as described in the MICRO'21 paper) for my work. Looking at Fleetbench from the protobuf angle, it's very nice to see the benchmark methodology, in particular shuffling the messages in the working set to defeat the CPU prefetcher, which is the main delta (see my sketch below). One big difference between HyperProtoBench and Fleetbench is the data set: the maximum string/byte size is 1KB. Is the HyperProtoBench data set "still" representative of data-center messages, or is Fleetbench still being developed?

Benchmarking is hard, and producing repeatable results is even harder. In my tests I have seen 25%+ run-to-run variation (P-states off, C-states off, etc.).

In addition to Lifecycle.Run(), is there a plan to provide a proper way to micro-benchmark individual functions — Create, Serialize, Deserialize, Reflect, etc. (roughly as in the second sketch below)?
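For context, here is a minimal sketch of how I understand the shuffling idea; `Message`, the message pool, and `ShuffledView` are my own placeholders, not Fleetbench's actual code:

```cpp
// Sketch: visit a fixed working set of messages in a randomized order so the
// hardware prefetcher cannot learn a sequential/stride access pattern.
#include <algorithm>
#include <memory>
#include <random>
#include <vector>

template <typename Message>
std::vector<Message*> ShuffledView(std::vector<std::unique_ptr<Message>>& pool,
                                   std::mt19937& rng) {
  std::vector<Message*> view;
  view.reserve(pool.size());
  for (auto& m : pool) view.push_back(m.get());
  // Shuffling the visit order (not the messages themselves) keeps the data
  // set fixed while defeating stride-based prefetching.
  std::shuffle(view.begin(), view.end(), rng);
  return view;
}
```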
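And to make the last question concrete, this is roughly the kind of per-function micro-benchmark I have in mind, sketched with Google Benchmark. I use the well-known `google::protobuf::Struct` type here only so the sketch compiles without a custom `.proto`; a real benchmark would use Fleetbench's representative message types:

```cpp
#include <string>
#include "benchmark/benchmark.h"
#include "google/protobuf/struct.pb.h"

// Build a small stand-in message; Struct is used purely for compilability.
static google::protobuf::Struct BuildMessage() {
  google::protobuf::Struct s;
  (*s.mutable_fields())["name"].set_string_value(std::string(1024, 'x'));
  (*s.mutable_fields())["id"].set_number_value(42);
  return s;
}

// Serialize in isolation, reusing the output buffer across iterations.
static void BM_Serialize(benchmark::State& state) {
  const auto msg = BuildMessage();
  std::string out;
  for (auto _ : state) {
    msg.SerializeToString(&out);
    benchmark::DoNotOptimize(out);
  }
}
BENCHMARK(BM_Serialize);

// Deserialize in isolation from a pre-serialized wire buffer.
static void BM_Deserialize(benchmark::State& state) {
  const std::string wire = BuildMessage().SerializeAsString();
  google::protobuf::Struct msg;
  for (auto _ : state) {
    msg.ParseFromString(wire);
    benchmark::DoNotOptimize(msg);
  }
}
BENCHMARK(BM_Deserialize);

BENCHMARK_MAIN();
```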