Closed jberryman closed 2 months ago
The `x?` is the scale factor applied to the raw score on the graph. So, for 100kb (`x100`), the number you see on the graph was multiplied by 100, and the original score was 100 times smaller.
This scaling is done here: https://github.com/fabienrenaud/java-json-benchmark/blob/master/output/toCsv.py#L68 and the raw benchmark data is here: https://github.com/fabienrenaud/java-json-benchmark/blob/master/archive/raw-results-2024-01-30.md
The reasoning behind these factors is that, all things being equal, a 10kb payload should take 10x as long to process as a 1kb payload. So we can expect the 10kb benchmark scores to be 10x smaller than the 1kb scores (and they often are in that ballpark). To make the 10kb and 1kb scores comparable/readable on the same scale, I rescale the 10kb scores by multiplying them by 10. I could have gone the other way and divided/aligned scores with the 1MB payload scores, but I chose to use the higher-value 1kb scores as the reference.
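For illustration, the rescaling could be sketched like this (the real logic lives in `output/toCsv.py`, linked above; the names and factor table here are assumptions for the sketch, not the script's actual code):

```python
# Sketch of the rescaling described above. 1kb is the reference (x1);
# a payload n times larger gets its raw score multiplied by n so all
# payload sizes land on the same scale on the graph.
SCALE_FACTORS = {"1kb": 1, "10kb": 10, "100kb": 100, "1mb": 1000}

def rescale(raw_score: float, payload: str) -> float:
    """Multiply a raw benchmark score by its payload's scale factor."""
    return raw_score * SCALE_FACTORS[payload]

# A 10kb raw score of 5000 ops/s is plotted as 50000, directly
# comparable to the 1kb scores.
print(rescale(5000, "10kb"))  # -> 50000
```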
ah makes sense, thank you
You have:
but presumably this should be "1kb x1000, 10kb x100..." etc?
Also if that's correct, am I to understand that e.g. `fastjson` is deserializing at >1TB/s? I don't think that's a reasonable number, so something (maybe my understanding) is off.