fabienrenaud / java-json-benchmark

Performance testing of serialization and deserialization of Java JSON libraries
MIT License

QuickBuffers 1.4 #93

Closed: ennerf closed this issue 9 months ago

ennerf commented 10 months ago

I maintain a third-party Java Protobuf library (QuickBuffers) that also happens to be pretty decent at serializing JSON. It's not exactly "DATABIND" since it uses POJOs generated from a proto schema, but it's comparable to libraries with compile-time annotation processing. Running .\run ser --apis databind --libs "quickbuf_json,fastjson" on my local machine, I get:

Benchmark                     Mode  Cnt        Score        Error  Units
Serialization.fastjson       thrpt   20  7875213,787 ± 759316,235  ops/s
Serialization.quickbuf_json  thrpt   20  9516026,221 ±  51529,144  ops/s
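
For context on how these numbers are produced: the run script drives JMH throughput benchmarks, so each row is the measured ops/s of one benchmark method, with Score being the mean over the 20 measurement iterations and Error the confidence interval JMH reports around it. A minimal sketch of that shape, using a hand-written stand-in model and a hand-rolled writer rather than the repository's actual benchmark classes or QuickBuffers' real API:

import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.SECONDS)
public class SerializationSketch {

    // Stand-in for a POJO generated from the proto schema (illustrative only).
    static final class User {
        long id;
        String name;
    }

    private User user;

    @Setup
    public void setup() {
        user = new User();
        user.id = 42L;
        user.name = "example";
    }

    @Benchmark
    public byte[] serializeToJson() {
        // A real benchmark would call the library's JSON writer here; returning
        // the encoded bytes lets JMH consume the result so the work is not
        // dead-code eliminated.
        String json = "{\"id\":" + user.id + ",\"name\":\"" + user.name + "\"}";
        return json.getBytes(StandardCharsets.UTF_8);
    }
}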

I haven't had much of a use case for fast JSON reading, so deserialization is comparatively slow and not zero-allocation:

Benchmark                       Mode  Cnt        Score       Error  Units
Deserialization.fastjson       thrpt   20  5815839,275 ± 15305,744  ops/s
Deserialization.quickbuf_json  thrpt   20  2245685,532 ± 50916,730  ops/s

Client types without an equivalent in the protobuf schema were mapped to string, so the serialization doesn't include the overhead of, e.g., converting BigDecimal and LocalDate to string. That shouldn't make a significant difference overall, but I can take the Client benchmark out if you think that's not fair.
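
To make that concrete, here is a rough sketch of the mapping being described, with hypothetical field names rather than the actual schema or benchmark model: the textual form is produced once while the test data is built, so the measured serialization path only copies strings.

import java.math.BigDecimal;
import java.time.LocalDate;

public class ClientMappingSketch {

    // Hypothetical stand-in for a proto-generated message in which types with no
    // protobuf equivalent are declared as plain string fields.
    static final class ClientProto {
        String balance;
        String registered;
    }

    static ClientProto toProto(BigDecimal balance, LocalDate registered) {
        ClientProto proto = new ClientProto();
        // Conversion happens here, while building the benchmark payload,
        // so it is not part of the measured serialization loop.
        proto.balance = balance.toPlainString();
        proto.registered = registered.toString(); // ISO-8601, e.g. 2024-01-31
        return proto;
    }
}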

fabienrenaud commented 10 months ago

Hi ennerf, thank you for your contribution. As long as your project serializes and deserializes JSON, it's welcome. Protobuf schemas as pseudo-databound POJOs sound fine to me.

It sounds like your library doesn't support handling of native Java types like BigDecimal, LocalDate, etc., which is exactly what the Client benchmark is built to test, so please remove your lib from that specific benchmark. Few libs are integrated with it anyway, since most don't support these native types. User benchmark is 👍

I see you are still making some changes, so please let me know when everything is ready. When you're done, please also post an update on your serialization/deserialization performance.

ennerf commented 10 months ago

Sounds good. I disabled the Clients benchmark and am not planning on any more changes.

I could also add protobuf binary serialization if you feel like that'd be interesting for comparison 🤷‍♂️

I get a lot of variance in the fastjson serialization runs, so my results are mixed, with the difference falling somewhere in the 20-40% range. It's better to be conservative, so here are numbers where QuickBuffers is ~20% faster on serialization and ~40% slower on deserialization. Your test environment is probably more stable.

Benchmark                       Mode  Cnt        Score        Error  Units
Serialization.fastjson         thrpt   20  7426711,352 ± 935350,207  ops/s
Serialization.quickbuf_json    thrpt   20  9097783,910 ± 153276,464  ops/s

Benchmark                       Mode  Cnt        Score       Error  Units
Deserialization.fastjson       thrpt   20  5466935,136 ± 73860,953  ops/s
Deserialization.quickbuf_json  thrpt   20  2253460,321 ± 74935,452  ops/s
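
On the variance point above: run-to-run noise like this is usually tightened by giving JMH more forks and measurement iterations. A generic sketch of how that can be configured programmatically, not tied to this repository's run script (the include pattern and iteration counts are arbitrary):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class StableRunSketch {
    public static void main(String[] args) throws RunnerException {
        Options options = new OptionsBuilder()
                .include("Serialization.*")   // arbitrary pattern, not the repo's default
                .forks(5)                     // more forks average out per-JVM noise
                .warmupIterations(10)         // longer warmup before measuring
                .measurementIterations(20)    // more samples shrink the error estimate
                .build();
        new Runner(options).run();
    }
}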