Closed by marten-seemann 1 year ago
I added a benchmark test in a branch here: https://github.com/libp2p/go-libp2p/compare/benchmark-identify-protobuf?expand=1. Note that this benchmarks a proto2 message.
Not sure if we want to merge this into master, since it's actually just a benchmark of the protobuf library.
Using GoGo protobuf:

```
goos: darwin
goarch: arm64
pkg: github.com/libp2p/go-libp2p/p2p/protocol/identify
BenchmarkIdentifyProtobufMarshal-10      2770962    404.3 ns/op     896 B/op     1 allocs/op
BenchmarkIdentifyProtobufUnmarshal-10     741806   1473 ns/op      2328 B/op    45 allocs/op
```
Using Google protobuf:

```
goos: darwin
goarch: arm64
pkg: github.com/libp2p/go-libp2p/p2p/protocol/identify
BenchmarkIdentifyProtobufMarshal-10      2404834    496.0 ns/op     896 B/op     1 allocs/op
BenchmarkIdentifyProtobufUnmarshal-10     643617   1675 ns/op      2312 B/op    45 allocs/op
```
This confirms that Google protobuf is slower than GoGo (about 20%). I would have expected the difference to be much bigger. I get similar results for proto3.
Given that GoGo protobuf is deprecated and the slowdown is small relative to network overhead, I think it makes sense to use Google protobuf.
If users need that 20%, they can always use GoGo for their own protocols; we don't send that many protobufs in core. Thoughts?
We're currently using both `gogofast` and `gogofaster`. Both of them are deprecated: https://github.com/gogo/protobuf#deprecated-protocol-buffers-for-go-with-gadgets

We should settle on a single compiler, to be consistent. We should also run `go generate` / `make` on CI, to make sure that our generated protobuf code is consistent with the proto definitions.
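The CI check could be a step along these lines — a sketch, assuming `protoc` and the chosen plugin (e.g. `protoc-gen-go`) are already installed in the CI image, and that the repo's `//go:generate` directives invoke them:

```shell
#!/bin/sh
# Sketch of a CI step: regenerate protobuf code and fail if the
# checked-in .pb.go files are stale relative to the .proto definitions.
set -e
go generate ./...
# git diff --exit-code exits non-zero if regeneration changed anything
git diff --exit-code -- '*.pb.go'
```

The same pattern works with `make` instead of `go generate` if the Makefile drives code generation.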