Open liamsi opened 5 years ago
Two things are unfair in your amino case.
First, you need to reset or stop the timer before generating the test data, so setup time isn't measured:
```go
func BenchmarkAminoMarshal(b *testing.B) {
	b.StopTimer() // exclude test-data generation and codec setup from the timing
	data := generateAmino()
	s := AminoSerializer{amino.NewCodec()}
	b.ReportAllocs()
	b.StartTimer()
	for i := 0; i < b.N; i++ {
		s.MustMarshalBinaryBare(data[rand.Intn(len(data))])
	}
}
```
Second, the type of BirthDay is time.Time in the amino case, while in the other cases it is int64.
good point @rickyyangz. Did you re-benchmark with your suggested changes? I would assume the performance would still be much slower than generated protobuf.
While looking into adding amino to this list: https://github.com/alecthomas/go_serialization_benchmarks (see this branch: https://github.com/Liamsi/go_serialization_benchmarks/tree/add_amino), it seems that amino is quite slow compared to other similar libraries.
Hopefully, we should be able to vastly improve this performance without completely reworking the structure.
Here is what the profiler tells us about memory/CPU while running the above benchmarks for unmarshaling/marshaling:
Zaki suggested adding a hint to the README about the performance issues and the cases in which users might want to refrain from using amino.