Hi,

Thanks for the benchmark! I noticed that the benchmark uses `timeit`, and each iteration repeatedly creates a new gRPC channel:

https://github.com/llucax/python-grpc-benchmark/blob/94a175fbcbb7d144506b5f5099c8b87e9f21c658/benchmark#L21
https://github.com/llucax/python-grpc-benchmark/blob/94a175fbcbb7d144506b5f5099c8b87e9f21c658/grpcio/client.py#L9-L12
However, a more realistic scenario would involve creating a single channel and reusing it for multiple RPC calls. I've forked the repo and re-ran the benchmarks for multiple payload sizes:

- unary-unary: `grpc.aio` achieved 1.5-2x throughput vs. `grpclib`
- unary-streaming: `grpc.aio` achieved 1.3x-0.9x throughput vs. `grpclib`

It seems like `grpc.aio` (blue) actually shows significantly better performance compared to `grpclib` (orange) for single calls, but slightly poorer performance for large streaming requests.

Test conditions: i5-13600K, 32GB RAM, Python 3.11, 1000 loops
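For anyone curious why channel reuse matters so much, the effect can be illustrated without gRPC at all. The sketch below is not the benchmark's code: it uses plain asyncio streams against a local echo server (the server, port, payload, and loop count are all assumptions for illustration) to compare opening a new connection per call against reusing one connection, which is analogous to creating a new channel per RPC vs. sharing one channel:

```python
# Illustrative sketch (not the benchmark's code): compare per-call
# connection setup vs. one reused connection, using asyncio streams.
import asyncio
import time

async def handle(reader, writer):
    # Trivial echo server used as a stand-in for an RPC server.
    while data := await reader.read(1024):
        writer.write(data)
        await writer.drain()
    writer.close()

async def call(reader, writer, payload):
    # One "RPC": send the payload and read the echoed response.
    writer.write(payload)
    await writer.drain()
    return await reader.readexactly(len(payload))

async def bench(loops=1000, payload=b"x" * 64):
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    # Variant 1: a new connection per call (like a new channel per RPC).
    t0 = time.perf_counter()
    for _ in range(loops):
        r, w = await asyncio.open_connection("127.0.0.1", port)
        await call(r, w, payload)
        w.close()
        await w.wait_closed()
    per_call = time.perf_counter() - t0

    # Variant 2: one connection reused for all calls (one shared channel).
    t0 = time.perf_counter()
    r, w = await asyncio.open_connection("127.0.0.1", port)
    for _ in range(loops):
        await call(r, w, payload)
    w.close()
    await w.wait_closed()
    reused = time.perf_counter() - t0

    server.close()
    await server.wait_closed()
    return per_call, reused

per_call, reused = asyncio.run(bench())
print(f"new connection per call: {per_call:.3f}s, reused connection: {reused:.3f}s")
```

The gap is even larger for real gRPC channels, since channel setup also involves the HTTP/2 handshake and connection-level state on top of the TCP connect shown here.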