cathaypacific8747 opened 4 months ago
Very interesting, thanks a lot for sharing!
The official documentation already mentions, in the performance best practices section, that in Python unary RPCs perform better than streaming RPCs because streaming creates extra threads: https://grpc.io/docs/guides/performance/#python
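For context, here is a minimal `grpc.aio` sketch contrasting the two call shapes that note is about. The service and message names follow the official gRPC "route guide" example, not this benchmark's proto; it is only meant to show what "unary" vs. "streaming" looks like on the client side:

```python
import asyncio

import grpc
# Illustrative names from the official gRPC "route guide" example,
# not from this benchmark's proto.
import route_guide_pb2
import route_guide_pb2_grpc


async def main() -> None:
    async with grpc.aio.insecure_channel("localhost:50051") as channel:
        stub = route_guide_pb2_grpc.RouteGuideStub(channel)

        # Unary RPC: one request in, one response out.
        await stub.GetFeature(
            route_guide_pb2.Point(latitude=409146138, longitude=-746188906)
        )

        # Server-streaming RPC: one request in, many responses iterated
        # asynchronously; this is the call shape the linked docs describe
        # as slower in Python.
        rect = route_guide_pb2.Rectangle(
            lo=route_guide_pb2.Point(latitude=400000000, longitude=-750000000),
            hi=route_guide_pb2.Point(latitude=420000000, longitude=-730000000),
        )
        async for _feature in stub.ListFeatures(rect):
            pass


asyncio.run(main())
```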
Hi,
Thanks for the benchmark! I noticed that the benchmark uses `timeit`, and each iteration repeatedly creates a new gRPC channel:
https://github.com/llucax/python-grpc-benchmark/blob/94a175fbcbb7d144506b5f5099c8b87e9f21c658/benchmark#L21
https://github.com/llucax/python-grpc-benchmark/blob/94a175fbcbb7d144506b5f5099c8b87e9f21c658/grpcio/client.py#L9-L12
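To make the difference concrete, here is a rough sketch of the two connection-handling patterns. The `Greeter`/`SayHello` names come from the standard gRPC helloworld example, not from this repo's proto:

```python
import grpc

# Illustrative names from the standard gRPC helloworld example, not this repo.
from helloworld_pb2 import HelloRequest
from helloworld_pb2_grpc import GreeterStub

TARGET = "localhost:50051"


async def per_call_channel(n: int) -> None:
    # Pattern the benchmark currently measures: every call pays for setting up
    # and tearing down a fresh channel (and its underlying HTTP/2 connection).
    for _ in range(n):
        async with grpc.aio.insecure_channel(TARGET) as channel:
            await GreeterStub(channel).SayHello(HelloRequest(name="ping"))


async def reused_channel(n: int) -> None:
    # More realistic pattern: one long-lived channel shared by all calls.
    async with grpc.aio.insecure_channel(TARGET) as channel:
        stub = GreeterStub(channel)
        for _ in range(n):
            await stub.SayHello(HelloRequest(name="ping"))
```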
However, more realistic scenarios would involve creating a single channel and reusing it for multiple RPC calls. I've forked the repo and re-ran the benchmarks for multiple payload sizes:
- Single calls: `grpc.aio` achieved 1.5-2x the throughput of `grpclib`
- Streaming: `grpc.aio` achieved 1.3x-0.9x the throughput of `grpclib`
It seems like `grpc.aio` (blue) actually shows significantly better performance compared to `grpclib` (orange) for single calls, but slightly poorer performance for large streaming requests.

Test conditions: i5-13600K, 32GB RAM, Python 3.11, 1000 loops
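For reference, a minimal sketch of how a single-channel, 1000-loop measurement can be driven with `timeit` (the `Greeter` names are again borrowed from the helloworld example, not this repo's proto):

```python
import asyncio
import timeit

import grpc

# Illustrative names from the standard gRPC helloworld example.
from helloworld_pb2 import HelloRequest
from helloworld_pb2_grpc import GreeterStub


async def run(n: int) -> None:
    # One channel for the whole run; all n unary calls reuse it.
    async with grpc.aio.insecure_channel("localhost:50051") as channel:
        stub = GreeterStub(channel)
        for _ in range(n):
            await stub.SayHello(HelloRequest(name="ping"))


elapsed = timeit.timeit(lambda: asyncio.run(run(1000)), number=1)
print(f"1000 unary calls over one reused channel: {elapsed:.3f}s")
```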