Closed — mbaglole closed this issue 3 years ago
cc @junr03 @rojkov
I could reproduce the problem with the latest grpcurl and Envoy. It seems grpcurl ignores the response header content-encoding: gzip
and tries to interpret the compressed stream as a normal Protobuf message.
After I added grpc-encoding: gzip to the response, grpcurl started to fail with:
$ grpcurl -H "accept-encoding: gzip" --plaintext -proto yages-schema.proto -import-path /home/rojkov/work/yages localhost:10000 yages.Echo/Ping
ERROR:
Code: Unimplemented
Message: grpc: Decompressor is not installed for grpc-encoding "gzip"
Looks like the gRPC clients have to support compression too.
@rojkov there may have been a slight misunderstanding here.
I did further digging, and it's due to HTTP-level compression on the response not being decompressed before the bytes reach the gRPC client. If we apply HTTP decompression to the response and then hand the result to the gRPC client, it works as expected.
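The failure mode can be illustrated with a small sketch (plain Python, no gRPC libraries; the byte values are illustrative, not the exact stream from this repro). gRPC messages are length-prefixed with a 5-byte header: 1 compressed-flag byte followed by a 4-byte big-endian message length. If Envoy gzips the whole HTTP response body, the client parses gzip output as if it were that header and reads an absurd length, which trips the client's max-message-size check (4 MiB by default, hence the 4194304 in the error):

```python
import gzip
import struct

def parse_grpc_frame_header(data: bytes):
    """Parse the 5-byte gRPC length-prefixed message header:
    1 compressed-flag byte + 4-byte big-endian message length."""
    compressed_flag, msg_len = struct.unpack(">BI", data[:5])
    return compressed_flag, msg_len

# A well-formed, uncompressed gRPC frame carrying a 3-byte message
# (the payload bytes are an arbitrary stand-in for a protobuf message).
good_frame = struct.pack(">BI", 0, 3) + b"\x0a\x01x"
print(parse_grpc_frame_header(good_frame))  # → (0, 3)

# Now gzip the *entire* frame, as HTTP-level response compression would,
# and parse the compressed bytes as if they were a gRPC frame header.
# The gzip magic bytes (1f 8b 08 00 ...) get misread as flag + length.
http_gzipped = gzip.compress(good_frame, mtime=0)
flag, bogus_len = parse_grpc_frame_header(http_gzipped)
print(flag, bogus_len)  # → 31 2332557312
```

In the real repro the misread length happened to be 276824064; the exact garbage value depends on which compressed bytes the client parses, but anything over the 4 MiB default triggers ResourceExhausted.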
Title: Response compression for application/grpc results in RESOURCE_EXHAUSTED (Envoy v1.18.3)
Description:
Calling a gRPC service (content-type application/grpc) with response compression (gzip or brotli) enabled results in 100% failure on the client side. The following error is thrown:
error="rpc error: code = ResourceExhausted desc = grpc: received message larger than max (276824064 vs. 4194304)"
I see the gRPC server receive the request, and a 200 OK is sent back, including the data and the relevant gRPC trailers. However, I do not understand what causes the ResourceExhausted in the gRPC client (grpcurl, golang, python).
Repro steps:
Response compression (response_direction) enabled on the listener, with content-type including application/grpc.
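A common way to avoid this is to keep application/grpc out of the compressor filter's content_type allow-list, so HTTP-level compression never rewrites the gRPC frame bytes. A sketch against the Envoy v3 compressor API (field names should be checked against the Envoy version in use):

```yaml
# Illustrative listener HTTP filter entry: compress common text types,
# but leave application/grpc untouched so the gRPC framing survives.
- name: envoy.filters.http.compressor
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.compressor.v3.Compressor
    response_direction_config:
      common_config:
        content_type:
          - application/json
          - text/html
          - text/plain
    compressor_library:
      name: gzip
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.http.compressors.gzip.v3.Gzip
```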
Admin and Stats Output:
Config:
Logs:
Call Stack:
NOTE: I will be OOO next week, I may have a delayed response.