Thanks for the question :)
Curious if you are using the Envoy proxy as documented here? https://github.com/grpc/grpc-web#2-run-the-server-and-proxy
Yes, I use Envoy. Here is my config:
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8921 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: greeter_service
                  timeout: 0s
                  max_stream_duration:
                    grpc_timeout_header_max: 0s
              cors:
                allow_origin_string_match:
                - prefix: "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: custom-header-1,grpc-status,grpc-message
          http_filters:
          - name: envoy.filters.http.grpc_web
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
          - name: envoy.filters.http.cors
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: greeter_service
    connect_timeout: 10.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 0.0.0.0
                port_value: 50057
Ah, thanks for the info :)
While I don't know exactly what isn't working, one thing you can do is compare with the echo example (which has a streaming mode) and see how the client/server responses differ: https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/echo
Hope that helps :)
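For comparison, a minimal server-streaming call against the echo example looks roughly like the sketch below. The module, method, and field names are taken from the echo example's generated code and are assumptions here; double-check them against your own generated files.

const {EchoServiceClient} = require('./echo_grpc_web_pb.js');
const {ServerStreamingEchoRequest} = require('./echo_pb.js');

// Point the client at the Envoy proxy in front of the gRPC server.
const client = new EchoServiceClient('http://localhost:8080');

const request = new ServerStreamingEchoRequest();
request.setMessage('hello');
request.setMessageCount(5);

// Server-streaming call: responses arrive via 'data' events.
const stream = client.serverStreamingEcho(request, {});
stream.on('data', (response) => console.log(response.getMessage()));
stream.on('error', (err) => console.error(err.code, err.message));
stream.on('end', () => console.log('stream ended'));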
Thanks for your reply!
I am afraid the echo example does not use a bytes field, and the problem occurs only with this type of proto field. Proto messages without bytes fields work just fine, and bytes fields also work fine in grpcweb mode.
Maybe I need to make some changes to the client or server if I want to use a bytes field in grpcwebtext mode? I've read about some encryption in grpcwebtext, maybe this is the key?
Ahh, I see! Thanks for the info!
Good to know that the issue is ONLY with bytes fields.
It could be a bug in proto decoding or the grpc-web stack.

"Maybe I need to make some changes to the client or server if I want to use a bytes field in grpcwebtext mode? I've read about some encryption in grpcwebtext, maybe this is the key?"

I don't think we specifically "encrypt" the request / response.
Maybe you're referring to this thread? I guess the technique mentioned there could help you check whether the response is valid, to narrow down whether it's a server or a client issue.
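For what it's worth, grpc-web-text mode is base64 encoding rather than encryption: the response body (content-type application/grpc-web-text) is base64-encoded framed messages, each frame being 1 flag byte plus a 4-byte big-endian length plus the payload. A rough helper like the one below (my own sketch, not part of the library) can decode a single base64 chunk copied from the browser's Network tab and show whether the server's frame looks sane. Streamed bodies may contain several concatenated base64 chunks, which would need to be split at the '=' padding first.

// Decode one base64 chunk of a grpc-web-text response and print its frame header.
function inspectGrpcWebTextChunk(chunk) {
  const bytes = Uint8Array.from(atob(chunk), (c) => c.charCodeAt(0));
  const isTrailers = (bytes[0] & 0x80) !== 0;             // MSB set => trailers frame
  const length = new DataView(bytes.buffer).getUint32(1); // 4-byte big-endian payload length
  console.log(isTrailers ? 'trailers frame' : 'data frame', 'payload length:', length);
  return bytes.slice(5, 5 + length);                      // raw payload (protobuf bytes or trailer text)
}

For a trailers frame, the returned payload is plain text such as "grpc-status:0", which makes it easy to see whether the server side answered correctly.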
Thanks for your help! Somehow, after a few iterations of development, I can't reproduce this issue any more. The sad thing is that I didn't notice what I changed to fix it :( But now everything works just fine! So thank you very much!
Oh ok.. 😅 In any case, glad it works for you now! 😃
Hello!
I am trying to create a client for my Python gRPC server.
My server implements server-side streaming methods and unary calls. As the library documentation says, if I need streaming, I should compile my proto files with the mode=grpcwebtext option.
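For reference, the compile step from the grpc-web README looks roughly like this in text mode; the proto file name and output directory below are placeholders:

protoc -I=. your_service.proto \
  --js_out=import_style=commonjs:./generated \
  --grpc-web_out=import_style=commonjs,mode=grpcwebtext:./generated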
But I also have the following unary call method with a bytes field in the request:
The client code is:
If I compile my proto files with the option mode=grpcweb, this RPC method works just fine (but the other server-streaming methods don't).
But if I compile my proto files with the option mode=grpcwebtext and try to use this method, I get the following exception:
Please, if you know what I am doing wrong, let me know. Thanks for your help!
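The proto, client-code, and exception snippets from the original post are not shown above. Purely as a hypothetical illustration (all names are invented, not the reporter's actual code), a grpc-web unary call that sends a bytes field usually passes the payload as a Uint8Array:

const {UploadRequest} = require('./my_service_pb.js');            // hypothetical generated message module
const {MyServiceClient} = require('./my_service_grpc_web_pb.js'); // hypothetical generated client module

// Points at the Envoy listener from the config above (port 8921).
const client = new MyServiceClient('http://localhost:8921');

const request = new UploadRequest();
// Generated setters for bytes fields accept a Uint8Array (or a base64 string).
request.setPayload(new Uint8Array([0x01, 0x02, 0x03]));

// Unary call: the callback receives either an error or the decoded response.
client.upload(request, {}, (err, response) => {
  if (err) {
    console.error('grpc error', err.code, err.message);
  } else {
    console.log('response:', response.toObject());
  }
});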