grpc / grpc-web

gRPC for Web Clients
https://grpc.io
Apache License 2.0

Envoy server always returns 404 for gRPC-web proxy #1250

Closed · Alfons0329 closed this issue 7 months ago

Alfons0329 commented 2 years ago

Hello, I modified the "hello world" demo for my project. My goal is to have Envoy accept incoming gRPC-Web calls and proxy them as native gRPC to the backend server. Native gRPC calls from the BloomRPC client work fine, but gRPC-Web calls always fail with

{
  "error": "full url: http://10.17.211.86:9901/catalog.CatalogHandler/GetFile, code: 2, err: Unknown Content-type received."
}

And the log for envoy proxy itself is always

[2022-05-31T10:06:41.870Z] "POST /catalog.CatalogHandler/GetFile HTTP/1.1" 404 - 16 1504 0 - "10.255.255.149" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_16_0) AppleWebKit/537.36 (KHTML, like Gecko) bloom-rpc-client/1.5.3 Chrome/78.0.3904.130 Electron/7.1.11 Safari/537.36" "-" "10.17.211.86:9901" "-"

Here is my config YAML; I'm not sure what I missed. My gRPC server is running on port 50051.

admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 9902 }
      filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              codec_type: auto
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                  - name: local_service
                    domains: ["*"]
                    routes:
                      - match: { prefix: "/" }
                        route:
                          cluster: grpc_backend
                          timeout: 0s
                          max_stream_duration:
                            grpc_timeout_header_max: 0s
                    cors:
                      allow_origin_string_match:
                        - prefix: "*"
                      allow_methods: GET, PUT, DELETE, POST, OPTIONS
                      allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                      max_age: "1728000"
                      expose_headers: custom-header-1,grpc-status,grpc-message
              http_filters:
                - name: envoy.filters.http.grpc_web
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
                - name: envoy.filters.http.cors
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.http.cors.v3.Cors
                - name: envoy.filters.http.router
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: grpc_backend
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      # win/mac hosts: Use address: host.docker.internal instead of address: localhost in the line below
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 0.0.0.0
                    port_value: 50051

And here is the docker-compose file

version: '3'
services:
  envoy:
    image: envoyproxy/envoy-dev:006bbc3614724ead239fcc3a2438b4dd6b9173e6
    ports:
      - 9901:9901
      - 9902:9902
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml

Thanks for the help

sampajano commented 2 years ago

Thanks for the question..

Would you mind comparing your setup with our echo server demo setup to see how they differ? https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/echo

thanks :)

Alfons0329 commented 2 years ago

@sampajano Thank you so much. I just found that my gRPC backend server uses TLS, so I added the following to the cluster config:

transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
And now it worked.
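For completeness, here is where that block sits in the cluster definition. A sketch based on the grpc_backend cluster from the config above (it also aligns load_assignment.cluster_name with the cluster name, which was cluster_0 in the original):

clusters:
  - name: grpc_backend
    connect_timeout: 0.25s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    # Use TLS toward the upstream gRPC server
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
    load_assignment:
      cluster_name: grpc_backend
      endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  address: 0.0.0.0
                  port_value: 50051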

By the way, is there a reference YAML config for forwarding metadata (such as a cookie) from the gRPC-Web headers to the backend server? Every time I use BloomRPC with metadata such as {"cookie": "123123"}, it does not show up in the Envoy log.

[2022-06-01 09:53:37.311][18][debug][router] [source/common/router/router.cc:670] [C0][S1113640055223748911] router decoding headers:
':authority', '10.17.211.86:9902'
':path', '/catalog.CatalogHandler/GetBuildNumber'
':method', 'POST'
':scheme', 'http'
'accept', 'application/grpc-web-text'
'x-user-agent', 'grpc-web-javascript/0.1'
'origin', 'file://'
'x-grpc-web', '1'
'user-agent', 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_16_0) AppleWebKit/537.36 (KHTML, like Gecko) bloom-rpc-client/1.5.3 Chrome/78.0.3904.130 Electron/7.1.11 Safari/537.36'
'content-type', 'application/grpc'
'accept-encoding', 'gzip, deflate'
'accept-language', 'en-US'
'x-forwarded-proto', 'http'
'x-request-id', '06a1f012-36fd-43cb-ac05-b9a2c81fae5e'
'te', 'trailers'
'grpc-accept-encoding', 'identity'

Thank you

sampajano commented 2 years ago

Glad it's working now!


RE cookie:

gRPC metadata is not encoded as HTTP headers, so I'm not surprised that "cookie" didn't work as intended.

However, I'd assume that any cookies in your browser (matching the domain) should be available to the gRPC server as an HTTP header (though I'm not sure what API exposes it; that depends on which server you use).
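One thing worth checking when a header doesn't reach the backend: browsers refuse to send credentials (cookies) to a wildcard CORS origin, and any custom metadata header must be listed in the route's allow_headers or the preflight will strip it. A sketch of adjustments to the cors policy from the config above, assuming a browser client that should send cookies cross-origin (the origin value here is hypothetical):

cors:
  allow_origin_string_match:
    # Credentials are not sent to "*"; list the page's origin explicitly
    - exact: "http://10.17.211.86:8080"
  # Required for the browser to include cookies on cross-origin requests
  allow_credentials: true
  allow_methods: GET, PUT, DELETE, POST, OPTIONS
  # Any custom metadata header must appear here to survive preflight
  allow_headers: keep-alive,user-agent,cache-control,content-type,x-grpc-web,grpc-timeout,custom-header-1
  expose_headers: custom-header-1,grpc-status,grpc-message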

sampajano commented 7 months ago

Closing for now. Feel free to reopen if you still have questions :)