
Per-locality tunneling configurations #34289

Closed tonya11en closed 12 hours ago

tonya11en commented 1 month ago

Per-locality tunneling configurations

There is currently no way to represent an Endpoint that is only reachable via tunnel (CONNECT, GENEVE, Wireguard, etc.). I'd like to propose a new configurable parameter that will define "network reachability" for LocalityLbEndpoints. This proposal will avoid discussing any implementation details. The primary goal is to reach an agreement with the community on the concept and the configuration surface.

Motivation

Consider two "non-peered" networks. One network contains a client, the other network contains some endpoints that the client needs to reach, and the only way for the client to reach those endpoints is via an HTTP CONNECT proxy.

[Diagram: a client in one network reaching endpoints in a second, non-peered network, with an HTTP CONNECT proxy as the only path between them]

If the client and endpoints share a common xDS server/authority, there is no way to represent the endpoints in such a way that the client knows they are reachable via the tunnel.

Proposal

I want to pitch a new parameter to be added to LocalityLbEndpoints that will convey any "network reachability" details for endpoints in the locality. I intend for this new parameter to be easily extended to support different methods of network traversal.

Since H1 CONNECT origination already exists in Envoy, we can start with something like:

message NetworkReachabilityOptions {
  // Access through a tunnel.
  message TunnelOptions {
    // HTTP CONNECT tunneling.
    message HttpConnectOptions {
      // Address of the CONNECT proxy.
      config.core.v3.Address proxy_address = 1;

      // The HTTP version to use when establishing the CONNECT tunnel.
      enum HttpVersion { H1 = 0; }
      HttpVersion version = 2;
    }

    oneof tunnel_opt {
      HttpConnectOptions http_connect = 1;
    }
  }

  oneof reachability_opt {
    // Network is directly reachable. Default.
    google.protobuf.Empty direct = 1;

    // Traverse a tunnel to reach the network.
    TunnelOptions tunnel = 2;
  }
}

By default, the endpoints in the locality are directly reachable and require no hoop-jumping to reach. This means the addresses contained in the endpoints are connected to directly, just as Envoy does today.

Configuring the reachability options with TunnelOptions indicates that all of the locality's endpoints must be accessed via a tunnel. The IP addresses remain the same; however, the connection would be established via HTTP CONNECT.
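
To make the configuration surface concrete, here is a minimal sketch of how this could look in a ClusterLoadAssignment. It is purely illustrative: the network_reachability field name, the addresses, and the cluster name are all hypothetical, since none of this exists in the API today.

cluster_name: remote_service
endpoints:
- locality:
    region: remote-region
  lb_endpoints:
  - endpoint:
      address:
        socket_address:
          address: 10.0.1.5
          port_value: 8080
  # Hypothetical field: every endpoint in this locality is reached by
  # CONNECT-ing through the proxy below rather than being dialed directly.
  network_reachability:
    tunnel:
      http_connect:
        proxy_address:
          socket_address:
            address: 203.0.113.10
            port_value: 3128
        version: H1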


I'd like some input from folks (especially maintainers). Does this make sense? Would it be better another way?

If this sounds sane, I'll follow up with how I think we should approach the implementation. Thanks!

tonya11en commented 1 month ago

/assign tonya11en

kyessenov commented 1 month ago

Where would the configuration for the tunnel go? Is this materially different from using an internal listener as a tunnel client in the endpoint? I'm referring to the protocol-level settings (e.g. telemetry filters, protocol options for HTTP versions >= 2).

tonya11en commented 1 month ago

The tunnel config would go in the NetworkReachabilityOptions message above, which would sit in LocalityLbEndpoints and apply to the endpoints it contains.

Can you link to what settings you refer to?

kyessenov commented 1 month ago

The way it's done in Istio is as follows:

name: internal_outbound
load_assignment:
  cluster_name: internal_outbound
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          envoy_internal_address:
            server_listener_name: internal_outbound
      metadata:
        filter_metadata:
          envoy.filters.listener.original_dst:
            local: 192.168.1.2
transport_socket:
  name: envoy.transport_sockets.internal_upstream
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.internal_upstream.v3.InternalUpstreamTransport
    passthrough_metadata:
    - name: envoy.filters.listener.original_dst
      kind: { host: {}}
    transport_socket:
      name: envoy.transport_sockets.raw_buffer
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.raw_buffer.v3.RawBuffer

An internal listener listens on internal_outbound and can execute the tunneling protocol (e.g. MASQUE or bare-bones H2 CONNECT). The well-known dynamic metadata programs the "original destination", as described in https://www.envoyproxy.io/docs/envoy/latest/configuration/listeners/listener_filters/original_dst_filter#internal-listeners.
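
For completeness, a rough sketch (assumed, not taken from this thread) of the internal-listener side that the cluster above pairs with: it accepts streams on internal_outbound, restores the original destination from the passed-through metadata via the original_dst listener filter, and originates the tunnel with the TCP proxy's tunneling_config. The connect_originate cluster (which would point at the CONNECT proxy) and the stat prefix are made up for illustration.

name: internal_outbound
internal_listener: {}
listener_filters:
- name: envoy.filters.listener.original_dst
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.listener.original_dst.v3.OriginalDst
filter_chains:
- filters:
  - name: envoy.filters.network.tcp_proxy
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: connect_tunnel
      # Illustrative cluster that points at the HTTP CONNECT proxy itself.
      cluster: connect_originate
      tunneling_config:
        # Use the restored original destination as the CONNECT authority.
        hostname: "%DOWNSTREAM_LOCAL_ADDRESS%"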

I agree that it would be nice to carry the actual address instead of the internal address, but that's mostly an aesthetic choice, and xDS is not meant to be human-readable.

tonya11en commented 1 month ago

Alright, I see. I think everything I'd need for this is in the TCP proxy filter's TunnelingConfig parameter.

Thanks @kyessenov! Closing this issue out.

tonya11en commented 1 month ago

I need to reopen this. While @kyessenov's approach works for Envoy without any changes, it doesn't work for other xDS-programmable dataplanes like gRPC. gRPC doesn't have a TCP proxy filter, since it's not a proxy :(.

tonya11en commented 1 month ago

CC @markdroth for input. Is this feasible for gRPC?

kyessenov commented 1 month ago

@tonya11en @markdroth gRPC has to have a notion of a tunneling listener to support tunneling, no? The internal address represents a client endpoint for that listener, and is sufficiently abstract since it is just a symbol.

github-actions[bot] commented 1 week ago

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.

github-actions[bot] commented 12 hours ago

This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.