phoenix-dataplane / phoenix

Phoenix dataplane system service
https://phoenix-dataplane.github.io
Apache License 2.0
50 stars 9 forks

Add more Engine Implementations #238

Open Romero027 opened 1 year ago

Romero027 commented 1 year ago

This issue tracks the list of engines that we plan to implement.

See also: https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/http_filters

Romero027 commented 1 year ago

cc @crazyboycjr @livingshade @Kristoff-starling

crazyboycjr commented 1 year ago

cc @denx20

livingshade commented 1 year ago

Is there an example of specifying multiple input/output queues? Say I have 3 endpoints: client A and servers B and C, and I want to randomly choose a server when sending an RPC from the client (assume the servers are functionally identical). I believe we need that sort of feature to implement load balancing, traffic mirroring, or anything else related to routing.

crazyboycjr commented 1 year ago

There's unfortunately no such example yet; you would need to build it yourself. Here are some tips, though, and we can definitely discuss more.

  1. RpcAdapter already supports multiple connections. What it takes as input is just an EngineTxMessage, in which you can specify a conn_id.
  2. I would start by changing the userland API. There are two things I would consider changing: (1) previously, connect took a concrete host:port and used DNS to resolve the address; now I would add one or more APIs such as connect_multiple_addresses(&['a-list-of-addrs']) or connect_with_resolver('service-name', SomeResolver::new()) to let one client connect to multiple addresses at once. (2) We want load balancing implemented in the backend service, so the userland library doesn't have to choose among multiple conn_ids; hence we need a virtual conn_id. The mapping from a VirtualConnId to an actual ConnId can be maintained in a backend Engine (maybe call it Router or LoadBalancer?).
  3. With all of that set up, a request starts from the user app (with vconn_id, call_id), goes to MrpcEngine, then to LoadBalancer (which maps vconn_id to conn_id), then to RpcAdapter/TcpRpcAdapter.
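The vconn_id-to-conn_id mapping in steps 2 and 3 could be sketched roughly as below. Note this is a minimal illustration, not the actual mRPC engine code: the `VirtualConnId`/`ConnId` types, the `LoadBalancer` struct, and the round-robin policy are all assumptions for the sake of the example.

```rust
use std::collections::HashMap;

// Hypothetical id types, standing in for mRPC's real connection ids.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct VirtualConnId(u64);
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct ConnId(u64);

/// Minimal LoadBalancer engine state: maps one virtual connection to a
/// pool of concrete connections and picks one per request (round-robin).
struct LoadBalancer {
    pools: HashMap<VirtualConnId, Vec<ConnId>>,
    next: HashMap<VirtualConnId, usize>,
}

impl LoadBalancer {
    fn new() -> Self {
        Self { pools: HashMap::new(), next: HashMap::new() }
    }

    /// Register the pool of real connections backing a virtual connection.
    fn register(&mut self, vconn: VirtualConnId, conns: Vec<ConnId>) {
        self.pools.insert(vconn, conns);
        self.next.insert(vconn, 0);
    }

    /// Resolve a virtual conn_id to a concrete one, advancing the
    /// round-robin cursor for that virtual connection.
    fn pick(&mut self, vconn: VirtualConnId) -> Option<ConnId> {
        let pool = self.pools.get(&vconn)?;
        if pool.is_empty() {
            return None;
        }
        let idx = self.next.get_mut(&vconn)?;
        let conn = pool[*idx % pool.len()];
        *idx = (*idx + 1) % pool.len();
        Some(conn)
    }
}

fn main() {
    let mut lb = LoadBalancer::new();
    let v = VirtualConnId(1);
    lb.register(v, vec![ConnId(10), ConnId(11)]);
    // Requests on the same vconn_id alternate between the two servers.
    assert_eq!(lb.pick(v), Some(ConnId(10)));
    assert_eq!(lb.pick(v), Some(ConnId(11)));
    assert_eq!(lb.pick(v), Some(ConnId(10)));
    println!("round-robin ok");
}
```

A random policy (as in the original question) would just replace the round-robin cursor with a random index into the pool; the mapping structure stays the same.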

In my understanding, a Router matches an RPC service name (potentially together with an RPC function name or func_id) to a pool of connections (maybe called a service cluster, or simply a 'Cluster'), while a LoadBalancer chooses a concrete connection to forward the request to. It would be ideal to have both, so that mRPC can encode custom routing rules. However, MessageMeta only carries a service_id and func_id, which are auto-generated and thus difficult to write rules against. We can focus on LoadBalancer for now and leave Router aside.
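To make the Router/Cluster distinction above concrete, here is a rough sketch of a Router that matches (service_id, func_id) rules against named clusters. All names and the rule format are hypothetical, not part of the mRPC codebase; it only illustrates why rules over auto-generated ids are awkward to write by hand.

```rust
use std::collections::HashMap;

// Hypothetical connection id, standing in for mRPC's real ConnId.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct ConnId(u64);

/// A Cluster is a pool of connections serving the same RPC service;
/// a LoadBalancer would then pick one connection out of the cluster.
type Cluster = Vec<ConnId>;

/// Matches (service_id, optional func_id) from MessageMeta to a named
/// cluster. `None` for func_id means "any function of that service".
struct Router {
    rules: Vec<((u32, Option<u32>), String)>,
    clusters: HashMap<String, Cluster>,
}

impl Router {
    /// Return the cluster for the first rule matching this RPC, if any.
    fn route(&self, service_id: u32, func_id: u32) -> Option<&Cluster> {
        let name = self.rules.iter().find_map(|((s, f), name)| {
            if *s == service_id && f.map_or(true, |f| f == func_id) {
                Some(name)
            } else {
                None
            }
        })?;
        self.clusters.get(name)
    }
}

fn main() {
    let mut clusters = HashMap::new();
    clusters.insert("greeter".to_string(), vec![ConnId(1), ConnId(2)]);
    let router = Router {
        // Rules must refer to raw service ids, since service_id/func_id
        // are auto-generated; this is the usability problem noted above.
        rules: vec![((42, None), "greeter".to_string())],
        clusters,
    };
    let cluster = router.route(42, 7).expect("rule should match");
    assert_eq!(cluster.len(), 2);
    println!("routed to cluster of {} conns", cluster.len());
}
```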