Problem:
The TPU has complex logic to assign connection capacity to TPU clients; it considers staked vs. unstaked connections, per-peer/per-node limits, and QUIC stream limits.
Distributing the stake identity across many nodes is a potential security problem; the number of staked nodes should be minimized.
lite-rpc nodes cannot be stateless and scaled out if stake is assigned to each of them.
Proposed solution overview:
Architecture overview:
many TPUs (solana validators)
one quic-forward-proxy
many clients
clients connect to proxy and proxy connects to TPU
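Because the proxy connects to the TPU of the current leader, it (or its clients) must map slots to leader TPU addresses. A minimal sketch of that mapping, assuming a flat in-memory leader list; `leader_for_slot` is a hypothetical helper, and `NUM_CONSECUTIVE_LEADER_SLOTS = 4` mirrors Solana's leader schedule, where each leader serves four consecutive slots:

```rust
use std::net::SocketAddr;

// Solana leaders serve this many consecutive slots each.
const NUM_CONSECUTIVE_LEADER_SLOTS: u64 = 4;

// Hypothetical helper: pick the TPU address of the leader for a given slot
// from a rotating leader list (in practice the schedule would come from the
// leader schedule known by the client, e.g. via the lite-rpc-library).
fn leader_for_slot(leaders: &[SocketAddr], slot: u64) -> SocketAddr {
    let idx = (slot / NUM_CONSECUTIVE_LEADER_SLOTS) as usize % leaders.len();
    leaders[idx]
}
```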
Proposed solution details:
Implement a QUIC proxy service (independently deployable network service).
The QUIC proxy receives transactions from its clients and forwards them to the TPU via QUIC.
A dedicated protocol needs to be used between the clients and the QUIC proxy.
Service must accept multiple inbound QUIC connections.
Clients of the proxy service might be lite-rpc instances, lite-rpc-library users or other components (e.g. bots).
The proxy service must use its assigned stake identity to sign the TLS connection to the TPU target node.
Senders must specify the TPU target node (based on the leader schedule known by the client, usually by using the lite-rpc-library).
Must support forwarding of transactions.
Expect that one instance of the service is running (or at least that it appears as a single IP address/peer to the TPU).
The quic-proxy must be powerful enough to handle all ingress traffic.
No streaming is required; use QUIC multiplexing (multiple streams per connection) to connect to the TPU.
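The dedicated client→proxy protocol mentioned above is not specified here; as an illustration only, a transaction plus its TPU target could be framed as a length-prefixed message. All names and the byte layout below are assumptions (IPv4-only and big-endian for brevity), not the actual lite-rpc wire format:

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

// Hypothetical wire format for one client -> proxy request:
// [1 byte version][4 bytes IPv4 of TPU][2 bytes port][4 bytes tx length][tx bytes]
const FORMAT_VERSION: u8 = 1;

// Serialize a (TPU target, transaction bytes) pair into one frame.
fn encode_request(tpu: SocketAddr, tx: &[u8]) -> Option<Vec<u8>> {
    let ip = match tpu.ip() {
        IpAddr::V4(ip) => ip,
        IpAddr::V6(_) => return None, // IPv4-only in this sketch
    };
    let mut buf = Vec::with_capacity(11 + tx.len());
    buf.push(FORMAT_VERSION);
    buf.extend_from_slice(&ip.octets());
    buf.extend_from_slice(&tpu.port().to_be_bytes());
    buf.extend_from_slice(&(tx.len() as u32).to_be_bytes());
    buf.extend_from_slice(tx);
    Some(buf)
}

// Parse a frame back into the TPU target and transaction bytes.
fn decode_request(buf: &[u8]) -> Option<(SocketAddr, Vec<u8>)> {
    if buf.len() < 11 || buf[0] != FORMAT_VERSION {
        return None;
    }
    let ip = Ipv4Addr::new(buf[1], buf[2], buf[3], buf[4]);
    let port = u16::from_be_bytes([buf[5], buf[6]]);
    let len = u32::from_be_bytes([buf[7], buf[8], buf[9], buf[10]]) as usize;
    if buf.len() != 11 + len {
        return None;
    }
    Some((SocketAddr::new(IpAddr::V4(ip), port), buf[11..].to_vec()))
}
```

The proxy would read such frames from inbound QUIC streams, then open its own stake-signed QUIC streams to the indicated TPU target and forward the raw transaction bytes.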
Similar approaches:
Note: there is a similar/alternative approach for client-TPU interaction using a reverse-proxy model:
"node must advertise to gossip for inbound traffic that is designated for leader"