klutchell opened this issue 11 months ago
A better solution would be a bridge network for each VM that listens on all ports.
Perhaps we connect to the existing docker bridge?
Looking at the current implementation, the best option IMO is to add a small HAProxy load balancer to the `remote-workers-*` compositions, which will map microVM `ip:port` pairs to the host's public interface (e.g. statically):
```
defaults
    mode tcp
    log stdout format raw daemon
    option tcplog
    timeout connect 5s
    timeout client 3600s  # increase if required
    timeout server 3600s  # increase if required

listen tcp-2376
    bind :2376
    server vm1 10.152.66.2:2376
    ...

listen tcp-2377
    bind :2377
    ...
```
This works because the tun-tap interfaces are already available on the hostOS, so an HAProxy service running with host networking will be able to proxy to them. We'll just need to set well-known subnets, since random subnets won't work with a static HAProxy config (unless we pipe it through `envsubst`).
.. outside of the MVP scope, for multiple VMs we would expand to multiple frontend listeners (e.g. `tcp-23761`, `tcp-23762`, etc.).
.. for dynamic service discovery, we can implement a solution with HAProxy + the Firecracker API. For example, running a supervisor process that polls the local Firecracker APIs and updates HAProxy backends via its runtime socket (mapped into each jailer container).
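A rough sketch of what such a supervisor step could look like; the socket path, backend/server names, and the `get_vm_ip` helper are hypothetical, and the real invocation (commented out) would pipe the HAProxy runtime API `set server` command into the socket:

```shell
#!/bin/sh
# Hypothetical supervisor sketch: look up a microVM's current address
# and push it into HAProxy via the runtime API socket. The socket path,
# server names, and get_vm_ip are assumptions for illustration.
HAPROXY_SOCK="/var/run/haproxy.sock"

# Placeholder for querying the Firecracker API for a VM's address;
# a real implementation would talk to the firecracker API socket
# mapped into the jailer container.
get_vm_ip() {
    echo "10.152.66.2"
}

update_backend() {
    backend="$1"; server="$2"; addr="$3"; port="$4"
    cmd="set server ${backend}/${server} addr ${addr} port ${port}"
    echo "$cmd"                                   # printed here instead of applied
    # echo "$cmd" | socat stdio "$HAPROXY_SOCK"   # real invocation
}

update_backend tcp-2376 vm1 "$(get_vm_ip)" 2376
```

A real supervisor would run this in a polling loop, one `update_backend` call per VM.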
@ab77 how is docker able to forward traffic from a public interface to an address on the docker bridge using only iptables and routes? I was hoping we could solve it in a similar way, even if we hardcode a bunch of it initially.
> @ab77 how is docker able to forward traffic from a public interface to an address on the docker bridge using only iptables and routes? I was hoping we could solve it in a similar way, even if we hardcode a bunch of it initially.
Using a TCP proxy. Managing iptables dynamically is much more complicated and messy, and we should avoid it.
I ask because we already add and remove rules as part of the TAP device creation so it seemed like a logical place to add some additional port rules and remain part of the overlay vs a separate sidecar app.
> I ask because we already add and remove rules as part of the TAP device creation so it seemed like a logical place to add some additional port rules and remain part of the overlay vs a separate sidecar app.
Let's create a builder deployment first with both containerised and VM builders (let's start with amd64), deploy to staging, and figure out the approach there.
I have another idea for this that solves multiple related problems. We can discuss when I’m back, so don’t spend too much time on it for now.
--
Kyle Harding, Embedded Linux Engineer, balena.io
On Thu, Feb 22, 2024 at 5:54 PM, Anton Belodedenko wrote:

> WIP: [balena-io/remote-workers#180](https://github.com/balena-io/remote-workers/pull/180)
The host container is already running in host networking mode in order to create the TUN/TAP device; we just need to accept some port ranges (via env vars?) and create the required iptables rules, similar to how Docker would do it.
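As a sketch of what those Docker-style rules would amount to: the `PORTS` variable format and the addresses are illustrative assumptions, and the rules are printed rather than applied (applying them needs root and would happen alongside the TAP setup):

```shell
#!/bin/sh
# Sketch: derive Docker-style forwarding rules from an env var of
# host-port:vm-ip:vm-port mappings. PORTS and the subnet are
# illustrative; 'echo iptables' prints the rules instead of applying
# them (replace with plain 'iptables' and run as root to apply).
PORTS="${PORTS:-2376:10.152.66.2:2376 2377:10.152.66.3:2376}"
IPT="echo iptables"

for map in $PORTS; do
    host_port="${map%%:*}"
    rest="${map#*:}"
    vm_ip="${rest%%:*}"
    vm_port="${rest##*:}"

    # DNAT inbound traffic on the host port to the microVM, the same
    # shape of rule Docker installs for published container ports.
    $IPT -t nat -A PREROUTING -p tcp --dport "$host_port" \
        -j DNAT --to-destination "${vm_ip}:${vm_port}"
    # Masquerade traffic headed to the VM so replies return via the host.
    $IPT -t nat -A POSTROUTING -p tcp -d "$vm_ip" --dport "$vm_port" \
        -j MASQUERADE
    # Permit forwarding toward the microVM address.
    $IPT -A FORWARD -p tcp -d "$vm_ip" --dport "$vm_port" -j ACCEPT
done
```

This is roughly the rule set per published port; cleanup on VM teardown would delete the same rules with `-D` in place of `-A`.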