kumahq / kuma

🐻 The multi-zone service mesh for containers, Kubernetes and VMs. Built with Envoy. CNCF Sandbox Project.
https://kuma.io/install
Apache License 2.0

provide an example how to run Kuma (universal mode) and demo app in Docker Compose #367

Closed yskopets closed 2 years ago


yskopets commented 4 years ago

Apparently, it's possible to achieve sidecar-like behaviour in Docker Compose.

Here is a proof-of-concept example (tested on Docker for Mac):

docker-compose.yaml

version: "2"

services:
  app:
    image: busybox
    command: top
  sidecar:
    image: busybox
    command: top
    # the following setting instructs Docker Compose to add `sidecar` container to the network namespace of `app` container
    network_mode: "service:app"

Demo script:

# start 1 instance of `app` and `sidecar` and then scale up to 2
# WARN: when I tried to start 2 instances of each from the beginning, networking configuration was wrong
docker-compose up --detach
docker-compose up --detach --scale app=2 --scale sidecar=2
docker-compose ps

# start HTTP server in each pair of `app + sidecar`
docker-compose exec --detach --index=1 app nc -lk -p 1234 -e echo -e "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"
docker-compose exec --detach --index=2 sidecar nc -lk -p 2345 -e echo -e "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"

# verify that each `app + sidecar` pair shares its own networking namespace 
docker-compose exec --index=1 app netstat -tlpn
docker-compose exec --index=1 sidecar netstat -tlpn
docker-compose exec --index=2 app netstat -tlpn
docker-compose exec --index=2 sidecar netstat -tlpn

# do sample requests: 1) from inside `app + sidecar` pair 2) from another `app + sidecar` pair
docker-compose exec --index=1 sidecar wget -SO- http://localhost:1234
docker-compose exec --index=2 sidecar wget -SO- http://localhost:1234
docker-compose exec --index=1 sidecar wget -SO- http://localhost:2345
docker-compose exec --index=2 sidecar wget -SO- http://localhost:2345

# verify that both containers inside `app + sidecar` pair have the same `eth0` config
docker-compose exec --index=1 app ifconfig eth0
docker-compose exec --index=2 app ifconfig eth0
docker-compose exec --index=1 sidecar ifconfig eth0
docker-compose exec --index=2 sidecar ifconfig eth0

# cleanup
docker-compose down
rucciva commented 4 years ago

Hi @yskopets, in this docker-compose mode, if we would like to expose the dataplane outside of the Docker network, what would the configuration look like?

Is this correct? (For example, if the app listens on port 80 and the host's IP is 192.168.1.1.)

version: "2"

services:
  app:
    image: busybox
    command: top
    ports:
    - 192.168.1.1:1080:1080
  sidecar:
    image: kong-docker-kuma-docker.bintray.io/kuma-dp:0.2.2
    command: run
    network_mode: "service:app"
    environment:
      # kuma_dp required environments
---
type: Dataplane
mesh: default
name: app-1
networking:
  inbound:
  - interface: 192.168.1.1:1080:80
    tags:
      service: app

I'm a bit confused about how the other dataplane will find the IP to connect to this dataplane.

yskopets commented 4 years ago

@rucciva Hey!

in this docker-compose mode, if we would like to expose the dataplane outside of the Docker network, what would the configuration look like?

I'm using Docker Compose on my Mac.

In my case, the following 2 configurations are equivalent:

services:
  app:
    image: busybox
    command: top
    ports:
    - "1080:80"

and

services:
  app:
    image: busybox
    command: top
    ports:
    - "0.0.0.0:1080:80" # notice "0.0.0.0:"

In both cases Docker Compose makes the app available on 0.0.0.0:1080 of my host machine.

Given that host IP is 192.168.1.1, the app (deployed inside Docker Compose) can be accessed from outside as 192.168.1.1:1080

I'm a bit confused about how the other dataplane will find the IP to connect to this dataplane.

In essence, the Control Plane connects clients to servers based on the service tag of the inbound and outbound interfaces in the Dataplane resource.

Every dataplane must announce its own IP address to the Control Plane as part of Dataplane resources, e.g.

type: Dataplane
mesh: default
name: app-1
networking:
  inbound:
  - interface: 192.168.0.5:1080:80 # dataplane's IP address is 192.168.0.5
    tags:
      service: app

Then, when configuring a dataplane for the client, you need to define outbound connections and refer to dependent services by their service tag, e.g.:

type: Dataplane
mesh: default
name: client-1
networking:
  inbound:
  - interface: 192.168.0.6:3000:3000 # dataplane's IP address is 192.168.0.6
    tags:
      service: client
  outbound:
  - interface: :4000
    service: app # reference to the `app` service

After that, when your client application needs to call the app, it should make requests to 127.0.0.1:4000.

client-1 dataplane will be configured by the Control Plane to balance requests received on 127.0.0.1:4000 to 192.168.0.5:1080
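To make the client side concrete, a hypothetical docker-compose.yaml combining these pieces might look roughly like this (the kuma-dp image tag is taken from earlier in the thread; the published port and the environment value are illustrative, not a verified configuration):

```yaml
services:
  client:
    image: busybox
    command: top
    ports:
    - "3000:3000"   # must match the port Envoy binds for the dataplane's inbound interface
  sidecar:
    image: kong-docker-kuma-docker.bintray.io/kuma-dp:0.2.2
    command: run
    # join the network namespace of `client`, as in the sidecar example above
    network_mode: "service:client"
    environment:
      # must resolve to an address reachable by every dataplane
      KUMA_BOOTSTRAP_SERVER_PARAMS_XDS_HOST: kuma-control-plane.internal
```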


By the way, we've prepared an example Docker Compose setup in #383.

It's still undergoing code review, but it's already functional on Mac and Linux.


Thanks again for your interest in Kuma!

rucciva commented 4 years ago

I see, so regardless of the IP address given by the Docker daemon to the container, the IP configured in the dataplane should be the one reachable by the other dataplane, right?

Just to be sure: if I have 2 hosts, 192.168.0.2 and 192.168.0.3, and the app listens on port 80, the docker-compose and dataplane configs would look like this, right?

services:
  app:
    image: busybox
    command: serve --port 80
    ports:

type: Dataplane
mesh: default
name: app-1
networking:
  inbound:
  - interface: 192.168.0.2:1080:80
    tags:
      service: app

services:
  client:
    image: busybox
    command: curl 127.0.0.1:4000
  sidecar:
    image: kong-docker-kuma-docker.bintray.io/kuma-dp:0.2.2
    command: run
    network_mode: "service:client"
    environment:
      # kuma_dp required environments

type: Dataplane
mesh: default
name: client-1
networking:
  inbound:

rucciva commented 4 years ago

I'm asking because I've been getting this error with that kind of setup:

[2019-10-28 11:43:15.764][15][warning][config] [bazel-out/k8-opt/bin/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:87] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure
yskopets commented 4 years ago

I see, so regardless of the IP address given by the Docker daemon to the container, the IP configured in the dataplane should be the one reachable by the other dataplane, right?

Correct.

Just to be sure: if I have 2 hosts, 192.168.0.2 and 192.168.0.3, and the app listens on port 80, the docker-compose and dataplane configs would look like this, right?

Looks good.

I think you need to change ports in docker-compose.yaml on host1, e.g.

ports:
- 1080:80 # notice ":80"

I'm asking because I've been getting this error with that kind of setup

This error indicates that Envoy cannot connect to the xDS server implemented by the Kuma Control Plane.

Check the settings of kuma-cp: the KUMA_BOOTSTRAP_SERVER_PARAMS_XDS_HOST env var must be set to a DNS name (or address) reachable by both dataplanes, e.g. kuma-control-plane.internal.

If setting this variable doesn't help, I will need more information to troubleshoot it further:

  1. start kuma-dp with --admin-port 9901 (to enable Admin interface in Envoy)
  2. dump active Envoy configuration with wget -qO- http://localhost:9901/config_dump (need to run inside kuma-dp container)
  3. dump Envoy metrics with wget -qO- http://localhost:9901/stats (need to run inside kuma-dp container)

Hope that helps.

rucciva commented 4 years ago

I think you need to change ports in docker-compose.yaml on host1, e.g. ports: - 1080:80 # notice ":80"

Isn't it the kuma_dp inbound interface port that needs to be mapped to the external port, so that it can be reached by the other dataplane?

KUMA_BOOTSTRAP_SERVER_PARAMS_XDS_HOST env var must be set to the DNS name (or address) reachable by both dataplanes, e.g. kuma-control-plane.internal

That fixed it. But now another error appears:

[2019-10-28 12:27:51.599][17][warning][config] [source/common/config/grpc_mux_subscription_impl.cc:72] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: Error adding/updating listener(s) inbound:192.168.0.3:1080: cannot bind '192.168.0.3:1080': Cannot assign requested address

Doesn't it seem like the listen address and the advertised address need to be distinguished in a mapped-network environment like this?

BTW, my host IP is 192.168.0.3, and the container that runs the app has IP 172.25.0.2 (which also becomes the IP of the kuma-dp container through the service:app network_mode). And I would like to connect from a dataplane running in a container on host 192.168.0.2.

yskopets commented 4 years ago

Isn't it the kuma_dp inbound interface port that needs to be mapped to the external port, so that it can be reached by the other dataplane?

Sorry, my bad. You're absolutely right. It should be 1080:1080.
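In other words, on host1 the published host port should match the port Envoy binds for the inbound interface (1080), not the application port. A minimal compose fragment illustrating this (other keys elided):

```yaml
services:
  app:
    # ...
    ports:
    - "1080:1080"  # host port 1080 -> Envoy's inbound listener on 1080
```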

Doesn't it seem like the listen address and the advertised address need to be distinguished in a mapped-network environment like this?

We need to think about distinguishing between "listen" and "advertised" addresses.

In the meantime, I would suggest trying to configure transparent proxying (iptables redirection).

Eventually, we will provide an example of how to use transparent proxying with Docker Compose. If you do it first, please share your solution with us. 😉

rucciva commented 4 years ago

We need to think about distinguishing between "listen" and "advertised" addresses.

Cool. Kafka and Redis Cluster did just the same to support running inside Docker.

Eventually, we will provide an example of how to use transparent proxying with Docker Compose. If you do it first, please share your solution with us. 😉

I'm not an expert in iptables, so I guess I'm going to wait for the example. 🙏 Thank you very much for the guidance.

rucciva commented 4 years ago

Hi @yskopets, can I ask another question?

In the proxy template example, Envoy would listen on the 0.0.0.0 address, right? If so, how would other dataplanes discover which IP to use to connect to the corresponding Envoy?

yskopets commented 4 years ago

@rucciva The Control Plane does not interpret configuration defined in a ProxyTemplate.

E.g., a Listener defined in ProxyTemplate will be appended to Envoy configuration "as is". It will not be treated as another inbound interface of a Dataplane.

If you need one application to become a client of another application, that should be expressed in terms of the Dataplane resource and its inbound and outbound interfaces.

rucciva commented 4 years ago

I see, thank you very much 👍

ludov04 commented 4 years ago

I would also be very interested in an example of how to configure transparent proxying with docker-compose.

I'm interested in setting this up in an ECS environment. ECS provides a ProxyConfiguration that they also use to set up proxying for their own AWS App Mesh solution, which also uses Envoy.

I'm wondering how transparent proxying works, basically all outgoing traffic from the app container goes to the sidecar, but then how does the sidecar know what to do with it? Does it look at the hostname? Is it then also able to proxy traffic to the internet if it can't resolve the hostname locally?

jakubdyszkiewicz commented 4 years ago

Hey @ludov04, it's a little bit complicated, but let me try to explain it to you.

Transparent proxying uses iptables to force the traffic to flow through Envoy: iptables redirects everything that goes out to a single port. Envoy then looks at what the original destination IP was. It gets an IP because the address was already resolved by the application (for example, by an HTTP client).

Here is the catch. Normally, Envoy exposes real listeners (each allocating a port) for every outbound destination. So if you have a backend service that communicates with Redis and Elasticsearch, the backend's Envoy opens two ports (say, 1000 for Redis and 1001 for Elasticsearch) on the backend machine, so you can change the URLs in the app to http://localhost:1000 and http://localhost:1001. With transparent proxying, we instead open virtual listeners that do not bind a port but point to the address of the destination application. So, for example, if Redis is available at 192.168.0.1:3000, we open a virtual listener at 192.168.0.1:3000.

Back to the flow, using the example above. Let's say the app (backend) wants to communicate with Redis and has resolved the IP and port 192.168.0.1:3000 from DNS. It makes a request, which is intercepted by Envoy. Envoy sees that the original destination (a TCP feature) was 192.168.0.1:3000, checks whether there is a virtual listener with this address, and if so, uses it. If there is no such listener (for example, you make a request to the IP of google.com), we just pass the request through, so any communication outside of the mesh is not broken.

One more thing. Transparent proxying for now assumes that, when you want to communicate with a destination service, DNS always resolves to only one IP. This is the case on Kubernetes (currently the only environment where we use transparent proxying) because of the Service abstraction. We should, however, remove this restriction with https://github.com/Kong/kuma/issues/561
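For the curious, the iptables side of this can be sketched roughly as follows. This is a minimal illustration of the redirect-to-Envoy idea, not Kuma's actual rules: the chain name, the redirect port (15001), and the Envoy UID (5678) are all illustrative assumptions.

```shell
# Create a chain that redirects outbound TCP to Envoy's catch-all port (illustrative: 15001)
iptables -t nat -N MESH_REDIRECT
iptables -t nat -A MESH_REDIRECT -p tcp -j REDIRECT --to-ports 15001

# Skip traffic generated by Envoy itself (assume it runs as UID 5678),
# otherwise Envoy's own outbound connections would loop back into it
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner 5678 -j RETURN

# Skip loopback traffic, then send everything else through the redirect chain
iptables -t nat -A OUTPUT -p tcp -o lo -j RETURN
iptables -t nat -A OUTPUT -p tcp -j MESH_REDIRECT
```

Envoy then recovers the original destination address via the SO_ORIGINAL_DST socket option, which is what the "original destination" lookup described above relies on.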

Let me know if you've got any questions.

lahabana commented 2 years ago

Here's an example: https://github.com/kumahq/kuma-tools/tree/master/docker-compose