kasmtech / workspaces-issues


Agent server kasm_proxy exposes HTTPS on all interfaces #45

Open Ian-Kasmweb opened 3 years ago

Ian-Kasmweb commented 3 years ago

Original report by Lev Elupirl (Bitbucket).


On the Agent server, the /opt/kasm/1.9.0/docker/docker-compose.yaml file contains the following host to container port mapping:

  proxy:
    container_name: kasm_proxy
    image: "kasmweb/nginx:latest"
    ports:
      - "443:443"

In my testing, the kasm_proxy can be reached by end-user sessions. If I understand the Kasm architecture correctly, these Docker containers shouldn’t need direct HTTPS access to kasm_proxy. The current configuration exposes HTTPS on all interfaces of the Agent server, which is a security risk due to the increased attack surface, as shown below.

This is a session running the Terminal image with IP address 172.18.0.4 assigned to the kasm_default_network Docker network:

default:~$ ip a ls eth0 | grep inet
    inet 172.18.0.4/16 brd 172.18.255.255 scope global eth0

default:~$ curl --insecure https://172.18.0.1
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.8</center>
</body>
</html>

default:~$ curl --insecure https://172.17.0.1
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.8</center>
</body>
</html>

default:~$ curl --insecure https://10.128.0.5
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.8</center>
</body>
</html>

Notice that Nginx on kasm_proxy is reachable on all of these IPs: the kasm_default_network gateway (172.18.0.1), the default docker0 bridge gateway (172.17.0.1), and the Agent server’s external address (10.128.0.5).

On the Agent server, the kasm_proxy logs show the GET requests:

kasm@agent:~$ sudo docker logs kasm_proxy | grep curl
172.18.0.1 - - [13/Jul/2021:06:05:22 +0000] "GET / HTTP/1.1" 404 153 "-" "curl/7.58.0" "-"
172.18.0.1 - - [13/Jul/2021:06:05:29 +0000] "GET / HTTP/1.1" 404 153 "-" "curl/7.58.0" "-"
172.18.0.1 - - [13/Jul/2021:06:05:41 +0000] "GET / HTTP/1.1" 404 153 "-" "curl/7.58.0" "-"
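The wildcard bind can also be confirmed from the host side by inspecting the listening sockets. A minimal sketch, with `ss -tln`-shaped output inlined as sample data so the check is self-contained; on the Agent server you would capture it live with `sudo ss -tln` instead:

```shell
# Sample output in the shape produced by `ss -tln` (local address is column 4).
# On a real Agent server, capture it live:  sudo ss -tln
ss_output='LISTEN 0 511    0.0.0.0:443   0.0.0.0:*
LISTEN 0 128 10.128.0.5:22  0.0.0.0:*'

# Flag any listener bound to the wildcard address on port 443.
flag=$(echo "$ss_output" | awk '$4 == "0.0.0.0:443" { print "HTTPS is bound to ALL interfaces" }')
echo "$flag"
```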

My fix is to hard-code the external IP address of the Agent server in the docker-compose.yaml:

  proxy:
    container_name: kasm_proxy
    image: "kasmweb/nginx:latest"
    ports:
      - "10.128.0.5:443:443"

After completely restarting all Kasm services, I logged into the Terminal image again and can no longer reach the previously accessible IPs:

default:~$ curl --insecure https://172.18.0.1
curl: (7) Failed to connect to 172.18.0.1 port 443: Connection refused

default:~$ curl --insecure https://172.17.0.1
curl: (7) Failed to connect to 172.17.0.1 port 443: Connection refused

default:~$ curl --insecure https://10.128.0.5
curl: (7) Failed to connect to 10.128.0.5 port 443: Connection refused
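The new host-side binding can also be sanity-checked. A minimal sketch, with a `docker port`-shaped value inlined as sample data; on the Agent server you would capture it with `sudo docker port kasm_proxy 443`:

```shell
# Sample value in the shape `docker port kasm_proxy 443` prints after the fix.
# On the Agent server:  bind=$(sudo docker port kasm_proxy 443)
bind='10.128.0.5:443'

# Report whether the proxy is still on the wildcard address.
case "$bind" in
  0.0.0.0:*) echo "still bound to all interfaces" ;;
  *)         echo "bound to ${bind%:*} only" ;;
esac
```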

Ian-Kasmweb commented 3 years ago

Original comment by Matt Mcclaskey (Bitbucket).


Hello, thank you for the feedback. You are absolutely right, the proxy is bound to all interfaces. First, let’s walk through what happens with the proxy when a user container is created. When a user container is created, and it is the first container created on the specified docker network (Kasm supports multiple docker networks, and you can assign an image to a specific docker network), the agent connects the kasm_proxy container to the target docker network. Next, the agent dynamically creates an NGINX config for that container and reloads the nginx process to pick up the changes.

All that being said, the kasm_proxy is on the same docker network as the user container, and containers on the same docker network are exposed to one another. I tested your configuration this morning, and while it does protect a user container on docker network A from hitting the proxy on the IP address in docker network B, they can still hit the proxy on the address for docker network A. So, at least from a docker networking perspective, there is no way not to expose the proxy to user containers; we need to proxy the streams (desktop, audio, etc.) to/from the container.

I would agree, however, that more security is needed, but that additional security would have to come from the NGINX configuration, perhaps by binding NGINX services to a single interface. There is no reason NGINX needs to listen on the interfaces within the user container networks today. In the future, however, we may want the containers to be able to make API calls upstream, and binding to a single interface would break that. We definitely can’t assume that user containers can talk to the API servers directly, so any upstream communications would need to flow through the proxy attached to their network. Docker networking can get quite complex. We have some users using IPVLAN docker networks and directly attaching user containers to physical networks on VLANs, so that they know what IP addresses users get on the physical network and all outbound communications go through firewalls. Therefore, we definitely need the agent’s proxy to be accessible to the user containers.

The next best thing might be to use IP access rules in the nginx config to block requests from the docker networks. As you can imagine, this could get quite complicated. I’ve opened an internal ticket to research this further.
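For illustration, an NGINX IP access rule of the kind described above might look like the fragment below. This is a hypothetical sketch, not Kasm’s shipped configuration; the location path is made up, and the subnets are just the ones seen in this report.

```nginx
# Hypothetical sketch -- not actual Kasm configuration.
# Inside the relevant server block, reject requests that originate
# from the docker user-container subnets.
location /api/ {
    deny  172.17.0.0/16;   # default docker0 bridge
    deny  172.18.0.0/16;   # kasm_default_network
    allow all;
}
```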

Ian-Kasmweb commented 3 years ago

Original comment by Lev Elupirl (Bitbucket).


Thanks for the quick reply and explanation Matt.

Regarding this comment:

> they can still hit the proxy on the address for docker network A. So, at least from a docker networking perspective, there is no way not to expose the proxy to user containers

Over the last few days I’ve been working on the correct iptables rules to limit ingress and egress traffic to and from the host, as well as inter-container communication. The details in my original post were actually discovered along the way.

With the default Docker iptables rules, communication between a user container and the kasm_proxy IP address on the same network is possible, as shown here:

default:~$ curl --insecure https://172.18.0.3
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.19.8</center>
</body>
</html>

Edit: Everything below was done on top of the port mapping configuration change in the original post (ports: ["10.128.0.5:443:443"]).

In order to make interface names deterministic for iptables rules, I first had to create a new Docker network with a hard-coded bridge interface name. Normally the interface name is derived from the randomly generated Docker network ID, for example br-c59ca0f3122f. Here I remove the original kasm_default_network and create the kasm1 network with the bridge interface name br-kasm1:

kasm@agent:~$ sudo docker network rm kasm_default_network
kasm_default_network
kasm@agent:~$ sudo docker network create --driver=bridge --subnet=172.18.0.0/16 -o "com.docker.network.bridge.name=br-kasm1" kasm1
0d728e5f8419aab94f89861b1361ad3e270023032638de12faea434ea2bde6db
kasm@agent:~$ ip addr ls | grep 'br-kasm1:'
220: br-kasm1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default

Next I updated the docker-compose.yaml to reference the new network:

version: '3'
services:
  kasm_agent:
    container_name: kasm_agent
...
    networks:
      - kasm1
...
  proxy:
    container_name: kasm_proxy
...
    networks:
      - kasm1
...
networks:
  kasm1:
    external: true

After the above changes, all Kasm services were brought up using the /opt/kasm/1.9.0/bin/start script.

Then I created the following bash script, which inserts iptables rules into the DOCKER-USER chain (Docker doesn’t touch rules in this chain):

#!/bin/bash

iptables -F DOCKER-USER
iptables -A DOCKER-USER -o ens192 -d 10.128.0.1 -p udp --dport 53 -j RETURN                                  # Allow user containers outbound to DNS on default gateway
iptables -A DOCKER-USER -i ens192 -s 10.128.0.4 -p tcp --dport 443 -j RETURN                                 # Allow kasm_manager inbound to kasm_proxy
iptables -A DOCKER-USER -i br-kasm1 -o br-kasm1 -s 172.18.0.3 -j RETURN                                      # Allow all traffic from kasm_proxy to user containers
iptables -A DOCKER-USER -i br-kasm1 -o br-kasm1 -d 172.18.0.3 -p tcp --sport 6901 -j RETURN                  # VNC X11 return traffic from user containers to kasm_proxy
iptables -A DOCKER-USER -i br-kasm1 -o br-kasm1 -d 172.18.0.3 -p tcp --sport 4901 -j RETURN                  # VNC audio return traffic from user containers to kasm_proxy
iptables -A DOCKER-USER -i br-kasm1 -o br-kasm1 -s 172.18.0.3 -d 172.18.0.2 -p tcp --dport 4444 -j RETURN    # kasm_proxy to kasm_agent
iptables -A DOCKER-USER -i br-kasm1 -o br-kasm1 -s 172.18.0.2 -d 172.18.0.3 -p tcp --sport 4444 -j RETURN    # kasm_agent to kasm_proxy return traffic
iptables -A DOCKER-USER -i ens192 -p tcp --dport 443 -j LOG --log-prefix "DOCKER:REJECT:HTTPS "
iptables -A DOCKER-USER -i ens192 -p tcp --dport 443 -j REJECT
iptables -A DOCKER-USER -d 10.128.0.0/24 -j LOG --log-prefix "DOCKER:REJECT:LOCAL "
iptables -A DOCKER-USER -d 10.128.0.0/24 -j REJECT
iptables -A DOCKER-USER -i br-kasm1 -o br-kasm1 -j LOG --log-prefix "DOCKER:REJECT:ICC "
iptables -A DOCKER-USER -i br-kasm1 -o br-kasm1 -j REJECT
iptables -A DOCKER-USER -j RETURN

The above rules keep kasm_manager, kasm_agent, and kasm_proxy working properly while: 1) allowing the user containers full outbound Internet access, 2) preventing those user containers from reaching each other (the DOCKER:REJECT:ICC rules), and 3) preventing the user containers from reaching any IPs on the Agent server host network 10.128.0.0/24 (the DOCKER:REJECT:LOCAL rules).

After running the above script, I can no longer curl kasm_proxy:

default:~$ curl --insecure https://172.18.0.3
curl: (7) Failed to connect to 172.18.0.3 port 443: Connection refused

On the Agent server, /var/log/syslog shows the traffic being rejected between containers:

Jul 13 10:48:19 agent kernel: [419691.400862] DOCKER:REJECT:ICC IN=br-kasm1 OUT=br-kasm1 PHYSIN=veth4f564c1 PHYSOUT=veth303c09c SRC=172.18.0.4 DST=172.18.0.3 PROTO=TCP SPT=42202 DPT=443 WINDOW=64240 RES=0x00 SYN URGP=0 
Jul 13 10:48:20 agent kernel: [419692.426888] DOCKER:REJECT:ICC IN=br-kasm1 OUT=br-kasm1 PHYSIN=veth4f564c1 PHYSOUT=veth303c09c SRC=172.18.0.4 DST=172.18.0.3 PROTO=TCP SPT=42202 DPT=443 WINDOW=64240 RES=0x00 SYN URGP=0
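The reject log lines can be summarized to keep an eye on what is being blocked. A minimal sketch, with one sample syslog line inlined so the parsing is self-contained; on the Agent server you would read from the real log with `grep 'DOCKER:REJECT' /var/log/syslog`:

```shell
# Sample line in the shape of the DOCKER:REJECT entries above.
# On a live host:  grep 'DOCKER:REJECT' /var/log/syslog
logline='Jul 13 10:48:19 agent kernel: [419691.400862] DOCKER:REJECT:ICC IN=br-kasm1 OUT=br-kasm1 SRC=172.18.0.4 DST=172.18.0.3 PROTO=TCP SPT=42202 DPT=443'

# Extract source address, destination address, and destination port.
src=$(echo "$logline" | grep -o 'SRC=[0-9.]*' | cut -d= -f2)
dst=$(echo "$logline" | grep -o 'DST=[0-9.]*' | cut -d= -f2)
dpt=$(echo "$logline" | grep -o 'DPT=[0-9]*'  | cut -d= -f2)
echo "rejected: $src -> $dst port $dpt"
```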

I’m not sure if this is something you guys plan on adding, but mentioning it in the documentation might be useful. I know that for my use case(s) it is a requirement.

Ian-Kasmweb commented 3 years ago

Original comment by Matt Mcclaskey (Bitbucket).


This is great stuff. We have looked into blocking inter-container communication before, but this provides a lot more fidelity. Unfortunately, the kasm_proxy IP address within each docker network is random. Though it could be hard-coded in the compose file, it is definitely not something we could program for in a universal way. Our clients have extremely complex networking configurations; some agents have 30+ docker networks, and the environment is constantly changing. It would not be at all practical to manage complex iptables rule sets for these situations. But I would agree that documentation on the subject would give people a way to achieve a higher level of security.
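For anyone who wants to script around the random per-network address, the proxy’s IP can be discovered rather than hard-coded. A minimal sketch, with a simplified `docker network inspect`-shaped JSON sample inlined so the parsing is self-contained; on a live host you would use something like `sudo docker network inspect kasm1 --format '{{json .Containers}}'` (real output has more fields per container, so a production script should use a proper JSON tool):

```shell
# Simplified sample in the shape of:
#   docker network inspect <net> --format '{{json .Containers}}'
containers='{"abc123":{"Name":"kasm_proxy","IPv4Address":"172.18.0.3/16"},"def456":{"Name":"kasm_agent","IPv4Address":"172.18.0.2/16"}}'

# Pull out kasm_proxy's address (dropping the /16 prefix length).
proxy_ip=$(echo "$containers" \
  | grep -o '"kasm_proxy"[^}]*"IPv4Address":"[0-9.]*' \
  | sed 's/.*"//')
echo "kasm_proxy is at $proxy_ip"
```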

Most likely, the universal method we adopt will involve NGINX configuration, such as SSL mutual cert-based auth for communications between agents and the manager components, blocking URL paths to the API services from user containers, and more. This may not meet your requirements if you need absolute layer 3 separation. Stay tuned; we will try to update this ticket when we have more information.

Ian-Kasmweb commented 2 years ago

Original comment by James Tervit (Bitbucket).


This is a massive help. I removed the server because the interface was open to the public side on my bare metal, which was a risk. Once I have more time I will come back to it.