thielj opened 5 months ago
@CommanderStorm as discussed last week
I think Portainer did that in the same way.
```bash
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
We may need a reason why it's OK for Portainer to do that, but not for Uptime Kuma.
Portainer, Yacht, etc. aren't really okay to do that either, and I'm proxying them too. However...

Portainer needs much wider access to the Docker API for creating containers, networks, etc. Without almost full access, it couldn't provide much functionality at all. My choice is between granting such access or not using Portainer at all.

Uptime Kuma, OTOH, only requires minimal access to check whether a container is running. There simply is no need to give a running Uptime Kuma container root access to the host through the Docker socket; needing to start it as root is already bad enough ;)
Admins hopefully secure and restrict access to the Portainer UI to internal networks, while it isn't unusual for Uptime Kuma to be fully exposed to the public (status pages, API, badges, etc.).

I have some hope that Portainer is spending some of their VC money on security audits.
If an app would ask you to enter your root password or supply an ssh key for a root shell, you would think twice before doing so. People don't realize that the very same applies to mounting the docker socket.
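To make that concrete: anyone (or any compromised container) that can talk to the socket can do something like this:

```bash
# Start a privileged container with the host's root filesystem mounted,
# then chroot into it: an interactive root shell on the host, obtained
# through nothing but the Docker socket.
docker -H unix:///var/run/docker.sock run --rm -it --privileged \
  -v /:/host alpine chroot /host /bin/sh
```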
No offense man, you're doing great otherwise!
I don't know how to set up this proxying. I think adding it as a tip to the docs, such as

> [!TIP]
> You can reduce your attack surface ...

=> I think this would be valuable.
You seem to have the whole thing figured out. Could you provide a PR to enhance https://github.com/louislam/uptime-kuma-wiki/blob/master/How-to-Monitor-Docker-Containers.md?
The API call currently used, `GET /containers/{id}/json`, exposes environment variables and a whole lot more.

Ideally, the use of the Docker API within Uptime Kuma would be restricted to `GET /_ping` for checking the connection parameters and a sanitized `GET /containers/{name}/json`.
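For illustration, assuming a proxy listening on localhost:2375 and a hypothetical container name, those two calls would look like this:

```bash
# Connection test: /_ping returns a plain "OK" and leaks nothing.
curl -s http://localhost:2375/_ping

# Status check for one named container; a sanitizing proxy would strip
# Env, Mounts and similar fields from this response before forwarding it.
curl -s http://localhost:2375/containers/my-container/json | jq '.State.Status'
```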
Happy to contribute a fully configured and locked down proxy container with setup information.
Try this, even if you're already proxying: `curl -s http://localhost:2375/containers/json | jq`

The response includes detailed information: labels used to provide passwords for auth middlewares, network information, credentials passed in commands, credentials used for CIFS/NAS mounts, mount paths, exact version information, etc. There currently isn't a Docker Engine API endpoint that exposes less information.
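To see concretely which fields leak, a filter like this (field names as in the Docker Engine API response) pulls out a few of the sensitive parts:

```bash
# Labels often carry auth-middleware secrets, Command can contain
# credentials passed on the command line, and Mounts reveals host paths.
curl -s http://localhost:2375/containers/json \
  | jq '.[] | {Names, Command, Labels, Mounts: [.Mounts[].Source]}'
```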
This should do it: https://github.com/thielj/docker-health-proxy/pkgs/container/docker-health-proxy
It currently leaves the `/containers/json` API enabled to simplify testing with the current Uptime Kuma release. I suggest replacing this with `/_ping`, which is also part of the official Docker Engine API but doesn't expose any sensitive information. You wouldn't get the 'number of containers' info though; if you think that's significant, that API call would need to be sanitized as well.
If you agree, I suggest deferring an update of the docs until `/_ping` has made it into an Uptime Kuma release and the `/containers/json` route has been nuked from the proxy.
I've built the final version (v1.0.0). It requires `/_ping` instead of `/containers/json?all=true` for the connection test. Or rather, you will get a 403 error when you hit Test, as it should be ;)

If you want to test against `/containers/json?all=true`, use v0.0.0.
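Assuming the proxy is reachable on port 2375, the HTTP status codes make it easy to verify which version you're running:

```bash
# v1.0.0: the ping endpoint is allowed, the container listing is refused.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:2375/_ping                       # expect 200
curl -s -o /dev/null -w '%{http_code}\n' 'http://localhost:2375/containers/json?all=true'  # expect 403
```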
Regarding the Docker remote setups you're recommending as "secure"... https://thehackernews.com/2024/06/commando-cat-cryptojacking-attacks.html
Thank you very much @thielj for the links and tests, very helpful.
I use this approach in my homelab. Might be useful for somebody, maybe :)

Note: this approach exposes too much; please also check the other comments below.
It uses Tecnativa/docker-socket-proxy, a proxy over your Docker socket that restricts which requests it accepts. The docker compose for it is as follows; use `http://docker-socket-proxy:2375` as the URL for the Docker TCP/HTTP connection type.
```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - ./data:/app/data
    expose:
      - 3001
    restart: always
    labels:
      - 'com.centurylinklabs.watchtower.enable=true'
    cpu_count: 1
    mem_limit: 512m
    security_opt:
      - no-new-privileges:true
    networks:
      - local-int
      - myint

  docker-socket-proxy:
    image: tecnativa/docker-socket-proxy
    container_name: docker-socket-proxy
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "2375"]
    restart: always
    environment:
      CONTAINERS: 1 # Allows access to /containers/*
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - local-int
    cpu_count: 1
    mem_limit: 512m
    security_opt:
      - no-new-privileges:true
    expose:
      - '2375'
    labels:
      - 'com.centurylinklabs.watchtower.enable=true' # you might NOT want to use watchtower with this one

networks:
  local-int:
  myint: # your reverse-proxy network
    external: true
```
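To see for yourself what `CONTAINERS: 1` exposes through this proxy, you can attach a throwaway curl container to the internal network (the network name below assumes a compose project called `myproject`; adjust to yours):

```bash
# The proxy is only reachable on the internal network, so query it from
# there; jq runs on the host. Expect the full, unsanitized container listing.
docker run --rm --network myproject_local-int curlimages/curl \
  -s http://docker-socket-proxy:2375/containers/json | jq
```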
@ColCh The tecnativa proxy exposes far too much information, including, for example, secrets passed in environment variables, command-line parameters, credentials for network mounts, etc.

The nginx-based proxy I created doesn't just restrict API methods; it also sanitizes the results down to what's absolutely necessary. See the README.

I had to use nginx because HAProxy doesn't support result filtering. You can audit the code if you wish; it's just a couple of lines. The memory footprint is very similar.
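As an illustration of what sanitizing means here (this is not the proxy's actual code, just the idea expressed with jq): reduce a full inspect response to the handful of fields a health check needs.

```bash
# Everything outside this whitelist (Env, Mounts, NetworkSettings, etc.)
# is dropped; Health is null for containers without a healthcheck.
curl -s http://localhost:2375/containers/uptime-kuma/json \
  | jq '{Id, Name, State: {Status: .State.Status, Health: .State.Health.Status}}'
```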
There's also a security advisory related to this where I show how a fresh installation of Uptime Kuma can be used to create a root shell through an exposed docker socket.
omg, you're right! really interesting
thank you so much for pointing that out, I will correct my local setup and that comment for sure
> I've built the final version (v1.0.0). It requires `/_ping` instead of `/containers/json?all=true` for the connection test. Or rather, you will get a 403 error when you hit Test, as it should be ;) If you want to test against `/containers/json?all=true`, use v0.0.0.
It might be worth noting that the "Test" button (inside Uptime Kuma's Docker Host settings) also triggers an API call which gets denied by the proxy. So go directly for "Save" after entering the connection info.
@shalafi99 That's intended, as the proxy also blocks the listing of all containers.
I suggested that the Test button use the official `/_ping` API call instead, which doesn't leak any information.
I have found these related issues/pull requests
Security Policy
Description
Access to the Docker socket is almost equivalent to a root shell, no matter whether the socket is mounted read-only or made available through a (SSL) network connection. Instead of the procedure suggested in the docs, a much better approach would be to expose the socket through a proxy that makes only the necessary read-only API available to Uptime Kuma over an internal Docker network.
I'm using Tecnativa/docker-socket-proxy for that purpose. See the docs there.
I'm deliberately reporting this as a "documentation bug" and not as a direct vulnerability in Uptime Kuma, since exploiting it requires the host system, ANY container, or any other system with access to the exposed socket to be compromised first. However, suggesting this setup to users who are probably unaware of the implications is simply bad practice, as it allows an attacker to immediately acquire root privileges.
Reproduction steps
Expected behavior
n/a
Actual Behavior
n/a
Uptime-Kuma Version
all versions
Operating System and Arch
Any Linux
Browser
n/a
Deployment Environment
Monitoring docker containers
Relevant log output
No response