Closed vorburger closed 6 months ago
Thanks for the report! Yeah #192 should solve this. We're kinda busy now, hopefully I can take a look at it and make more progress soon.
Actually I got confused and mixed something up myself... the problem here is likely NOT the use of root inside the container, and #192 may not help. This is only about the volumes in docker-compose.yml, and the :Z selinux suffix may be all that's needed here.
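For illustration, here is what the :Z suffix looks like on a volume; this is a rough sketch loosely modeled on the project's compose file, not a verbatim excerpt:

```yaml
# Sketch: the trailing :Z asks podman to relabel the host directory
# with a private SELinux label so the container may read/write it.
services:
  web:
    image: jitsi/web
    volumes:
      - ${CONFIG}/web:/config:Z
```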
I'll try this out (after I'm done with an initial basic set-up using docker instead of podman).
Oh, nice! Let us know how that goes! If you feel like it, you could share your podman-compose and add it here: https://github.com/jitsi/docker-jitsi-meet/tree/dev/examples
> share your podman-compose

That's the beauty of it: there is no podman-compose YAML, podman-compose can just use docker-compose.yml!
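In other words, the workflow is simply the following (a sketch, assuming an upstream docker-jitsi-meet checkout with its env.example):

```shell
# podman-compose reads the same docker-compose.yml that docker-compose does
cp env.example .env       # then fill in passwords etc.
podman-compose up -d      # bring the stack up, no Docker daemon needed
podman-compose down       # tear it down again
```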
With PR #204, and after sudo sysctl net.ipv4.ip_unprivileged_port_start=80, it goes much further, but the web container still doesn't entirely manage to start and now fails with:
[cont-init.d] 10-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31
Need to further investigate that some other time.
Nice! I think the problem might be that we use a user defined network to route traffic across containers using a FQDN that doesn’t exist: xmpp.meet.jitsi
The same errors are occurring in this issue: https://github.com/jitsi/docker-jitsi-meet/issues/254
@sapkra Why do you think those are related?
Just because the error is the same. Maybe it has the same reason...maybe not.
nginx: [emerg] invalid number of arguments in "proxy_pass" directive in /config/nginx/meet.conf:31
While the error is the same (the templating failed), it could have been triggered for unrelated reasons, I think.
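For context, this nginx message usually means a templated value expanded to an empty string, leaving proxy_pass without its argument. A hypothetical illustration (the location block and variable handling are assumptions, not an excerpt from the actual template):

```nginx
# If a value like XMPP_BOSH_URL_BASE is unset at render time, the
# generated meet.conf can end up with an empty proxy_pass:
location = /http-bind {
    proxy_pass ;   # <- "invalid number of arguments"
}
```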
I had the same issue and I resolved it by adding env_file. See my updated docker-compose.yml. Do not forget to clean the configuration directory before trying it.
You may also need to change the owner of jitsi-meet-cfg/prosody/data to deal with the non-root prosody daemon: podman unshare chown 101:102 jitsi-meet-cfg/prosody/data
@Tinigriffy I just tried that; now I get the error message
nginx: [emerg] host not found in upstream "xmpp.meet.jitsi" in /config/nginx/meet.conf:35
:(
Did you clean the configuration folder before running this version? Did you launch the yaml in the directory where the .env is? By the way, here is the very last version of the yaml I used: docker-compose.txt. It probably doesn't change anything compared to the previous version, but this is the one I really used to launch everything. With a rootless env everything started, but I could not have a working video call with 2 browsers open on the same server. Probably a nat/forwarding issue. So I have it running with a root env.
Ah, I see what I was missing: the extra hosts thing. I'll try this, thanks for the reply :)
Currently I am creating files to build podman images, in order to later run the containers with systemd. But when I run a prosody container, I get the following errors:
/var/run/s6/etc/cont-init.d/10-config: line 4: /usr/bin/tpl: Permission denied
...
/var/run/s6/etc/cont-init.d/10-config: line 30: /usr/bin/tpl: Permission denied
/var/run/s6/etc/cont-init.d/10-config: line 31: /usr/bin/tpl: Permission denied
What could be the reason for the Permission denied?
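One way to narrow this down (a sketch; the image name and flags are assumptions, adjust for your setup) is to inspect the execute bit and SELinux label of the binary inside the image, bypassing the normal s6 init:

```shell
# List /usr/bin/tpl inside the prosody image without running the
# regular entrypoint; -Z also prints the SELinux context
podman run --rm --entrypoint /bin/ls docker.io/jitsi/prosody -lZ /usr/bin/tpl
```

If the x bit is missing there, the problem is in the image itself rather than in your podman configuration.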
Just a heads-up: I created a full guide for this, as I was tired of not finding good information: http://tendie.haus/how-to-setup-a-basic-jitsi-instance-with-podman/
podman pod create --name jitsi \
  --add-host=meet.jitsi:127.0.0.1 \
  --add-host=jvb.meet.jitsi:127.0.0.1 \
  --add-host=jicofo.meet.jitsi:127.0.0.1 \
  --add-host=jigasi.meet.jitsi:127.0.0.1 \
  --add-host=xmpp.meet.jitsi:127.0.0.1 \
  --add-host=auth.meet.jitsi:127.0.0.1 \
  --add-host=muc.meet.jitsi:127.0.0.1 \
  --add-host=internal-muc.meet.jitsi:127.0.0.1 \
  --add-host=guest.meet.jitsi:127.0.0.1 \
  --add-host=recorder.meet.jitsi:127.0.0.1 \
  --add-host=etherpad.meet.jitsi:127.0.0.1
The easiest way to get up and running might be to create a pod. Pods have a single IP address, and communicating between containers in a pod is as easy as using localhost: no need to expose ports or configure dnsmasq DNS resolution. I've added host aliases above so that names used within jitsi resolve to localhost. You'll probably want to either --publish ports 80 and 443 OR join your pod to a CNI --network. When you create your other containers, join them to this pod.
Then a podman generate systemd --name --files jitsi will generate unit files for everything in the pod.
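Putting that together, the whole flow looks roughly like this (a sketch; container names, ports, and the abbreviated alias list are illustrative):

```shell
# Create the pod with the host aliases (abbreviated here) and a published port
podman pod create --name jitsi -p 8443:443 \
  --add-host=meet.jitsi:127.0.0.1 \
  --add-host=xmpp.meet.jitsi:127.0.0.1

# Join each container to the pod; they then talk over localhost
podman run -d --pod jitsi --name jitsi-prosody docker.io/jitsi/prosody
podman run -d --pod jitsi --name jitsi-web docker.io/jitsi/web

# Emit one unit file per container plus one for the pod itself
podman generate systemd --name --files jitsi
```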
I generally prefer to create my infrastructure in ansible, rather than via docker-compose, so I just did that. Maybe there is a way to do this with podman-compose too.
I would very much prefer it if these containers could be made to start without root. There is no reason why jicofo and friends need root. The nginx instance doesn't need root either if you already have a TLS proxy/gateway and don't want to bind to port 80. These images all appear to use the s6-overlay, however, and they won't start without root. It does take a little work up-front to get the SELinux and DAC permissions right on your host volumes... but if you do, many containers can be started with --user some_service_user --userns=keep-id or the like.
Last time I checked, inter-container DNS resolution is still not enabled by default on podman's default internal network. This causes problems between containers that connect to each other by name. You can podman network create a new one, and it will have dnsmasq DNS by default. Join to it with --network. Vexingly, nginx doesn't respect /etc/resolv.conf, and you usually need to provide a resolver configuration directive with the gateway IP of your --network. And that still seems iffy to me; I've had nginx continue to use stale IPs after container restarts. Pods are better.
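To make the resolver point concrete: assuming your podman network's gateway is 10.89.0.1 (check with podman network inspect), the nginx side would need something along these lines (a sketch; the hostname and port follow the jitsi defaults mentioned in this thread):

```nginx
# Force nginx to resolve the upstream name via the network's dnsmasq
# instead of caching one IP at startup
resolver 10.89.0.1 valid=10s;
location /http-bind {
    # Using a variable makes nginx re-resolve the name per request
    set $bosh http://xmpp.meet.jitsi:5280;
    proxy_pass $bosh;
}
```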
I followed the Quick start of the Self-Hosting Guide - Docker up to point 6 and managed to start a jitsi instance by passing extra arguments to podman-compose :tada:
podman-compose --podman-run-args "--env-file .env --add-host xmpp.meet.jitsi:127.0.0.1" up -d
tl;dr: For what it's worth, I went down the podman-compose up road yesterday and would like to wrap up what I have learned:
> nginx: [emerg] host not found in upstream "xmpp.meet.jitsi" in /config/nginx/meet.conf:35

> I had the same issue and I resolved it by adding env_file. See my updated docker-compose.yml. Do not forget to clean the configuration directory before trying it. [...]
If anyone likes to reproduce, this my approach using ansible: grrvs/ansible_podman-compose_jitsi
For anybody looking to do this now with rootless podman, there are a bunch of differences compared to the older answers, given newer versions of podman and podman-compose. The most notable ones show up in the compose file and .env below. The way I did it was using the following docker compose:
version: '3'
services:
# Frontend
web:
image: docker.io/jitsi/web:unstable
ports:
- '192.168.X.X:${HTTPS_PORT}:443'
volumes:
- ${CONFIG}/web:/config:Z
- ${CONFIG}/web/letsencrypt:/etc/letsencrypt:Z
- ${CONFIG}/transcripts:/usr/share/jitsi-meet/transcripts:Z
env_file:
- ./.env
# XMPP server
prosody:
image: docker.io/jitsi/prosody:unstable
expose:
- '5222'
- '5347'
- '5280'
volumes:
- ${CONFIG}/prosody:/config:Z
env_file:
- ./.env
# Focus component
jicofo:
image: docker.io/jitsi/jicofo:unstable
volumes:
- ${CONFIG}/jicofo:/config:Z
env_file:
- ./.env
depends_on:
- prosody
# Video bridge
jvb:
image: docker.io/jitsi/jvb:unstable
ports:
- '192.168.X.X:${JVB_PORT}:${JVB_PORT}/udp'
volumes:
- ${CONFIG}/jvb:/config:Z
env_file:
- ./.env
depends_on:
- prosody
And the relevant parts of the .env are:
# Exposed HTTPS port
HTTPS_PORT=42445
# System time zone
TZ=Australia/Perth
CONFIG=./config
XMPP_SERVER=prosody
XMPP_PORT=5222
XMPP_BOSH_URL_BASE=http://prosody:5280
JVB_PORT=10000
JVB_WS_SERVER_ID=jvb
# Public URL for the web service (required)
PUBLIC_URL=https://public-url-here
JVB_ADVERTISE_IPS=internal-ip,external-ip,listed-here
I had to dig through things a bit to figure some stuff out, but the reasoning is as follows:

The web container does not use the XMPP_SERVER env variable to build XMPP_BOSH_URL_BASE, so the reverse proxy (from web to prosody) doesn't work internally by default if you're not using xmpp.meet.jitsi as the hostname for prosody. Setting XMPP_BOSH_URL_BASE explicitly works around that.

Binding the JVB port to a specific address ('192.168.X.X:${JVB_PORT}:${JVB_PORT}/udp') is only relevant if you have multiple IPs on the host machine you're running podman on. I had packets coming in on one address and going out on the other, which broke things with JVB. Binding it to a specific IP fixed that.

My setup has a reverse proxy at the edge of my network handling SSL termination and reverse proxying to the web container, with port 10000 punched through the firewall going straight to my server. I've tested this with 2 devices connected internally and 2 connected externally, with screen sharing and 3 video streams. All working properly.
Just to add another recent data point: hosting Jitsi Meet with rootless podman and podman-compose worked for me out of the box. Following the self-hosting guide and using the default docker-compose.yml was enough.
On Debian Bookworm (stable) with podman 4.3.1, podman-compose 1.0.3 and an apache 2.4.57 reverse proxy, only the config by @alexmaras, plus s/unstable/stable-9258/g, s/192.168.X.X://g and COLIBRI_WEBSOCKET_REGEX=jvb, results in a working conference with working video in three tabs. However, the first WebSocket connection from Firefox 123 to jvb fails with an error about the bridge channel being disconnected, despite netcat saying UDP port 10000 is open. Then it retries the connection and seems to succeed. I didn't find anything relevant in podman, apache or console logs.
> However, the first WebSocket connection from Firefox 123 to jvb fails with an error about the bridge channel being disconnected, despite netcat saying UDP port 10000 is open. Then it retries the connection and seems to succeed. I didn't find anything relevant in podman, apache or console logs.
Interesting. Is this reproducible? Is it just the first connection after the bridge is restarted, or the first connection any time you open a conference in firefox? The WebSocket to JVB uses TLS/443, not UDP/10000 (not sure exactly how it's routed with the default docker/podman setup), but I don't see why it would fail initially and then succeed.
I see the mentioned bridge error message during every conference, at the moment the other person joins. According to the Network tab of the inspector, the first request to /colibri-ws/jvb/ gets ns_error_websocket_connection_refused (on the TLS port indeed, I now see it matches a 502 status code in Apache logs), then another request gets 101 Switching Protocols and continues. Here are my virtual host lines for the reverse proxy:
SSLProxyEngine on
ProxyPreserveHost on
ProxyTimeout 900
ProxyPass / https://localhost:8443/ upgrade=websocket
ProxyPassReverse / https://localhost:8443/
Replacing the upgrade=websocket with a separate ProxyPass for /xmpp-websocket (wss) and another one for /colibri-ws/, followed by reloading the apache2 service, results in the same behavior. Regarding how reproducible the environment is: not very; it's a Debian server manually configured over SSH, with unattended upgrades enabled for latest stable, no third-party repositories added system-wide, running a few other rootless podman-compose and Node.js apps.
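For reference, the split-ProxyPass variant described above would look roughly like this (a sketch; paths follow the default Jitsi web config, and ordering matters since mod_proxy matches the most specific prefix first):

```apache
SSLProxyEngine on
ProxyPreserveHost on
# WebSocket endpoints proxied explicitly over wss
ProxyPass /xmpp-websocket wss://localhost:8443/xmpp-websocket
ProxyPass /colibri-ws/ wss://localhost:8443/colibri-ws/
# Everything else over plain HTTPS
ProxyPass / https://localhost:8443/
ProxyPassReverse / https://localhost:8443/
```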
I personally prefer using https://podman.io instead of Docker (because containers don't run as root on the host, although you can still be root inside the container). FYI, they have a podman-compose which, in my experience, is reasonably compatible with docker-compose. But when I tried that with this project, I noticed that its images don't yet work with Podman instead of Docker. The web container, for example, failed with the error below (I hadn't even checked the others). The short-term workaround is, of course, to just use Docker instead of Podman for now, but I thought I'd at least let you know about this by filing this issue here.
Glancing over this project, I've noticed open PRs #192 and #126; it's possible they help with this.
@saghul more of an FYI