Closed — kpeiruza closed this issue 3 years ago
Hi @kpeiruza, I'm trying to use your jibri + pulseaudio image in my jibri.yml config as follows:
version: '3.5'
services:
    jibri:
        image: kpeiruza/jibri:v12
        restart: ${RESTART_POLICY}
        volumes:
            - ${CONFIG}/jibri:/config:Z
            - /dev/shm:/dev/shm
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE
        environment:
            - XMPP_AUTH_DOMAIN
            - XMPP_INTERNAL_MUC_DOMAIN
            - XMPP_RECORDER_DOMAIN
            - XMPP_SERVER
            - XMPP_DOMAIN
            - JIBRI_XMPP_USER
            - JIBRI_XMPP_PASSWORD
            - JIBRI_BREWERY_MUC
            - JIBRI_RECORDER_USER
            - JIBRI_RECORDER_PASSWORD
            - JIBRI_RECORDING_DIR
            - JIBRI_FINALIZE_RECORDING_SCRIPT_PATH
            - JIBRI_STRIP_DOMAIN_JID
            - JIBRI_LOGS_DIR
            - DISPLAY=:0
            - TZ
        depends_on:
            - jicofo
        networks:
            meet.jitsi:

networks:
    meet.jitsi:
        external:
            name: custom_jitsi_network
But recording is failing. With the jitsi/jibri:latest image it works fine. Could you advise on how to find and fix the problem?
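For reference, if I'm reading the stock jibri.yml from docker-jitsi-meet correctly, the main ALSA-specific piece it carries (and which I assume the pulseaudio image is meant to drop) is the host sound-device mapping, roughly:

# Illustrative excerpt only (from memory): the stock ALSA-based jibri service
# maps the host sound devices and relies on snd-aloop being loaded on the host.
services:
    jibri:
        image: jitsi/jibri:latest
        devices:
            - /dev/snd:/dev/snd
        cap_add:
            - SYS_ADMIN
            - NET_BIND_SERVICE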
This has nothing to do with jibri. While I appreciate the feedback, and we do hope to move jibri to pulseaudio eventually, this project is about autoscaling instances in different clouds for more than just Jibri, so it isn't the appropriate venue for this issue. If you haven't already, please open an issue in the jibri project.
@aaronkvanmeerten It looks like you don't know much about Kubernetes.
This has nothing to do with autoscaling.
Please get familiar with how GKE, AKS or Amazon's EKS work before closing this thread on those grounds.
Jibri as currently shipped by the Jitsi team can't run on managed Kubernetes, because there is no way to load snd-aloop: those services run each provider's custom Linux kernel, and the snd-aloop module simply isn't there.
So you have a Docker image that can't run on today's standard for computing and the #1 solution for container orchestration: Kubernetes.
Make PulseAudio available and it will be possible to run Jibri on Kubernetes (see the sketch below for the kernel-module gap). Please reopen this issue.
PS: community.jitsi.org has had a thread on this topic for almost a year.
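To make the kernel-module gap concrete, here is a minimal, purely illustrative DaemonSet (name and image are hypothetical) that tries to load snd-aloop on every node; on GKE/AKS/EKS node images the module isn't shipped, so modprobe fails:

# Hypothetical sketch: try to load snd-aloop on each node.
# On managed node images the module is missing from the host's /lib/modules,
# so modprobe exits non-zero and the pod never succeeds.
apiVersion: apps/v1
kind: DaemonSet
metadata:
    name: snd-aloop-loader              # hypothetical name
spec:
    selector:
        matchLabels:
            app: snd-aloop-loader
    template:
        metadata:
            labels:
                app: snd-aloop-loader
        spec:
            volumes:
                - name: modules
                  hostPath:
                      path: /lib/modules    # host kernel modules
            containers:
                - name: modprobe
                  image: alpine:3.12        # any small image with busybox modprobe
                  securityContext:
                      privileged: true      # required to load kernel modules
                  volumeMounts:
                      - name: modules
                        mountPath: /lib/modules
                        readOnly: true
                  command: ["modprobe", "snd-aloop"]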
I fully agree we need to move jibri to pulse. My point is that the project this issue was filed against is the autoscaler project, which is intended to scale VMs based on utilization, and therefore isn't the appropriate venue for this issue.
I see you are frustrated regarding jibri and I can respect that. Running jibri in Docker or k8s isn't our team's primary use case for it, but that doesn't mean we don't want to support it, simply that we haven't had time to put cycles towards it. We hope to spend time on it this year, but again, this isn't the right project to be discussing any of it.
Hi @aaronkvanmeerten you're right about Jitsi-autoscaler and this requirement. My apologies!
Hi,
We've been using jibri + pulseaudio for all our streams & recordings for nearly 6 months. So far it has been exercised more than 2,000 times, and we run 30+ tests every day. It simply works better than ALSA.
This made it trivial to scale Jibri on Kubernetes: we scale on CPU consumption, so Kubernetes' HPA does the work for us (sketch below).
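A minimal sketch of that CPU-based setup, assuming a Deployment named jibri in a jitsi namespace (names and thresholds are hypothetical):

# Hypothetical HPA scaling a jibri Deployment on average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
    name: jibri
    namespace: jitsi
spec:
    scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: jibri
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 60    # scale out when average CPU exceeds 60%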
We're planning to feed Prometheus with Jibri statuses so we can scale in an even cleaner and more precise way.
Feel free to share your thoughts and let's team up to get it up and running :-)
PS: you can try kpeiruza/jibri:v12; it has pulseaudio already bundled plus a few tweaks to jibri so that it launches ffmpeg against PulseAudio instead of ALSA.