jitsi-contrib / jitsi-helm

A helm chart to deploy Jitsi to Kubernetes
MIT License

Not able to enable Recording Service without Jibris deployed #69

Closed kpeiruza closed 1 year ago

kpeiruza commented 1 year ago

We have several Jitsi deployments sharing a common HPA-scaled Jibri deployment between all of them.

If you enable recording, you will always get at least one Jibri, even if you set replicaCount to 0: the number of replicas is always >= 1.

values.yaml:

    ## Enable multiple Jibri instances.
    ## If enabled (i.e. set to 2 or more), each Jibri instance
    ## will get an ID assigned to it, based on pod name.
    replicaCount: 0

Changing the default value from 1 to 0 in the helm template could fix this:

templates/jibri/deployment.yaml:

spec:
  replicas: {{ .Values.jibri.replicaCount | default 0 }}
  selector:

So, the deployment will respect the number of replicas specified in the helm chart, from 0 to N, instead of the current >= 1 behavior.
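For what it's worth, the root cause is likely Sprig's default function: it treats 0 as an "empty" value, so a template like replicas: {{ .Values.jibri.replicaCount | default 1 }} silently turns an explicit 0 back into 1. Since values.yaml already carries the fallback, one sketch of a fix (not the chart's actual code) is to drop the pipeline entirely:

```yaml
# templates/jibri/deployment.yaml (sketch, not the chart's actual code):
spec:
  # Sprig's `default` treats 0 as empty, so `| default 1` would coerce
  # an explicit replicaCount of 0 back to 1. Using the value directly
  # (with the fallback kept in values.yaml) respects any count from 0 to N:
  replicas: {{ .Values.jibri.replicaCount }}
```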

Thanks

spijet commented 1 year ago

Hello @kpeiruza!

Just to clarify, you want to use the chart with an external autoscaled Jibri deployment, right? If so, I'll need some time to come up with a proper way to enable the recording/streaming service and create an account for Jibri while not deploying any bundled Jibri instances. Or maybe come up with some kind of autoscaling for the in-chart Jibri as well. :)

kpeiruza commented 1 year ago

Hi @spijet ,

You got it right.

So, we need the Jibri account to be created while no Jibris are spawned.

Most users don't need any Jibri most of the time, so we share one HPA-scaled deployment connected to several different Jitsi instances (Prosodies), and for now we have to patch jitsi-helm with Kustomize to force 0 replicas.

Kind regards,

Kenneth

spijet commented 1 year ago

OK, I'll try to figure out a nice way to declare it in chart values and push an update. :)

kpeiruza commented 1 year ago

I've tried fixing that. Since the default value for jibri.replicas in values.yaml is 1, the chart keeps working as it currently does by default, but it will respect 0 if configured ;-)

IMHO that's quite a good approach, as you can always scale replicas up/down. The deployment will still exist, and I think that's okay.

spijet commented 1 year ago

In many cases it might be desirable to avoid creating an unused deployment and other stuff (like PVCs and services), so I'd rather add an option that allows skipping the creation of resources that are unneeded when using an external Jibri deployment. :)

kpeiruza commented 1 year ago

Agreed.

Then it's a bit more complicated, because you're basing the creation of the Jibri account on jibri.enabled == true.
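One hedged sketch of how that could be decoupled (the useExternalJibri flag is hypothetical, not an existing chart value): keep jibri.enabled for the account and recording config, and add a separate switch that suppresses the in-chart workload:

```yaml
# values.yaml (sketch; `useExternalJibri` is a made-up flag name):
jibri:
  enabled: true          # still creates the Jibri XMPP account + recording config
  useExternalJibri: true # skip rendering the bundled Deployment/PVCs/Services

# templates/jibri/deployment.yaml (sketch):
{{- if and .Values.jibri.enabled (not .Values.jibri.useExternalJibri) }}
# ... Deployment manifest ...
{{- end }}
```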

spijet commented 1 year ago

The changes introduced in c12a4fb should do the trick. :)

Can you please check if these work for your case and report back? I'll pack a new release if all goes well.

kpeiruza commented 1 year ago

It rocks! Thank you for closing these 2 issues so quickly.

Would it be possible to add some GitHub Actions so other people could offload you a bit by keeping the Docker image and Helm chart versions up to date?

Regenerating the package and regenerating the index should be automated, for your own sake.


Feel free to ask for help, one of our customers (Barcelona City Council) is willing to help the community, as they have 7 different Jitsis in Kubernetes, with our own Helm chart, and we're migrating them to this chart.

In any case, thank you for your good work :-)

spijet commented 1 year ago

I'm actually thinking about it, and there's also issue #14, which could be a hint at one of the possible destinations for automated chart packaging. Gotta carve out some time to actually do the research. :)

Feel free to ask for help, one of our customers (Barcelona City Council) is willing to help the community, as they have 7 different Jitsis in Kubernetes, with our own Helm chart, and we're migrating them to this chart.

Wow, this got way bigger than I ever anticipated! :D I started contributing to this chart some time ago to give back all the edits/fixes I made at my current job. Didn't even think I'd become a proper maintainer someday. I'm happy to hear that so many people use it daily and are (mostly) happy with it! 🧡

kpeiruza commented 1 year ago

Hahahahaha, The Internet can surprise you :-)

I made a Docker image 3 years ago and never really used it, but one day I saw >= 170K downloads on hub.docker.com, and then I found tutorials for using it, a Helm chart, and even an AWS Terraform deployment that uses it to run password-cracking farms, LOL.

Just so you know, we have 5 production deployments using jitsi-helm ATM, plus 10 more coming next week :-)

spijet commented 1 year ago

Sounds like quite an adventure. :)

Oh BTW, since you have many production deployments running already, may I ask you about how you manage the UDP traffic between the clients and JVB? Do you use any kind of a proxy/relay or do you just expose the JVB ports with NodePort or hostPort?

kpeiruza commented 1 year ago

Hi @spijet,

We use hostPort whenever possible, so we don't have as much trouble with STUN and we can run multiple JVBs. But some firewalls don't like "funny ports", so we always need to rely on coturn for those users. OFC, that only works when you have nodes exposed to the Internet, so in some deployments we simply scale JVB down to zero and run the JVBs on VMs, also because GKE and Azure don't let us fine-tune most of the network buffers as suggested by JVB's setup docs.

We use jitsi-helm at 100% only when we can tune the machines (i.e. kubeadm-based clusters where we have root access) or when the load is going to be low and fine-tuning doesn't really matter.

For high-load environments we mix K8s for "the core stack" and farms of JVB scaled with Terraform.
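For readers following along, the hostPort approach described above boils down to something like this in the JVB pod spec (a plain-Kubernetes sketch; JVB's default media port is UDP 10000):

```yaml
# Pod spec fragment (sketch): expose JVB's media port directly on the node,
# so clients reach the node IP on UDP 10000 without NodePort remapping.
containers:
  - name: jvb
    ports:
      - name: media
        containerPort: 10000
        hostPort: 10000
        protocol: UDP
```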

ATM we're deploying a Helm chart that uses this chart as a dependency. Ours extends it with some custom features for specific use cases.

I'm in charge of keeping our chart as close to this one as possible, so I'll be sending you some PRs to add a few extra features, like coturn support or additional UI configuration (i.e. branding).

E.g., in our chart we have a ConfigMap for Prosody's coturn config:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prosody-coturn
data:
  coturn.cfg.lua: |
    external_service_secret = "{{ .Values.coturn.secret }}";
    external_services = {
            { type = "turns", host = "{{ .Values.coturn.server }}", port = 443, transport = "tcp" , secret = true, ttl = 86400, algorithm = "turn"}
    };

Then we mount this coturn.cfg.lua inside Prosody at /defaults/conf.d/coturn.cfg.lua, load the smacks & externalservices modules in Prosody, and it just works :-)
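Assuming the ConfigMap above, the mount described here would look roughly like this in the Prosody pod spec (a sketch; container and volume names are illustrative):

```yaml
# Mount coturn.cfg.lua from the ConfigMap into Prosody's conf.d (sketch):
volumes:
  - name: prosody-coturn
    configMap:
      name: prosody-coturn
containers:
  - name: prosody
    volumeMounts:
      - name: prosody-coturn
        mountPath: /defaults/conf.d/coturn.cfg.lua
        subPath: coturn.cfg.lua
```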

IMHO, this chart could be extended so it loads a new coturn helm chart as a dependency if activated.

That way you'd get a fully functional, firewall-proof jitsi-meet with just one helm install. It's not something we actually need, but we'd be happy to help contribute it if you like the idea.
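Declaring an optional coturn dependency would follow the usual Helm conditional-dependency pattern, roughly like this (chart name, version, and repository URL are placeholders):

```yaml
# Chart.yaml (sketch): pull in coturn only when coturn.enabled is true.
dependencies:
  - name: coturn
    version: ">=0.1.0"                       # placeholder version
    repository: https://example.com/charts   # placeholder repo
    condition: coturn.enabled
```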

spijet commented 1 year ago

Thank you for all the useful info! :)

IMHO, this chart could be extended so it loads a new coturn helm chart as a dependency if activated.

Yep, that's what I want to do someday, since it might help a lot of users who have problems with UDP traffic. But, considering my laziness and constant lack of time nowadays, I thought that Jitsi devs might release an "official" prepared coturn image before I can figure out how to use it. :D Any help would be much appreciated!

kpeiruza commented 1 year ago

Yes, it makes sense to wait for such an image, as there's already a Debian package for jitsi-coturn.

Probably we can just create a simple Docker image based on jitsi-base, install coturn, and add an entrypoint script to configure and launch it.

I'd suggest getting rid of services.d there, as this is a single-service container, and spawning "service spawners" is usually an antipattern in the container world. With a bit of luck, they'll kill all the services.d setups except the ones in jibri and jvb at some point :-)


spijet commented 1 year ago

It certainly is an anti-pattern, but I've seen a couple of cases where a containerized app misbehaves and produces zombie processes over time (one notorious example is a Docker-in-Docker daemon with the GitLab CI executor attached to it). Usually starting such containers with --init fixes the problem. :)