Closed — Lillyyouwu closed this issue 1 year ago
Hello @Lillyyouwu!
Sorry for the late reply. Can you please share the output of `kubectl -n jitsi describe pod myjitsi-prosody-0` with us? I'm almost sure that the cause of the "Pending" status will be mentioned somewhere in there.
One possible reason: since Prosody is a StatefulSet, it requires some kind of persistent storage to work properly. If you don't have any dynamic storage provisioner (which I assume you don't, since I don't see one in your pod list) and you didn't set up a static PV beforehand, Prosody will be stuck in "Pending", waiting for persistent storage.
Thank you for replying to me! :D The relevant output for my `prosody-0` is:
```
root@k8s-master:/home/raccoon/myjit# kubectl -n jitsi get pod myjitsi-prosody-0
NAME                READY   STATUS    RESTARTS   AGE
myjitsi-prosody-0   0/1     Pending   0          4m48s

root@k8s-master:/home/raccoon/myjit# kubectl get pvc -A
NAMESPACE   NAME                             STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jitsi       prosody-data-myjitsi-prosody-0   Pending                                                     10m

root@k8s-master:/home/raccoon/myjit# kubectl describe pvc prosody-data-myjitsi-prosody-0 -n jitsi
Name:          prosody-data-myjitsi-prosody-0
Namespace:     jitsi
StorageClass:
Status:        Pending
Volume:
Labels:        app.kubernetes.io/instance=myjitsi
               app.kubernetes.io/name=prosody
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       myjitsi-prosody-0
Events:
  Type    Reason         Age                 From                         Message
  ----    ------         ----                ----                         -------
  Normal  FailedBinding  77s (x42 over 11m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
```
As you said, it's a problem with persistent storage. I also noticed the same situation in #63, but I don't know how to fix it. Do I have to declare something in `values.yaml`, or should I install a dynamic storage provisioner? (Sorry for these basic questions TAT, I am very new to k8s, just trying to get Jitsi working.)

Also, I am planning to deploy Jitsi on KubeEdge later; in that case, which topology do you recommend? ouo
There are three ways to solve this problem:

1. Install a dynamic storage provisioner, e.g. local-path-provisioner, available on GitHub.
2. Set up a static PV beforehand that Prosody's PVC can bind to.
3. Disable Prosody persistence by setting `.Values.prosody.persistence.enabled` to `false`, like this:

```yaml
# values.yaml
prosody:
  persistence:
    enabled: false
```
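For the static-PV route (Option 2), a minimal `hostPath` PersistentVolume could look roughly like this. The name, capacity, and path below are assumptions, not values from the chart; `hostPath` is only suitable for test or single-node setups:

```yaml
# pv-prosody.yaml -- hypothetical static PV for the Prosody PVC.
# Capacity and access mode just need to satisfy the chart's claim.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prosody-data-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/prosody
```

Since your claim shows an empty `StorageClass`, a PV without a `storageClassName` can bind to it once capacity and access mode match.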
Personally, I'd go with Option 1 and set up a proper persistent storage provisioner, especially considering that you have a two-node cluster.
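If you go with Option 1, local-path-provisioner is typically installed with a single `kubectl apply` and then marked as the default StorageClass. The manifest URL below is taken from the project's README — verify it against the current docs before applying:

```shell
# Install local-path-provisioner (check the project's README for the
# current manifest URL / release tag).
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# Mark its StorageClass as the default, so PVCs created without an
# explicit storageClassName get provisioned automatically.
kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

Note that an already-Pending PVC won't pick up the default class retroactively; you may need to delete the PVC (or reinstall the chart) so it gets re-created.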
As for the topology, unfortunately, I don't have any experience with KubeEdge (yet), so I'll try to give you advice based on my general experience. If you don't plan to have a ton of users (more than 50) chatting simultaneously, you'll be fine running Jitsi on a single node, provided that you can dedicate that node to Jitsi specifically.
Another good thing to keep in mind is to deploy Jitsi geographically close to where most of your users live. For example, if most of your userbase lives in Southeast Asia, deploy it there or as close as possible, so that your users' RTT stays low.
Hi, I am still trying to make my Jitsi work. I deployed it with this `values.yaml` file, just trying to make it work inside the cluster. I can reach the web page via port-forwarding, but I can't connect to it via the cluster IP of `myjitsi-jitsi-meet-web`. Is that normal? I deployed Jitsi on KubeEdge with the same `values.yaml`, and there I can reach the web page via the ClusterIP just fine.
```yaml
publicURL: "meet.raccoon.com"

jvb:
  useHostPort: true
  # Use public IP of one (or more) of your nodes,
  # or the public IP of an external LB:
  publicIPs:
    - 192.168.186.180

prosody:
  persistence:
    enabled: false
```
As for the NodePort strategy, is there still an additional service needed for jvb/web, like in #63? Or is just creating a corresponding Ingress enough?

Thank you soooo much for getting back to me. I'm trying out your advice, still looking for what works for me > <
Yes, this is normal. `ClusterIP` services are only reachable from inside the cluster (i.e. from member nodes and pods), not from outside users.

As for the connectivity issues, there are two main points:

1. You need to expose the `jitsi-web` pod to the outside world. You can do it either via an Ingress, or with a `NodePort` service that points to the `jitsi-web` pod.
2. You need to expose JVB's media port (UDP) so that participants can exchange audio and video; your `useHostPort: true` setting already takes care of that.

I'm going to close the issue for now. Please let me know if you have any problems with your Jitsi Meet installation.
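A `NodePort` service for the web pod could look roughly like this. The selector labels below are assumptions — check the actual labels on your web pod with `kubectl -n jitsi get pods --show-labels` and match them:

```yaml
# web-nodeport.yaml -- hypothetical NodePort service for the Jitsi web pod.
apiVersion: v1
kind: Service
metadata:
  name: jitsi-web-nodeport
  namespace: jitsi
spec:
  type: NodePort
  selector:
    # Assumed labels; replace with your web pod's real labels.
    app.kubernetes.io/instance: myjitsi
    app.kubernetes.io/name: jitsi-meet-web
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # any free port in the 30000-32767 range
```

With a service like this, the web UI would be reachable at `http://<node-ip>:30080` from outside the cluster.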
Hi, I am trying to deploy Jitsi in a LAN with this `values.yaml`. I am using Kubernetes v1.22.1 and Helm v3.9.4. My cluster has one master node and one agent node, and I ran this:

But I found my `prosody-0` pod is always Pending, and my `jvb` pod keeps restarting. I have these pods:

The logs for `jvb` are:

I don't know how to solve it, probably because of my bad `values.yaml`... Can anyone help with this? Sincerely thanks! T T