minhpq331 / jitsi-scalable-helm

Scalable jitsi helm chart
MIT License

unable to create JVB #1

Closed: bharath-naik closed this issue 2 years ago

bharath-naik commented 2 years ago

hi @minhpq331 … thanks for your contributions to Jitsi scaling; it is a difficult topic. I am trying to set this up on a single cluster in GKE with two shards. Unfortunately, the JVB services are not getting created. If you could, please guide me through it; I am probably missing something small that I am unable to spot. My values file:

# Default values for jitsi.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

nameOverride: "socialhour"
fullnameOverride: "socialhour"

# Global configs which are used by all components
global:
  # Internal XMPP domain configs
  xmpp:
    domain: socialhour.vewmet.com
    authDomain:
    mucDomain:
    internalMucDomain:
    guestDomain:

  # Jitsi Docker environment. For more options check out .env.example
  # from this repo: https://github.com/jitsi/docker-jitsi-meet
  env: 
    JVB_WS_DOMAIN: ws.socialhour.vewmet.com
    PUBLIC_URL: https://socialhour.vewmet.com

    ENABLE_AUTH: 0
    ENABLE_GUESTS: 0
    ENABLE_COLIBRI_WEBSOCKET: 1
    GLOBAL_CONFIG: statistics = "internal";\nstatistics_interval = 15;
    GLOBAL_MODULES: prometheus,measure_stanza_counts,measure_client_presence
    XMPP_MUC_MODULES: muc_meeting_id,muc_domain_mapper

    JIBRI_BREWERY_MUC: jibribrewery
    JIBRI_RECORDER_USER: recorder
    JIBRI_XMPP_USER: jibri

    JICOFO_AUTH_USER: focus

    JIGASI_BREWERY_MUC: jigasibrewery
    JIGASI_XMPP_USER: jigasi

    JVB_AUTH_USER: jvb
    JVB_BREWERY_MUC: jvbbrewery
    JVB_ENABLE_APIS: colibri,rest

    TZ: UTC

    # For more environment variables please check the docker repo: https://github.com/jitsi/docker-jitsi-meet
    # JWT_APP_ID: my_jitsi_app_id

  # Same as above but for storing secrets
  secretEnvs: 
    JIBRI_RECORDER_PASSWORD: 82f430da23d14eeb0081d09dc97d8c20
    JIBRI_XMPP_PASSWORD: 525335a43b440b1f2b318c18df9fb4b9
    JICOFO_AUTH_PASSWORD: 4832aeb9b3dab85406652f49a6111f00
    JIGASI_XMPP_PASSWORD: e65c5af98e252eaca2587a48a01316f6
    JVB_AUTH_PASSWORD: 84c7b6df5d50289d91d1e8785f218bf1
    JVB_STUN_SERVERS: meet-jit-si-turnrelay.jitsi.net:443

    # Any env var containing a secret should be placed here. Check the docker repo for more env vars: https://github.com/jitsi/docker-jitsi-meet
    # JWT_APP_SECRET: my_jitsi_app_secret

  # Shard configs. To add a new shard, just duplicate this `shard-0` block and rename it to shard-1, shard-2,...
  shards:
    shard-0:
      # JVB specific shard configs
      xmppServer: # Overrides XMPP_SERVER for jvb on this shard
      jvbBasePort: 30000 # Base nodeport for jvb on this shard
      jvbWebsocketBasePorts: # Base nodeport to expose colibri websocket
        http:
        # admin:

      # Prosody specific shard configs
      prosodyNodePorts: # Assign nodeport to expose prosody
        bosh-insecure:
        xmpp-c2s:
        xmpp-component:
        # bosh-secure:
        # xmpp-s2s:

    shard-1:
      # JVB specific shard configs
      xmppServer: # Overrides XMPP_SERVER for jvb on this shard
      jvbBasePort: 31000 # Base nodeport for jvb on this shard
      jvbWebsocketBasePorts: # Base nodeport to expose colibri websocket
        http:
        # admin:

      # Prosody specific shard configs
      prosodyNodePorts: # Assign nodeport to expose prosody
        bosh-insecure:
        xmpp-c2s:
        xmpp-component:
        # bosh-secure:
        # xmpp-s2s:

# Jicofo specific configurations
jicofo:
  enabled: true
  image: 
    repository: jitsi/jicofo
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
  imagePullSecrets: []
  podAnnotations: {}

  resources: {}
    # limits:
    #   memory: 400Mi
    #   cpu: 400m
    # requests:
    #   memory: 400Mi
    #   cpu: 400m
  nodeSelector: {}
  tolerations: []
  affinity: {}

  # Allowed environment variables (including normal and secret envs) to pass to the jicofo pod
  # Usually there is no need to change this
  allowedEnv:
    - AUTH_TYPE
    - BRIDGE_AVG_PARTICIPANT_STRESS
    - BRIDGE_STRESS_THRESHOLD
    - ENABLE_AUTH
    - ENABLE_AUTO_OWNER
    - ENABLE_CODEC_VP8
    - ENABLE_CODEC_VP9
    - ENABLE_CODEC_H264
    - ENABLE_OCTO
    - ENABLE_RECORDING
    - ENABLE_SCTP
    - JICOFO_AUTH_USER
    - JICOFO_AUTH_PASSWORD
    - JICOFO_CONF_INITIAL_PARTICIPANT_WAIT_TIMEOUT
    - JICOFO_CONF_SINGLE_PARTICIPANT_TIMEOUT
    - JICOFO_SHORT_ID
    - JICOFO_RESERVATION_ENABLED 
    - JICOFO_RESERVATION_REST_BASE_URL 
    - JIBRI_BREWERY_MUC
    - JIBRI_REQUEST_RETRIES
    - JIBRI_PENDING_TIMEOUT
    - JIGASI_BREWERY_MUC
    - JIGASI_SIP_URI
    - JVB_BREWERY_MUC
    - MAX_BRIDGE_PARTICIPANTS
    - OCTO_BRIDGE_SELECTION_STRATEGY
    - TZ

# Prosody specific configurations
prosody:
  enabled: true
  image: 
    repository: jitsi/prosody
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
  imagePullSecrets: []
  podAnnotations: {}

  service:
    type: ClusterIP
    annotations: {}
    ports:
      bosh-insecure: 5280
      xmpp-c2s: 5222
      xmpp-component: 5347
      # xmpp-s2s: 5269
      # bosh-secure: 5281

  # Inject your custom files (configs, plugins,...) into the prosody container
  extraFiles: []
    # - name: some-plugin.lua
    #   mountPath: /prosody-plugins-custom/some-plugin.lua
    #   mode: 0400
    #   content: |
    #     module:set_global();

  resources: {}
    # limits:
    #   memory: 300Mi
    #   cpu: 300m
    # requests:
    #   memory: 300Mi
    #   cpu: 300m
  nodeSelector: {}
  tolerations: []
  affinity: {}

  # Allowed environment variables (including normal and secret envs) to pass to the prosody pod
  # Usually there is no need to change this
  allowedEnv:
    - AUTH_TYPE
    - ENABLE_AUTH
    - ENABLE_GUESTS
    - ENABLE_LOBBY
    - ENABLE_AV_MODERATION
    - ENABLE_XMPP_WEBSOCKET
    - GLOBAL_MODULES
    - GLOBAL_CONFIG
    - LDAP_URL
    - LDAP_BASE
    - LDAP_BINDDN
    - LDAP_BINDPW
    - LDAP_FILTER
    - LDAP_AUTH_METHOD
    - LDAP_VERSION
    - LDAP_USE_TLS
    - LDAP_TLS_CIPHERS
    - LDAP_TLS_CHECK_PEER
    - LDAP_TLS_CACERT_FILE
    - LDAP_TLS_CACERT_DIR
    - LDAP_START_TLS
    - XMPP_MODULES
    - XMPP_MUC_MODULES
    - XMPP_INTERNAL_MUC_MODULES
    - XMPP_CROSS_DOMAIN
    - JICOFO_COMPONENT_SECRET
    - JICOFO_AUTH_USER
    - JICOFO_AUTH_PASSWORD
    - JVB_AUTH_USER
    - JVB_AUTH_PASSWORD
    - JIGASI_XMPP_USER
    - JIGASI_XMPP_PASSWORD
    - JIBRI_XMPP_USER
    - JIBRI_XMPP_PASSWORD
    - JIBRI_RECORDER_USER
    - JIBRI_RECORDER_PASSWORD
    - JWT_APP_ID
    - JWT_APP_SECRET
    - JWT_ACCEPTED_ISSUERS
    - JWT_ACCEPTED_AUDIENCES
    - JWT_ASAP_KEYSERVER
    - JWT_ALLOW_EMPTY
    - JWT_AUTH_TYPE
    - JWT_TOKEN_AUTH_MODULE
    - LOG_LEVEL
    - PUBLIC_URL
    - TURN_CREDENTIALS
    - TURN_HOST
    - TURNS_HOST
    - TURN_PORT
    - TURNS_PORT
    - TZ

# Web specific configurations
web:
  enabled: true
  image: 
    repository: jitsi/web
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
  imagePullSecrets: []
  podAnnotations: {}

  service:
    type: ClusterIP
    annotations: {}
    port: 80
    # nodePort: 32080

  # Only works if you have 1 shard. In that case you can disable haproxy and expose the web service directly
  ingress:
    enabled: false
    className: ""
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: socialhour.vewmet.com
        paths:
          - path: /
            pathType: ImplementationSpecific
    tls: []
    #  - secretName: chart-example-tls
    #    hosts:
    #      - socialhour.vewmet.com

  # Inject your custom files (configs, webpages,...) into the web container
  extraFiles: []
    # - name: plugin.head.html
    #   mountPath: /usr/share/jitsi-meet/plugin.head.html
    #   mode: 0400
    #   content: |
    #     <style>
    #     .welcome .welcome-watermark{position:absolute;width:100%;height:auto}
    #     #footer{margin-top:20px;margin-bottom:20px;font-size:14px}
    #     </style>

  resources: {}
    # limits:
    #   memory: 300Mi
    #   cpu: 400m
    # requests:
    #   memory: 300Mi
    #   cpu: 400m
  nodeSelector: {}
  tolerations: []
  affinity: {}

  # Allowed environment variables (including normal and secret envs) to pass to the web pod
  # Usually there is no need to change this
  allowedEnv:
    - ENABLE_COLIBRI_WEBSOCKET
    - ENABLE_FLOC
    - ENABLE_XMPP_WEBSOCKET
    - ENABLE_HTTP_REDIRECT
    - DISABLE_DEEP_LINKING
    - PUBLIC_URL
    - TZ
    - AMPLITUDE_ID
    - ANALYTICS_SCRIPT_URLS
    - ANALYTICS_WHITELISTED_EVENTS
    - CALLSTATS_CUSTOM_SCRIPT_URL
    - CALLSTATS_ID
    - CALLSTATS_SECRET
    - CHROME_EXTENSION_BANNER_JSON
    - CONFCODE_URL
    - CONFIG_EXTERNAL_CONNECT
    - DEFAULT_LANGUAGE
    - DEPLOYMENTINFO_ENVIRONMENT
    - DEPLOYMENTINFO_ENVIRONMENT_TYPE
    - DEPLOYMENTINFO_REGION
    - DEPLOYMENTINFO_USERREGION
    - DIALIN_NUMBERS_URL
    - DIALOUT_AUTH_URL
    - DIALOUT_CODES_URL
    - DROPBOX_APPKEY
    - DROPBOX_REDIRECT_URI
    - DYNAMIC_BRANDING_URL
    - ENABLE_AUDIO_PROCESSING
    - ENABLE_AUTH
    - ENABLE_CALENDAR
    - ENABLE_FILE_RECORDING_SERVICE
    - ENABLE_FILE_RECORDING_SERVICE_SHARING
    - ENABLE_GUESTS
    - ENABLE_LIPSYNC
    - ENABLE_NO_AUDIO_DETECTION
    - ENABLE_P2P
    - ENABLE_PREJOIN_PAGE
    - ENABLE_WELCOME_PAGE
    - ENABLE_CLOSE_PAGE
    - ENABLE_RECORDING
    - ENABLE_REMB
    - ENABLE_REQUIRE_DISPLAY_NAME
    - ENABLE_SIMULCAST
    - ENABLE_STATS_ID
    - ENABLE_STEREO
    - ENABLE_SUBDOMAINS
    - ENABLE_TALK_WHILE_MUTED
    - ENABLE_TCC
    - ENABLE_TRANSCRIPTIONS
    - ETHERPAD_PUBLIC_URL
    - ETHERPAD_URL_BASE
    - GOOGLE_ANALYTICS_ID
    - GOOGLE_API_APP_CLIENT_ID
    - INVITE_SERVICE_URL
    - JICOFO_AUTH_USER
    - MATOMO_ENDPOINT
    - MATOMO_SITE_ID
    - MICROSOFT_API_APP_CLIENT_ID
    - NGINX_RESOLVER
    - NGINX_WORKER_PROCESSES
    - NGINX_WORKER_CONNECTIONS
    - PEOPLE_SEARCH_URL
    - RESOLUTION
    - RESOLUTION_MIN
    - RESOLUTION_WIDTH
    - RESOLUTION_WIDTH_MIN
    - START_AUDIO_ONLY
    - START_AUDIO_MUTED
    - START_WITH_AUDIO_MUTED
    - START_SILENT
    - DISABLE_AUDIO_LEVELS
    - ENABLE_NOISY_MIC_DETECTION
    - START_BITRATE
    - DESKTOP_SHARING_FRAMERATE_MIN
    - DESKTOP_SHARING_FRAMERATE_MAX
    - START_VIDEO_MUTED
    - START_WITH_VIDEO_MUTED
    - TESTING_CAP_SCREENSHARE_BITRATE
    - TESTING_OCTO_PROBABILITY
    - TOKEN_AUTH_URL

# HAproxy specific configurations, used to load balance between shards
haproxy:
  enabled: true
  image: 
    repository: haproxy
    pullPolicy: IfNotPresent
    tag: ""
  imagePullSecrets: []
  podAnnotations: {}

  service:
    type: ClusterIP
    annotations: {}
    port: 80
    # nodePort: 32080

  # Expose this as your web domain
  ingress:
    enabled: true
    className: ""
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: socialhour.vewmet.com
        paths:
          - path: /
            pathType: ImplementationSpecific
    tls:
     - secretName: socialhour-tls 
       hosts:
         - socialhour.vewmet.com

  resources: {}
    # limits:
    #   memory: 300Mi
    #   cpu: 400m
    # requests:
    #   memory: 300Mi
    #   cpu: 400m
  nodeSelector: {}
  tolerations: []
  affinity: {}

# JVB specific configurations
jvb:
  enabled: true

  # Number of JVBs for each shard if you don't want autoscaling
  replicaCount: 1

  image:
    repository: jitsi/jvb
    pullPolicy: IfNotPresent
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""

  imagePullSecrets: []

  # JVB statefulset annotations, used to interact with metacontroller if you like
  statefulSetAnnotations: {}
    # service-per-pod-label: "statefulset.kubernetes.io/pod-name"

  podAnnotations: {}

  # Directly expose the UDP port as a NodePort.
  udp:
    service:
      enabled: true
      annotations: {}

  # Expose websocket
  websocket:
    service:
      enabled: true
      type: ClusterIP
      annotations: {}
      ports:
        http: 9090
        # admin: 8080

    # Enable this ingress to auto-expose the colibri websocket and load balance between JVBs
    ingress:
      enabled: true
      className: ""
      annotations: {}
        # kubernetes.io/ingress.class: nginx
        # kubernetes.io/tls-acme: "true"
      hosts:
        - ws.socialhour.vewmet.com
      tls:
       - secretName: socialhour-tls
         hosts:
           - ws.socialhour.vewmet.com

  resources: {}
    # requests:
    #   cpu: "1500m"
    #   memory: "1000Mi"
    # limits:
    #   cpu: "8000m"
    #   memory: "1000Mi"

  # JVB autoscaling config. Check out README.md for more detail on how each JVB's NodePort is calculated
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 10
    targetCPUUtilizationPercentage: 90
    # targetMemoryUtilizationPercentage: 80

  nodeSelector: {}

  tolerations: []

  affinity: {}

  # Time to wait for JVB to gracefully stop. It depends on your room duration.
  terminationGracePeriodSeconds: 3600

  # Allowed environment variables (including normal and secret envs) to pass to the jvb pod
  # Usually there is no need to change this
  allowedEnv:
    - ENABLE_COLIBRI_WEBSOCKET
    - ENABLE_OCTO
    - JVB_AUTH_USER
    - JVB_AUTH_PASSWORD
    - JVB_BREWERY_MUC
    - JVB_STUN_SERVERS
    - JVB_ENABLE_APIS
    - JVB_WS_DOMAIN
    - PUBLIC_URL
    - JVB_OCTO_BIND_ADDRESS
    - JVB_OCTO_PUBLIC_ADDRESS
    - JVB_OCTO_BIND_PORT
    - JVB_OCTO_REGION
    - TZ

  # Enable prometheus exporter sidecar for JVBs
  monitoring:
    enabled: true
    image:
      repository: systemli/prometheus-jitsi-meet-exporter
      pullPolicy: IfNotPresent
      tag: "1.1.6"
    resources: {}
      # requests:
      #   cpu: "100m"
      #   memory: "100Mi"
      # limits:
      #   cpu: "100m"
      #   memory: "100Mi"
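
For reference, this is roughly how I apply the values file (just a sketch; the chart path, release name, and namespace are assumptions on my side):

helm upgrade --install socialhour . \
  --namespace jitsi --create-namespace \
  -f values-custom.yaml

# verify what actually got created
kubectl get pods,svc,ingress -n jitsi
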
bharath-naik commented 2 years ago

Also, I am unable to get the web UI live on my subdomain.

minhpq331 commented 2 years ago

Hello @bharath-naik, could you give me more detail about your cluster's current state? List pods, list services, list ingresses, ... Any logs or descriptions of failing resources will help.
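
For example, something like this (the namespace here is a guess on my side):

kubectl get pods,svc,ingress -n jitsi -o wide
kubectl describe pod <failing-pod> -n jitsi
kubectl logs <failing-pod> -n jitsi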

bharath-naik commented 2 years ago

Thanks @minhpq331 ... I am collecting the logs. Meanwhile, I am also screensharing my live cluster; if it's convenient, you can join https://meet.jit.si/jitsi_scalable_helm

bharath-naik commented 2 years ago

List of pods, services, and ingresses that are running:

[screenshot]

minhpq331 commented 2 years ago

@bharath-naik As far as I can see, your pods and services are working as expected but your ingresses are not. You need to add some specific annotations to get the built-in GCE ingresses working. Here are some configurations to make this chart work with GKE and GCE ingress:

Custom-values.yaml (I only list the important parts):

jvb:
  websocket:
    service:
      annotations: 
        beta.cloud.google.com/backend-config: '{"ports": {"9090":"jvb-websocket-config"}}'

haproxy:
  service:
    annotations: 
      beta.cloud.google.com/backend-config: '{"ports": {"80":"haproxy-config"}}'
  ingress:
    hosts:
      - host: socialhour.vewmet.com
        paths:
          - path: /*  # THIS IS IMPORTANT, note this '*'
            pathType: ImplementationSpecific

GCE backend config (kubectl apply this file)

apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: haproxy-config
spec:
  timeoutSec: 2100 # connection timeout for haproxy; set your own value
  connectionDraining:
    drainingTimeoutSec: 2100
  healthCheck:
    checkIntervalSec: 15
    port: 7880
    type: HTTP
    requestPath: /
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: jvb-websocket-config
spec:
  timeoutSec: 2100 # 35 minutes; connection timeout for the JVB websocket, set your own value
  connectionDraining:
    drainingTimeoutSec: 2100
  healthCheck:
    checkIntervalSec: 15
    port: 8080
    type: HTTP
    requestPath: /about/health

Please point your websocket domain (ws.socialhour.vewmet.com) to your ingress public IP. Please check the Google Cloud console to make sure this helm chart created 2 ingresses with 2 public IPs.
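
To apply and verify (a sketch; the file name backend-config.yaml is an assumption):

kubectl apply -f backend-config.yaml
kubectl get backendconfig
kubectl get ingress  # wait until both ingresses show a public IP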

bharath-naik commented 2 years ago

[screenshot]

minhpq331 commented 2 years ago

@bharath-naik yeah, it created the 2 ingresses successfully. Make sure you point the DNS (ws.socialhour.vewmet.com) to 34.149.245.82 and change the haproxy ingress path from / to /*; your web doesn't load because it can't download its assets (the bare / path does not match them, /* does):

nslookup ws.socialhour.vewmet.com                                                      
Server:     127.0.0.53
Address:    127.0.0.53#53

** server can't find ws.socialhour.vewmet.com: NXDOMAIN

bharath-naik commented 2 years ago

Now, should I run the above YAML, or should I simply add /* to the ingress at line 410?

nslookup ws.socialhour.vewmet.com
Server:  UnKnown
Address:  2405:200:800::1

Non-authoritative answer:
Name:    ws.socialhour.vewmet.com
Address:  34.149.245.82

minhpq331 commented 2 years ago

@bharath-naik Simply add /*. The backend configs I've shown above are just timeout configs for the GCE LB; you can configure them later.

bharath-naik commented 2 years ago

# HAproxy specific configurations, used to load balance between shards
haproxy:
  enabled: true
  image: 
    repository: haproxy
    pullPolicy: IfNotPresent
    tag: ""
  imagePullSecrets: []
  podAnnotations: {}

  service:
    type: ClusterIP
    annotations: {}
    port: 80
    # nodePort: 32080

  # Expose this as your web domain
  ingress:
    enabled: true
    className: ""
    annotations: {}
      # kubernetes.io/ingress.class: nginx
      # kubernetes.io/tls-acme: "true"
    hosts:
      - host: socialhour.vewmet.com
        paths:
          - path: /*  ###changed this line only 
            pathType: ImplementationSpecific
    tls:
     - secretName: socialhour-tls 
       hosts:
         - socialhour.vewmet.com

I can't access my https://socialhour.vewmet.com yet.

bharath-naik commented 2 years ago

If it is a quick fix, can you come over here for like 10 minutes? https://meet.jit.si/jitsi_scalable_helm. It will help us.

minhpq331 commented 2 years ago

@bharath-naik It works now. GCE ingresses and LBs are very slow to update =)))
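
If you want to watch them converge (just a sketch):

kubectl get ingress -w  # the address/status can take several minutes to update on GCE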

bharath-naik commented 2 years ago

hosts:
  - host: socialhour.vewmet.com
    paths:
      - path: /*  ###changed this line only
        pathType: ImplementationSpecific

I just changed the path here in values-custom.yaml against the HAproxy configuration.

bharath-naik commented 2 years ago

> @bharath-naik It works now. GCE ingresses and LBs are very slow to update =)))

Yes, it was live, but I was unable to join the room 😔

bharath-naik commented 2 years ago

[screenshot]

bharath-naik commented 2 years ago

Wait... it worked! Thanks @minhpq331, you are awesome.

minhpq331 commented 2 years ago

GCE LBs will disconnect your websocket connection after 30s. That's why I gave you the timeout configs above =))) Please read more about GCE LB backend configurations.
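
You can also double-check that the timeout actually landed on the LB (a sketch; GKE auto-generates the backend service names, so list them first):

gcloud compute backend-services list
gcloud compute backend-services describe <generated-name> --global \
  --format="value(timeoutSec)"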

bharath-naik commented 2 years ago

What should I do if I want to run this setup on multiple clusters?

bharath-naik commented 2 years ago

Logger.js:154

       2021-12-08T05:17:03.622Z [JitsiMeetJS.js] <Object.getGlobalOnErrorHandler>:  UnhandledError: Not connected Script: null Line: null Column: null StackTrace:  Error: Not connected
    at u.send (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:168272)
    at b.doLeave (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:10:142415)
    at https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:10:160099
    at new Promise (<anonymous>)
    at b.leave (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:10:159833)
    at ue.leave (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:10:39901)
    at a.re (https://socialhour.vewmet.com/libs/app.bundle.min.js?v=5211:193:3690)
    at a.emit (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:119783)
    at D.connectionHandler (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:130683)
    at u._stropheConnectionCb (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:166120)
    at w.Connection._changeConnectStatus (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:34112)
    at w.Connection._doDisconnect (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:34690)
    at r._interceptDoDisconnect (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:25:7708)
    at r._handleResumeFailed (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:25:10120)
    at w.Handler.run (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:27486)
    at https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:35924
    at Object.forEachChild (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:19148)
    at w.Connection._dataRecv (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:35773)
    at N.Websocket._onMessage (https://socialhour.vewmet.com/libs/lib-jitsi-meet.min.js?v=5211:1:65011)
o @ Logger.js:154
getGlobalOnErrorHandler @ JitsiMeetJS.js:545
window.onunhandledrejection @ middleware.js:127
XmppConnection.js:476 

       Uncaught (in promise) Error: Not connected
    at u.send (XmppConnection.js:476)
    at b.doLeave (ChatRoom.js:287)
    at ChatRoom.js:1839
    at new Promise (<anonymous>)
    at b.leave (ChatRoom.js:1817)
    at ue.leave (JitsiConference.js:647)
    at a.re (conference.js:441)
    at a.emit (events.js:157)
    at D.connectionHandler (xmpp.js:383)
    at u._stropheConnectionCb (XmppConnection.js:295)
    at w.Connection._changeConnectStatus (strophe.umd.js:3011)
    at w.Connection._doDisconnect (strophe.umd.js:3052)
    at r._interceptDoDisconnect (strophe.stream-management.js:218)
    at r._handleResumeFailed (strophe.stream-management.js:325)
    at w.Handler.run (strophe.umd.js:1875)
    at strophe.umd.js:3157
    at Object.forEachChild (strophe.umd.js:830)
    at w.Connection._dataRecv (strophe.umd.js:3146)
    at N.Websocket._onMessage (strophe.umd.js:5836)

I have deployed this GCE backend config (below), but the meetings get dropped after a few minutes of connection.

apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: haproxy-config
spec:
  timeoutSec: 2100 # connection timeout for haproxy; set your own value
  connectionDraining:
    drainingTimeoutSec: 2100
  healthCheck:
    checkIntervalSec: 15
    port: 7880
    type: HTTP
    requestPath: /
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: jvb-websocket-config
spec:
  timeoutSec: 2100 # 35 minutes; connection timeout for the JVB websocket, set your own value
  connectionDraining:
    drainingTimeoutSec: 2100
  healthCheck:
    checkIntervalSec: 15
    port: 8080
    type: HTTP
    requestPath: /about/health

bharath-naik commented 2 years ago

Please find the attached screenshot, in case it shows a problem:

[screenshot]

VewMet commented 2 years ago

This issue can be closed. The websocket disconnecting after some time may be opened as a separate issue later. @bharath-naik