alramostpt closed this issue 1 week ago
@alramostpt, thank you for creating this issue. We will troubleshoot it as soon as we can.
Triage this issue by using labels.

- If information is missing, add a helpful comment and then the `I-issue-template` label.
- If the issue is a question, add the `I-question` label.
- If the issue is valid but there is no time to troubleshoot it, consider adding the `help wanted` label.
- If the issue requires changes or fixes from an external project (e.g., ChromeDriver, GeckoDriver, MSEdgeDriver, W3C), add the applicable `G-*` label, and it will provide the correct link and auto-close the issue.

After troubleshooting the issue, please add the `R-awaiting answer` label.

Thank you!
Are you using the Helm chart to deploy, or your own YAML? Can you share the YAML so we can debug further?
I'm not using Helm, just a straightforward K8s deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sel-hub
  name: sel-hub
  namespace: selenium
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sel-hub
  template:
    metadata:
      labels:
        app: sel-hub
    spec:
      containers:
      - env:
        - name: SE_OPTS
          value: --username {REDACTED} --password {REDACTED}
        image: selenium/hub:latest
        imagePullPolicy: Always
        name: sel-hub
        ports:
        - containerPort: 4442
          name: p4442
          protocol: TCP
        - containerPort: 4443
          name: p4443
          protocol: TCP
        - containerPort: 4444
          name: p4444
          protocol: TCP
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: sel-hub-svc
  namespace: selenium
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 4444
  - name: http2
    port: 4444
    protocol: TCP
    targetPort: 4444
  - name: publish
    port: 4442
    protocol: TCP
    targetPort: 4442
  - name: subscribe
    port: 4443
    protocol: TCP
    targetPort: 4443
  selector:
    app: sel-hub
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: {REDACTED}
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
  name: sel-hub-ingress
  namespace: selenium
spec:
  ingressClassName: nginx-{REDACTED}
  rules:
  - host: {REDACTED}
    http:
      paths:
      - backend:
          service:
            name: sel-hub-svc
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - {REDACTED}
    secretName: {REDACTED}
```
Ok, so I tried a few more things. Adding this annotation to the Ingress did the trick for me: `nginx.org/websocket-services: {service-name}`.

Leaving this here in case it helps others.

This issue can be considered resolved. Thanks for your time.
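For reference, the annotation applied to the Ingress above would look like the fragment below. This is only a sketch: the service name matches my manifests earlier in this thread, and note that `nginx.org/websocket-services` is an annotation understood by the NGINX Inc. ingress controller (the `nginx.org/*` prefix), which is a different project from the community `ingress-nginx` controller.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sel-hub-ingress
  namespace: selenium
  annotations:
    # Proxy the named service(s) with WebSocket support (NGINX Inc. controller)
    nginx.org/websocket-services: sel-hub-svc
spec:
  # rules/tls unchanged from the manifest above
```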
What is the value set for the env var `SE_NODE_GRID_URL` in the Node config? It is used to construct the URL for WebSocket endpoints.
By the way, which kind of ingress controller is used in your case? NGINX controller or something else?
> What is the value set for the env var `SE_NODE_GRID_URL` in the Node config? It is used to construct the URL for WebSocket endpoints.
Whatever the default value is, since I'm not setting it on my side.
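For context, setting it explicitly on a Node would be a small env addition like the sketch below. The URL value is a placeholder for whatever externally reachable Grid address sits behind the ingress; it is not taken from my actual setup.

```yaml
# Sketch: Node container env (the URL is a hypothetical placeholder)
env:
- name: SE_NODE_GRID_URL
  value: https://grid.example.com
```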
> By the way, which kind of ingress controller is used in your case? NGINX controller or something else?
Yes, I'm using NGINX as the ingress controller. There is a hint of that in my Ingress definition; sorry that I did not state it directly.
Resolved by following this comment: https://github.com/SeleniumHQ/docker-selenium/issues/2288#issuecomment-2189377116
### What happened?

When I try to connect to noVNC in a Hub/Node setup in Kubernetes, I get a WSS error where the Hub complains about not receiving any bytes in the headers to parse. When I do a port-forward to the Hub, I'm able to use noVNC.

### Command used to start Selenium Grid with Docker (or Kubernetes)

### Relevant log output

### Operating System

AWS Linux

### Docker Selenium version (image tag)

n/a

### Selenium Grid chart version (chart version)

Selenium Grid 4.22.0 (revision c5f3146703)