Open kherath17 opened 5 months ago
@kherath17, thank you for creating this issue. We will troubleshoot it as soon as we can.
Triage this issue by using labels.
- If information is missing, add a helpful comment and then the I-issue-template label.
- If the issue is a question, add the I-question label.
- If the issue is valid but there is no time to troubleshoot it, consider adding the help wanted label.
- If the issue requires changes or fixes from an external project (e.g., ChromeDriver, GeckoDriver, MSEdgeDriver, W3C), add the applicable G-* label, and it will provide the correct link and auto-close the issue.
- After troubleshooting the issue, please add the R-awaiting answer label.
Thank you!
Hi @kherath17, in this case, can you add the env var SE_LOG_LEVEL=FINE
to both Hub and Node? Then execute the cURL command to drain the node and capture the logs, so we can see which events trigger the respawn.
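In case it helps, a minimal sketch of those steps on Kubernetes; the Deployment names selenium-hub and selenium-node-chrome are assumptions (adjust to your cluster), and the drain endpoint is the one documented for Grid 4:

# Set FINE logging on both Hub and Node (Deployment names are assumptions).
kubectl set env deployment/selenium-hub SE_LOG_LEVEL=FINE
kubectl set env deployment/selenium-node-chrome SE_LOG_LEVEL=FINE

# Drain the node, then follow the Hub logs while it respawns.
curl --request POST 'http://<Grid Endpoint>/se/grid/distributor/node/<Node-ID>/drain' \
  --header 'X-REGISTRATION-SECRET;'
kubectl logs deployment/selenium-hub -f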
@VietND96 Log files attached (FYI, it's the node deletion I'm triggering, not a node drain).
By the way, do you think triggering the node deletion through a curl command would not cause the K8s pod to be deleted? The pod exists at the Kubernetes level, while the node is something we engage with at the Selenium Grid level.
Expectation: What I am trying to achieve here is a custom script that hits the /status endpoint and checks whether the session is null for each element in the nodes array. If it is null, I conclude that the node is not currently active, and trigger node deletion straight away without any drain commands, to speed up the scale-down process.
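Roughly, the script I have in mind looks like this (a sketch, assuming the Grid 4 /status response shape where each entry in value.nodes has a slots array whose session field is null when the slot is idle):

#!/usr/bin/env bash
# Sketch, not a definitive implementation: delete every Grid node whose
# slots are all idle (session == null), skipping the drain step.
# <Grid Endpoint> is a placeholder.
GRID='http://<Grid Endpoint>'

# Collect the IDs of nodes with no active session in any slot.
idle_nodes=$(curl -s "$GRID/status" \
  | jq -r '.value.nodes[] | select(all(.slots[]; .session == null)) | .id')

# Delete each idle node straight away.
for node_id in $idle_nodes; do
  curl --request DELETE "$GRID/se/grid/distributor/node/$node_id" \
    --header 'X-REGISTRATION-SECRET;'
done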
> By the way, do you think triggering the node deletion through a curl command would not cause the K8s pod to be deleted? The pod exists at the Kubernetes level, while the node is something we engage with at the Selenium Grid level.
If the Node is a Deployment type, I guess it could not be deleted by the Selenium API endpoint. Because a K8s Deployment runs with restartPolicy: Always, it guards the number of replicas: whenever the process in the container stops (pod 0/1), K8s restarts the container and waits for it to come up again. And whenever a Node comes up, it again sends the registration event to the Hub.
Even if you try to set restartPolicy: Never, K8s will raise an error that Deployments do not support that restart policy.
I think if you want to send a delete signal to the Hub and have the Node terminated, the Node should be deployed as a Job.
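For illustration, a rough and untested sketch of what running a Chrome node as a Job could look like (names, image tag, and event-bus wiring are assumptions based on the docker-selenium images):

# Untested sketch: run a Chrome node as a K8s Job so the pod is not
# recreated once the node process exits. Names and image tag are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: selenium-node-chrome
spec:
  template:
    spec:
      restartPolicy: Never   # allowed for Jobs, unlike Deployments
      containers:
        - name: selenium-node-chrome
          image: selenium/node-chrome:4.17.0
          env:
            - name: SE_EVENT_BUS_HOST
              value: selenium-hub
            - name: SE_EVENT_BUS_PUBLISH_PORT
              value: "4442"
            - name: SE_EVENT_BUS_SUBSCRIBE_PORT
              value: "4443"
EOF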
@VietND96 Actually, the nodes are plain pods; no replica counts are specified or maintained. Deleting the pod via a kubectl command works as expected. The only issue is deleting the node via the specified curl command, which deletes the node but then creates a new node within the same browser pod, resulting in the log lines below:
05:58:39.364 INFO [NodeServer.lambda$createHandlers$2] - Node has been added
06:25:39.367 INFO [NodeServer.lambda$createHandlers$2] - Node has been added
06:26:09.364 INFO [NodeServer.lambda$createHandlers$2] - Node has been added
06:27:39.363 INFO [NodeServer.lambda$createHandlers$2] - Node has been added
What happened?
Context:
I'm trying to delete the node attached to the Grid via the curl command below:
curl --request DELETE 'http://<Grid Endpoint>/se/grid/distributor/node/<Node-ID>' --header 'X-REGISTRATION-SECRET;'
The above command executes successfully, and the specific node disappears from the Grid UI as well as from the response below for a few seconds, but it gets created again after a while.
curl --request GET 'https://<Grid Endpoint>/sandbox_qlabv2/status'
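For what it's worth, the reappearance can be watched from the node list; the .value.nodes[].id path is my assumption about the Grid 4 /status response shape:

# Sketch: poll the Grid every 5s and print the current node IDs, to watch
# the deleted node disappear and then re-register.
watch -n 5 "curl -s 'https://<Grid Endpoint>/sandbox_qlabv2/status' | jq -r '.value.nodes[].id'"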
Every time a node deletion is triggered, the pod log shows "Node has been added" lines like the ones quoted above.
Alternatively
Is there any way to identify which Selenium Grid node ID the current browser pod is mapped to?
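One possible approach, sketched below under the assumption that each node's uri field in the /status response contains the pod IP: look up the pod's IP with kubectl and match it against the node URIs.

# Sketch: map a browser pod to its Grid node ID by matching the pod IP
# against each node's uri in /status. <browser-pod-name> and
# <Grid Endpoint> are placeholders.
pod_ip=$(kubectl get pod '<browser-pod-name>' -o jsonpath='{.status.podIP}')
curl -s 'http://<Grid Endpoint>/status' \
  | jq -r --arg ip "$pod_ip" '.value.nodes[] | select(.uri | contains($ip)) | .id'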
Command used to start Selenium Grid with Docker (or Kubernetes)
Relevant log output
Operating System
Kubernetes - EKS
Docker Selenium version (image tag)
4.17.0
Selenium Grid chart version (chart version)
NA