ebran opened 2 years ago
Hi @ebran, if you are debugging without isolation, the restoration job should undo all changes to your cluster within a few minutes of the debugging session ending. Even if you cancel the task manually, the restoration job will eventually (after 15 minutes or so) realize it needs to clean up. If you are debugging with isolation, the routing manager is left running in the cluster, but it can be deleted with `kubectl delete deploy -n routing-manager`. Are you seeing some other behavior (e.g., a failure of the restoration job)?
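For reference, something along these lines should do it; the namespace comes from the command above, but the rest (including whether you need `--all` or a specific deployment name) is a guess and may vary by Bridge version:

```sh
# List what Bridge left behind in the routing-manager namespace:
kubectl get deploy -n routing-manager

# Delete every deployment in that namespace (assumes it holds only
# Bridge's routing manager):
kubectl delete deploy --all -n routing-manager
```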
I'll also add that, I believe, if you have a terminal running the Bridge task, typing `exit` should be sufficient to terminate the Bridge task cleanly. However, this may depend on your setup.
Hey guys, just ran into this one: for some reason my cleanup job tried to run while I was still debugging, and the job failed and couldn't be restarted. I tried manually deleting it and replacing it with itself to force it to re-run (see https://serverfault.com/a/888819), but the restoration job still seems unable to run correctly.
Short of redeploying the cluster services myself, is there a way to make the job work after I've already closed the session?
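In case it helps anyone else, this is roughly what I tried, following the linked answer; the job name here is a placeholder, so substitute whatever `kubectl get jobs` shows for the restoration job:

```sh
# Strip the server-generated selector, controller-uid label, and status so
# "kubectl replace --force" can delete and re-create the Job, which makes
# it run again from scratch:
kubectl get job restorationjob -o json \
  | jq 'del(.spec.selector, .spec.template.metadata.labels."controller-uid", .status)' \
  | kubectl replace --force -f -
```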
So I've found that, as a hack, just updating the image back to the original using `kubectl edit <deployment>` seems to restore the functionality. I can see there are some additional entries for the Kube Bridge that I have ignored.
If there were a full list of changes to make/undo, it would help us manually restore the services and make sure nothing else needs changing.
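For anyone reproducing the image-revert hack without an interactive editor, something like this should be equivalent (deployment, container, and image names below are made up; substitute your own):

```sh
# See which image the deployment is currently running:
kubectl get deploy mysvc -o jsonpath='{.spec.template.spec.containers[*].image}'

# Point the container back at the original image (same effect as kubectl edit,
# but scriptable):
kubectl set image deployment/mysvc mysvc=myregistry.example.com/mysvc:1.2.3
```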
I would also love some docs about this. I just ran into a problem where stopping the Bridge to Kubernetes connection did not clear the session. The pod of the bridged service was still using the Bridge to Kubernetes Docker image, so the restoration clearly did not work, even though the restoration job reported success.
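A quick way to spot this state after a session, assuming the substituted image or its registry has "bridge" somewhere in its name (which may not hold for every Bridge version):

```sh
# Print pod name and image(s) for every pod, then filter for bridge images:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' | grep -i bridge
```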
Following this as well... I ended up having to do a `helm upgrade` to get everything back to normal.
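Concretely, if your services are Helm-managed, re-applying the chart overwrites whatever Bridge patched into the deployments (release, chart, and namespace names here are placeholders):

```sh
helm upgrade myrelease ./mychart -n mynamespace
```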
I have been using Bridge to Kubernetes (BTK) in isolation mode with "Configure Bridge to Kubernetes without a launch configuration" in VS Code. I can then connect to my cluster with BTK as a task, and once connected I can run my code manually in isolation. This works nicely.
However, when I want to quit, I am forced to terminate the BTK task manually, which leaves all the pods, services, jobs, and whatever other manipulations BTK relies on, behind on the cluster.
Is there a command, workaround, or recipe to undo/reset the changes BTK imposes on the cluster after the fact? I haven't found much in the docs.
Best,