Environment

Local development (not staging/prod, because we have RestartPolicy = Always there).
Steps to reproduce

Using vm-deploy.yaml, set spec.restartPolicy: Never and then run kill -6 1 inside the runner pod (kill -9 doesn't work):
```diff
diff --git a/vm-deploy.yaml b/vm-deploy.yaml
index 09588f6..7d84d67 100644
--- a/vm-deploy.yaml
+++ b/vm-deploy.yaml
@@ -13,6 +13,7 @@ metadata:
 spec:
   schedulerName: autoscale-scheduler
   enableSSH: true
+  restartPolicy: Never
   guest:
     cpus: { min: 0.25, use: 0.25, max: 1.25 }
     memorySlotSize: 1Gi
```
```
kubectl exec postgres16-disk-test-<SUFFIX> -- kill -6 1
```
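Putting the steps together, the full sequence looks roughly like this (a sketch; the grep filter and the note about kill -9 are my additions, and the pod suffix is whatever gets generated):

```
# Apply the VM spec patched with restartPolicy: Never
kubectl apply -f vm-deploy.yaml

# Find the generated runner pod name
kubectl get pods | grep postgres16-disk-test

# Send SIGABRT (signal 6) to PID 1 inside the runner pod.
# kill -9 doesn't work, likely because the kernel ignores SIGKILL
# sent to a PID namespace's init process from within that namespace.
kubectl exec postgres16-disk-test-<SUFFIX> -- kill -6 1
```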
Expected result

The pod name should either:

- be "", or
- stay the same until the VM is deleted

There are good arguments for both cases here, I think.
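To check which of the two behaviors you actually get, you can read the VM's reported pod name directly. A minimal check, assuming the POD column is backed by a status.podName field on the NeonVM resource (I haven't verified the exact field path):

```
kubectl get neonvm postgres16-disk-test -o jsonpath='{.status.podName}'
```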
Actual result

If you kubectl get -w neonvm while using the reproduction steps above, you'll see a rapid stream of changes that looks something like:

```
$ kubectl get -w neonvm
NAME                   CPUS   MEMORY   POD                          EXTRAIP   STATUS    RESTARTS   AGE
postgres16-disk-test   250m   1Gi      postgres16-disk-test-j5sb8             Running              14s
postgres16-disk-test   250m   1Gi      postgres16-disk-test-j5sb8             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-mpl6w             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-pdh8f             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-zzzm6             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-9qfcm             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-nq4vf             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-dwml5             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-r78fc             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-965jp             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-fqmxd             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-zgnrv             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-bcpw7             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-ck9kz             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-m9n97             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-h46sz             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-x49h7             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-dqhss             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-mc6xm             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-zh682             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-r87cv             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-ldmhd             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-mqfrb             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-gs8pc             Failed               38s
postgres16-disk-test                                                          Failed               38s
postgres16-disk-test                   postgres16-disk-test-8twbx             Failed               38s
```
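The churn is also visible on the pod side: each Failed line above corresponds to a freshly created runner pod. A quick way to watch it (plain kubectl; no NeonVM-specific labels assumed):

```
kubectl get pods -w | grep postgres16-disk-test
```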
Other logs, links