Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.

/lifecycle stale
/remove-lifecycle stale
Apologies for letting this sit so long. This is one of those "It works for me" kind of issues, so I'll need more information to help debug what is going on.
First, are you still getting the error?
Second, if you are, could you post the output from a `kubectl describe` of the pod in crashloop?
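For reference, a minimal sketch of how to collect that output, assuming the default namespace and that the launcher pod name begins with `virt-launcher-vmtest` (the exact suffix will differ in your cluster):

```sh
# List pods to spot the one stuck in CrashLoopBackOff
kubectl get pods

# Describe the crashlooping launcher pod; replace the name with the one
# reported by the previous command (virt-launcher-vmtest-xxxxx is a placeholder)
kubectl describe pod virt-launcher-vmtest-xxxxx

# Recent events can also surface image pull or scheduling failures
kubectl get events --sort-by=.metadata.creationTimestamp
```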
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.

/lifecycle stale
I haven't had a chance to retest.
/close
@Snozzberries: Closing this issue.
Is this a BUG REPORT or FEATURE REQUEST?:

What happened: When using `kubectl start vmtest`, the vm enters a `CrashLoopBackOff` state when using the latest vm.yaml manifest from the labs: https://github.com/kubevirt/kubevirt.github.io/blob/main/labs/manifests/vm.yaml

What you expected to happen: The vm to enter the `Running` state.

Anything else we need to know?: Utilizing the prior vm.yaml manifest, where the container image references kubevirt, the vm enters the `Running` state: https://github.com/kubevirt/kubevirt.github.io/blob/ac42c70e07899d1d3feeace459f8216e8200e389/labs/manifests/vm.yaml
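For anyone trying to reproduce, a rough sketch of the steps involved, assuming KubeVirt is already deployed in the cluster, the linked manifest has been saved locally as vm.yaml, and the VM name matches `metadata.name` in that manifest (`vmtest` below follows the issue text; the original report uses a `kubectl start` form, `virtctl start` is the equivalent CLI call):

```sh
# Create the VirtualMachine from the lab manifest that reportedly crashloops
kubectl apply -f vm.yaml

# Start the VM
virtctl start vmtest

# Watch the launcher pod; the CrashLoopBackOff state should show up here
kubectl get pods -w

# To compare, repeat the same steps with the prior vm.yaml linked above,
# which reportedly brings the vm to the Running state
```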