kubevirt / kubevirt.github.io

KubeVirt website repo, documentation at https://kubevirt.io/user-guide/
https://kubevirt.io
MIT License

Issue with quay.io image #846

Closed: Snozzberries closed this issue 1 year ago

Snozzberries commented 2 years ago

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened: When starting the VM with `kubectl start vmtest`, it enters a CrashLoopBackOff state when using the latest vm.yaml manifest from the labs: https://github.com/kubevirt/kubevirt.github.io/blob/main/labs/manifests/vm.yaml
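For context, the lab's usual flow to reproduce this would be roughly the following (a sketch assuming the lab's `testvm` manifest; the VM name and manifest URL come from the lab linked above):

```shell
# Apply the latest lab manifest, start the VM, and watch its state
kubectl apply -f https://kubevirt.io/labs/manifests/vm.yaml
virtctl start testvm
kubectl get vmis    # the VMI never reaches Running
kubectl get pods    # the virt-launcher pod shows CrashLoopBackOff
```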

What you expected to happen: The VM to enter the Running state.

Anything else we need to know?: Using the prior vm.yaml manifest, in which the container image references kubevirt directly rather than quay.io, the VM enters the Running state: https://github.com/kubevirt/kubevirt.github.io/blob/ac42c70e07899d1d3feeace459f8216e8200e389/labs/manifests/vm.yaml
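For reference, the part of the manifest that differs between the two linked revisions is the containerDisk volume. A trimmed excerpt (image paths assumed from the linked manifests; see those files for the exact tags):

```yaml
# Excerpt (trimmed) from labs/manifests/vm.yaml
volumes:
  - name: containerdisk
    containerDisk:
      # The current revision pulls from quay.io; the prior revision
      # referenced the kubevirt image without the quay.io prefix.
      image: quay.io/kubevirt/cirros-container-disk-demo
```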

URL where the problem can be found ...

If the issue is with a lab, please provide information about your environment, platform, versions, ...

> uname -s -r -v -p -o
Linux 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 GNU/Linux

> docker -v
Docker version 20.10.16, build aa7e414

> minikube version
minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

> virtctl version
Client Version: version.Info{GitVersion:"v0.53.1", GitCommit:"d9488bdf5a13dd20bff9ca32bb182112ca16c0ee", GitTreeState:"clean", BuildDate:"2022-05-17T15:01:25Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{GitVersion:"v0.53.1", GitCommit:"d9488bdf5a13dd20bff9ca32bb182112ca16c0ee", GitTreeState:"clean", BuildDate:"2022-05-17T15:01:25Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
kubevirt-bot commented 1 year ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

cwilkers commented 1 year ago

/remove-lifecycle stale

Apologies for letting this sit so long. This is one of those "It works for me" kind of issues, so I'll need more information to help debug what is going on.

First, are you still getting the error?

Second, if you are, could you post the output of `kubectl describe` for the pod in the crash loop?
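Something along these lines would help (the virt-launcher pod name here is illustrative; substitute the one reported by `kubectl get pods`):

```shell
# Find the launcher pod backing the VM, then inspect it
kubectl get pods
kubectl describe pod virt-launcher-testvm-abcde   # hypothetical pod name
# The Events section at the end usually explains the restarts
```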

kubevirt-bot commented 1 year ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Snozzberries commented 1 year ago

I haven't had a chance to retest.

/close

kubevirt-bot commented 1 year ago

@Snozzberries: Closing this issue.

In response to [this](https://github.com/kubevirt/kubevirt.github.io/issues/846#issuecomment-1352299648):

> I haven't had a chance to retest.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.