kubevirt / kubevirt.github.io

KubeVirt website repo, documentation at https://kubevirt.io/user-guide/
https://kubevirt.io
MIT License

CI issue .... kubevirt-io-presubmit-markdown-linter img is missing podman #799

Closed mazzystr closed 2 years ago

mazzystr commented 2 years ago

/kind bug

What happened: CI job kubevirt-io-presubmit-markdown-linter is failing

Log from kubevirt-io-presubmit-markdown-linter

```
Makefile: Linting Markdown files using quay.io/tauerbec/markdownlint-cli:latest
podman run -it --rm -v /home/prow/go/src/github.com/kubevirt/kubevirt.github.io:/src:ro --workdir /src quay.io/tauerbec/markdownlint-cli:latest '**/*.md'
/bin/sh: podman: command not found
make: *** [Makefile:226: check_lint] Error 127
```
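For anyone reproducing this locally, a rough equivalent of the failing step (assuming docker is available and the repo is checked out in the current directory; the Prow workspace path above won't exist locally):

```sh
# Rough local reproduction of the check_lint step, substituting docker for podman.
# Image and glob are taken from the CI log; the mount path is the local checkout.
docker run -it --rm \
  -v "$(pwd)":/src:ro --workdir /src \
  quay.io/tauerbec/markdownlint-cli:latest '**/*.md'
```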

mazzystr commented 2 years ago

Job has been removed from CI so new pull reqs can get unblocked

mazzystr commented 2 years ago

See https://github.com/kubevirt/project-infra/pull/1642

mazzystr commented 2 years ago

Let's try to use the image that is built from make build_img. That image is packed with everything needed to run all the make targets. It will need to be published under the kubevirt user on quay.io. Please work with @dhiller
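A rough sketch of what publishing that image could look like (the image name, tag, and push commands here are assumptions for illustration, not the project's actual publishing flow):

```sh
# Hypothetical flow: build the local tooling image, then tag and push it
# to the kubevirt organization on quay.io. Names and tags are placeholders.
make build_img
podman tag kubevirt-io-builder:latest quay.io/kubevirt/kubevirt-io-builder:latest
podman push quay.io/kubevirt/kubevirt-io-builder:latest
```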

tylerauerbeck commented 2 years ago

@mazzystr Not sure that this has anything to do with either image. When I test this locally on a fresh VM that I've spun up (to avoid any "works on my machine" problems), I see the same failure for both check_lint and even build_img.

I believe this is coming from the defaulting of CONTAINER_ENGINE and BUILD_ENGINE. Basically if it's not set, it defaults to podman. So to fix this we can either export CONTAINER_ENGINE=docker or switch the default behavior.
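For example, assuming the Makefile uses `?=` defaults such as `CONTAINER_ENGINE ?= podman`, exporting the variables before invoking make should get things working on a docker-only host:

```sh
# Override the assumed Makefile defaults (CONTAINER_ENGINE / BUILD_ENGINE)
# so make uses docker instead of the missing podman binary.
export CONTAINER_ENGINE=docker
export BUILD_ENGINE=docker
make check_lint
```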

I +1 the publishing of that image though. Would be good to just be able to pull it rather than having to build the image each time we want to do one of the other actions.

kubevirt-bot commented 2 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kubevirt-bot commented 2 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kubevirt-bot commented 2 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kubevirt-bot commented 2 years ago

@kubevirt-bot: Closing this issue.

In response to [this](https://github.com/kubevirt/kubevirt.github.io/issues/799#issuecomment-1069803926):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.