dougsland closed this pull request 2 years ago
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: dougsland
To complete the pull request process, please assign aravindhp after the PR has been reviewed.
You can assign the PR to them by writing /assign @aravindhp
in a comment when ready.
The full list of commands accepted by this bot can be found here.
/cc @aravind
@dougsland: GitHub didn't allow me to request PR reviews from the following users: aravindh.
Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs.
/cc @aravindhp
Using the current CI image version ci/v1.22.6-rc.0.55+487eef699dc8ad, the sig-windows-dev-tools deploy worked.
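For context, one way to resolve such a CI marker to a concrete build is sketched below. This is only an assumption about where the markers live; it presumes the Kubernetes CI version markers are still published under dl.k8s.io:
# Resolve what the "latest" CI marker currently points to (assumed endpoint).
curl -sSL https://dl.k8s.io/ci/latest.txt
# Or the marker for a specific release branch, e.g. 1.22 (also an assumption).
curl -sSL https://dl.k8s.io/ci/latest-1.22.txt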
/cc @knabben
@dougsland what error is this PR fixing? Is master broken? Can you provide more details? From my understanding this is the same as ci/latest.
Great question. From just a recent test I got version gcr.io/k8s-staging-ci-images/kube-apiserver:v1.21.8, which for some reason is not available for download and breaks the build:
diff --git a/sync/shared/variables.yaml b/sync/shared/variables.yaml
index b9110d9..d6c52c1 100644
--- a/sync/shared/variables.yaml
+++ b/sync/shared/variables.yaml
@@ -14,7 +14,7 @@ windows_node_ip: "10.20.30.11"
# from https://apt.kubernetes.io/
k8s_linux_registry: "gcr.io/k8s-staging-ci-images"
k8s_linux_kubelet_deb: "1.21.0"
-k8s_linux_apiserver: "ci/v1.22.0-alpha.3.31+a3abd06ad53b2f"
+k8s_linux_apiserver: "ci/latest"
and the build just broke:
controlplane: [preflight] Some fatal errors occurred:
controlplane: [ERROR ImagePull]: failed to pull image gcr.io/k8s-staging-ci-images/kube-apiserver:v1.21.8: output: time="2022-01-17T13:40:21Z" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"gcr.io/k8s-staging-ci-images/kube-apiserver:v1.21.8\": failed to resolve reference \"gcr.io/k8s-staging-ci-images/kube-apiserver:v1.21.8\": gcr.io/k8s-staging-ci-images/kube-apiserver:v1.21.8: not found"
controlplane: , error: exit status 1
controlplane: [ERROR ImagePull]: failed to pull image gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.21.8: output: time="2022-01-17T13:40:22Z" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.21.8\": failed to resolve reference \"gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.21.8\": gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.21.8: not found"
controlplane: , error: exit status 1
controlplane: [ERROR ImagePull]: failed to pull image gcr.io/k8s-staging-ci-images/kube-scheduler:v1.21.8: output: time="2022-01-17T13:40:24Z" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"gcr.io/k8s-staging-ci-images/kube-scheduler:v1.21.8\": failed to resolve reference \"gcr.io/k8s-staging-ci-images/kube-scheduler:v1.21.8\": gcr.io/k8s-staging-ci-images/kube-scheduler:v1.21.8: not found"
controlplane: , error: exit status 1
controlplane: [ERROR ImagePull]: failed to pull image gcr.io/k8s-staging-ci-images/kube-proxy:v1.21.8: output: time="2022-01-17T13:40:25Z" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"gcr.io/k8s-staging-ci-images/kube-proxy:v1.21.8\": failed to resolve reference \"gcr.io/k8s-staging-ci-images/kube-proxy:v1.21.8\": gcr.io/k8s-staging-ci-images/kube-proxy:v1.21.8: not found"
controlplane: , error: exit status 1
controlplane: [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
controlplane: error execution phase preflight
controlplane: k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
controlplane: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
controlplane: k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
controlplane: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
controlplane: k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
controlplane: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
controlplane: k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
controlplane: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:152
controlplane: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
controlplane: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:850
controlplane: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
controlplane: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:958
controlplane: k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
controlplane: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:895
controlplane: k8s.io/kubernetes/cmd/kubeadm/app.Run
controlplane: /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
controlplane: main.main
controlplane: _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
controlplane: runtime.main
controlplane: /usr/local/go/src/runtime/proc.go:225
controlplane: runtime.goexit
controlplane: /usr/local/go/src/runtime/asm_amd64.s:1371
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
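As a rough pre-check (not part of this PR, and assuming docker and gcloud are available locally), the failing tag can be verified against the staging registry before kubeadm tries to pull it:
# Check whether the manifest for the failing tag actually exists.
docker manifest inspect gcr.io/k8s-staging-ci-images/kube-apiserver:v1.21.8
# List the tags that are actually published for that image.
gcloud container images list-tags gcr.io/k8s-staging-ci-images/kube-apiserver --limit=10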
I am closing this one, as I think the URL links we have for latest are odd. Also, I believe someone should dig up the right link, or raise it with the k8s CI community and ask what the right link is.
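A possible sanity check for ci/latest itself (only a sketch, assuming kubeadm resolves CI version labels the same way it resolves release labels such as stable):
# List the images the label expands to, then try pulling them from the staging registry.
kubeadm config images list --kubernetes-version ci/latest --image-repository gcr.io/k8s-staging-ci-images
kubeadm config images pull --kubernetes-version ci/latest --image-repository gcr.io/k8s-staging-ci-images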
Instead of a static version, let's explore the latest CI image version available.
Signed-off-by: Douglas Schilling Landgraf dlandgra@redhat.com