Closed · @dkasanic closed this 2 weeks ago
Hi @dkasanic,
Thanks for the issue and PR. Could you please give more information about the kubespray or ansible config needed to reproduce the error?
It would be very helpful :-)
Thank you :-)
Hello @yankay,
In my environment, I install kubespray as a Galaxy collection and then import cluster.yml.
To reproduce the error, I believe the following snippet of tasks from my playbook is enough:
```yaml
- name: add crio runtime vars
  set_fact:
    container_manager: crio
    download_container: false
    skip_downloads: false
    etcd_deployment_type: host

- name: Deploy cluster via Kubespray
  any_errors_fatal: true
  ansible.builtin.import_playbook: kubernetes_sigs.kubespray.cluster
```
It seems that in this case the `skip_downloads: true` definition in the `meta/main.yml` file does not kick in properly, and the `download` role starts downloading items. That should not happen at this stage of the run; it should only happen after the `kubespray-defaults` role has been executed and the `download` role is called from the `cluster.yml` playbook.
As soon as I removed the `skip_downloads: false` definition from the `set_fact` task, the deployment started working correctly.
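For reference, a minimal sketch of the working variant (the same `set_fact` task as above, just without the `skip_downloads` override; everything else unchanged):

```yaml
# Workaround sketch: do not override skip_downloads in set_fact, so the
# value coming from kubespray itself is used instead.
- name: add crio runtime vars
  set_fact:
    container_manager: crio
    download_container: false
    etcd_deployment_type: host
```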
The problem is in the following `meta/main.yml` file:

```yaml
dependencies:
  - role: download
    skip_downloads: true
    tags:
      - facts
```
As per the Ansible docs, it should be defined as:

```yaml
dependencies:
  - role: download
    vars:
      skip_downloads: true
    tags:
      - facts
```
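A quick way to see what the role actually receives (a sketch, not part of kubespray: a temporary task you could place at the top of the `download` role's task list) is to print the variable when the role is pulled in as a dependency of `kubespray-defaults`:

```yaml
# Temporary sanity check (not part of kubespray): show the skip_downloads
# value the download role actually sees when it runs as a dependency.
- name: Show effective skip_downloads
  ansible.builtin.debug:
    var: skip_downloads
```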
Does #10626 fix your problem? (since `download` is no longer pulled in by `kubespray-defaults`)
Is the problem still present on master? I believe the PR linked in the previous message might have fixed the issue.
> The `download` role got called as a dependency from the `kubespray-defaults` role, but the `skip_downloads: true` var defined in `meta/main.yml` was not applied. That results in downloading items early on, when the `/etc/kubernetes` directory does not exist yet.

(Since this is no longer true)
/triage needs-information
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Environment:
- OS (`printf "$(uname -srm)\n$(cat /etc/os-release)\n"`): Linux 5.15.0-25-generic x86_64, PRETTY_NAME="Ubuntu 22.04 LTS"
- Version of Ansible (`ansible --version`): ansible==8.6.1, ansible-core==2.15.6
- Version of Python (`python --version`): 3.10.12
- Kubespray version (commit) (`git rev-parse --short HEAD`): 3acacc615
- Network plugin used: calico
- Full inventory with variables (`ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"`): skip_downloads: false
Output of ansible run:
Anything else do we need to know: The `download` role got called as a dependency from the `kubespray-defaults` role, but the `skip_downloads: true` var defined in `meta/main.yml` was not applied. That results in downloading items early on, when the `/etc/kubernetes` directory does not exist yet.