Closed: pasqualet closed this issue 3 years ago.
@fejta-bot:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@fejta-bot:
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
@fejta-bot:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Just to give a hint for anyone experiencing this, as it also occurred to me: this seems to happen when you have Docker installed and the 172.18.8.0/24 subnet is already occupied by a Docker network bridge.
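A quick way to confirm the collision (a hedged sketch using standard Docker and iproute2 commands; your network names will differ):

```sh
# List each Docker network with its subnet(s); look for one overlapping 172.18.8.0/24.
docker network ls --format '{{.Name}}' \
  | xargs -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'

# Or check the kernel routing table for a route already covering that range.
ip route | grep 172.18
```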
Bug description:
Vagrant fails to install etcd when etcd_deployment_type=host.
Environment:
Cloud provider or hardware configuration: Vagrant
OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"):
Kubespray version (commit) (git rev-parse --short HEAD): a7b8708d
Full inventory with variables (ansible -i inventory/sample/inventory.ini all -m debug -a "var=hostvars[inventory_hostname]"): Default inventory with the following changes:
Output of ansible run:
Anything else we need to know:
Logs from the first instance (k8s-1):
Workaround:
I can make it work by changing the Vagrant subnet:
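A minimal sketch of that kind of change, assuming the stock Kubespray Vagrantfile, which defaults $subnet to "172.18.8" and loads overrides from vagrant/config.rb when that file exists; 172.19.8 here is just an illustrative non-conflicting choice:

```sh
# Hypothetical override: move Kubespray's Vagrant private network off
# 172.18.8.0/24 so it no longer collides with the existing Docker bridge.
mkdir -p vagrant
cat > vagrant/config.rb <<'EOF'
$subnet = "172.19.8"
EOF
vagrant up
```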