EmilienM opened 8 months ago
I have been dreaming about a way to reduce the time it takes to set up the devstack. When working locally I create the devstack once and then reuse it for multiple tests to reduce the waiting time, but the CI does it from scratch every time.
I found out that at some point we used to have ready-made DevStack images, and I imagine the idea was to make the setup faster. Unfortunately, I have not had much luck snapshotting a DevStack and then starting a new one from the snapshot... If we could find a way to do that, it would be awesome!
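For illustration, snapshot-and-restore of a CI node could look roughly like the sketch below, assuming the DevStack VM runs on GCE (the project already uses gcloud elsewhere in CI); the instance and image names are hypothetical:

```shell
# Stop the instance so the boot disk is in a consistent state.
# "devstack-vm" and "devstack-snapshot" are hypothetical names.
gcloud compute instances stop devstack-vm --zone=us-central1-a

# Create a reusable image from the instance's boot disk.
gcloud compute images create devstack-snapshot \
  --source-disk=devstack-vm \
  --source-disk-zone=us-central1-a

# Later, boot a fresh CI node from the pre-built image instead of
# running stack.sh from scratch.
gcloud compute instances create devstack-ci-node \
  --zone=us-central1-a \
  --image=devstack-snapshot
```

One likely complication, which may explain why snapshotting has not worked so far: DevStack bakes host-specific state (IP addresses in service configs, the database, RabbitMQ) into the VM, so a node booted from the image would probably need a re-configuration pass before the services come up healthy.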
I've added it to the list. @mdbooth had the same idea!
We could think about converging the CPO and CAPO CIs. Currently they use totally different ways of setting everything up.
I'd also love to set up DevStack using upstream Ansible playbooks: https://opendev.org/openstack/devstack/src/branch/master/playbooks/devstack.yaml
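For reference, invoking that upstream playbook outside of Zuul would look roughly like this. This is a sketch: the inventory host is an assumption, and in practice the playbook is written for upstream CI and expects Zuul-style variables and roles to be available:

```shell
# Fetch DevStack, which ships the playbook referenced above.
git clone https://opendev.org/openstack/devstack
cd devstack

# Run the upstream playbook against a target host defined in the
# inventory ("devstack-host" here is a placeholder).
ansible-playbook -i inventory playbooks/devstack.yaml --limit devstack-host
```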
Yes! Definitely, I was about to reach out to you on that topic :)
@EmilienM I think a bunch of folks will buy you many beers if you fix this.
In addition to this great list, the CI also sometimes fails to provision nodes due to insufficient resources on the hypervisor. When that happens, the nodes end up in an ERROR state and CAPO stops reconciling the machine.
Since we apparently can't make the hypervisor bigger, we should find a way to make the machine-heavy tests mutually exclusive. Alternatively, we could look into MachineHealthChecks so that failed nodes can be reprovisioned.
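As a sketch of the MachineHealthCheck idea: Cluster API can delete and recreate machines whose nodes stay unhealthy, instead of leaving ERROR-state machines unreconciled. All names, labels, and timeouts below are assumptions for illustration, not values from this CI:

```shell
# Hypothetical MachineHealthCheck for e2e worker machines.
kubectl apply -f - <<'EOF'
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: e2e-worker-unhealthy
  namespace: default
spec:
  # "e2e-cluster" and the deployment label are placeholder names.
  clusterName: e2e-cluster
  selector:
    matchLabels:
      cluster.x-k8s.io/deployment-name: e2e-workers
  unhealthyConditions:
    # Remediate machines whose node is NotReady or unreachable
    # for longer than 5 minutes.
    - type: Ready
      status: "False"
      timeout: 300s
    - type: Ready
      status: Unknown
      timeout: 300s
EOF
```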
I've fixed the gcloud errors with https://github.com/kubernetes-sigs/cluster-api-provider-openstack/pull/1804/commits/e71d3aeac9ddfe8c3068343fa4ff9341aaf3849b, which I might split in its own separate PR.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
This lists the items that we want to fix or improve in our CI.
Bump Ubuntu & OpenStack versions
Improve Logging
There are a lot of red herrings that can confuse developers when reading CI logs.
Simplify how devstack is configured and run to deploy OpenStack
Right now we use custom shell scripts to configure and run DevStack, which we have to maintain across OpenStack versions, etc. It would be nice if we could just consume what the OpenStack community does in upstream CI, to reduce the maintenance cost.
Artifact gathering