jneo8 closed this issue 2 weeks ago.
@jneo8 do you have any record of the juju debug-log or output of juju status? That would be helpful here. It seems that the workload version was "zed" for nova-compute somehow, instead of something like '29.0.1', which would be normal. Did the workload version change during the upgrade? I'm confused, because this looks more like a bug in the charm, not in COU.
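In case it helps, a quick way to check what the charm is reporting (a rough sketch, assuming the application is named nova-compute):

```bash
# Show the workload-version each nova-compute unit is reporting to Juju.
juju status nova-compute --format=yaml | grep workload-version
```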
No logs, sorry.
The reason this failed is that the nova-compute upgrade failed on the openstack-upgrade action, so the charm is on the new channel but the workload is still the old one.
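For context, once the underlying problem on the machine is fixed, recovery at that point is roughly re-running the charm's action by hand (a sketch; the unit name is just an example):

```bash
# Re-run the action-managed upgrade on the unit that failed (Juju 3.x syntax).
juju run nova-compute/0 openstack-upgrade
# On Juju 2.9 the equivalent is:
# juju run-action --wait nova-compute/0 openstack-upgrade
```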
This makes sense, but I don't understand how that relates to the error from COU. :thinking: I guess we need to try to reproduce the error and go from there. :slightly_smiling_face:
It's about user experience. If you encounter an error on one of the sub-steps, COU won't be able to restart because it can't detect the current state, so the user has to finish the upgrade manually to continue.
A significant architectural change is needed. Since we won't be implementing it, I've decided to close this ticket.
In my case, the hypervisor upgrade failed due to a known issue, #494. If I manually fix the issue on the machine and restart COU, I get an error message like:
This is due to the way COU verifies the cloud and generates the upgrade plan. In this case, I, as a user, need to manually run all the upgrade steps on every machine, which is not user friendly.
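Roughly, finishing the upgrade by hand means repeating the per-unit steps on every remaining machine, something like the sketch below (unit names are examples; this assumes action-managed upgrades are enabled on the charm):

```bash
# Pause, upgrade and resume each remaining nova-compute unit, one at a time.
for unit in nova-compute/0 nova-compute/1 nova-compute/2; do
    juju run "$unit" pause
    juju run "$unit" openstack-upgrade
    juju run "$unit" resume
done
```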