vexxhost / atmosphere

Simple & easy private cloud platform featuring VMs, Kubernetes & bare-metal

Quickstart Multi-node fails on nova (nova-db-sync hangs) #1452

Closed · schnidrig closed this issue 4 months ago

schnidrig commented 4 months ago

Hello

I'm new to atmosphere and I'm struggling to get it up and running. I tried the 2023.1 branch and the tags v1.11.3 and v1.11.2; none of them ran through.

Following this guide: https://vexxhost.github.io/atmosphere/quick-start.html (multi-node)

My setup:

All versions tested fail at the same step:

Error message from ansible:

TASK [vexxhost.atmosphere.nova : Deploy Helm chart] ****************************
fatal: [ctl1]: FAILED! => {"changed": false, "command": "/usr/bin/helm upgrade -i --reset-values --create-namespace -f=/tmp/tmpcbj8fikl.yml nova /usr/local/src/nova", "msg": "Failure when executing Helm command. Exited 1.\nstdout: Release \"nova\" does not exist. Installing it now.\n\nstderr: Error: failed post-install: timed out waiting for the condition\n", "stderr": "Error: failed post-install: timed out waiting for the condition\n", "stderr_lines": ["Error: failed post-install: timed out waiting for the condition"], "stdout": "Release \"nova\" does not exist. Installing it now.\n", "stdout_lines": ["Release \"nova\" does not exist. Installing it now."]}
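
When the chart install times out on a post-install hook like this, the stuck hook can be identified directly on the cluster. A minimal sketch (the application=nova label and the nova-db-sync container name are taken from the cluster output shown further down; pod names will differ per deployment):

# List the Helm hook jobs created by the nova chart and see which ones have not completed
kubectl -n openstack get jobs -l application=nova

# Describe the stuck job and follow the logs of its pod
kubectl -n openstack describe job nova-db-sync
kubectl -n openstack logs -f job/nova-db-sync -c nova-db-sync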

Checking on ctl1, one can see that the nova-db-sync job is the culprit. The job just hangs and never terminates.

This is the log from the pod:

root@ctl1:~# kubectl -n openstack logs nova-db-sync-kl9pn -c nova-db-sync
++ grep -Eo '[0-9]+[.][0-9]+[.][0-9]+'
++ nova-manage --version
+ NOVA_VERSION=26.2.3
+ nova-manage api_db sync
Modules with known eventlet monkey patching issues were imported prior to eventlet monkey patching: urllib3. This warning can usually be ignored if the caller is only importing and not executing nova code.

Checking in the DB, the nova databases exist, but they are empty:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| barbican           |
| cinder             |
| glance             |
| information_schema |
| keycloak           |
| keystone           |
| mysql              |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
| placement          |
| staffeln           |
| sys                |
+--------------------+
14 rows in set (0.01 sec)
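
A related check that helps here is to see whether the sync job even has an open connection to the database. A quick sketch from the same mysql session (the 'nova' user name is an assumption based on the usual openstack-helm credentials, so adjust as needed):

mysql> SHOW PROCESSLIST;
mysql> -- narrow it down to the nova DB user (user name assumed)
mysql> SELECT user, host, command, time, state FROM information_schema.processlist WHERE user LIKE 'nova%';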

mnaser commented 4 months ago

@schnidrig that's strange. In multi-node mode, the sync is actually running against the Ceph cluster that lives within the multi-node cloud itself, which means that if you have an underlying Ceph cluster, it's potentially doing 9 writes for every write (1 write into Ceph => 3 Atmosphere OSDs => each of those written to 3 underlying cloud OSDs), which can be pretty slow.

This is why we started having AIO use the local-path provisioner instead, to speed things up. If you leave that job running for some time, does it eventually progress, or does it stay fully stuck?
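
For anyone following along, a quick way to check what is actually backing the database volumes (in-cluster Ceph RBD vs. a local-path provisioner) is to look at the storage classes and the Percona PVCs; the openstack namespace is assumed from the service names elsewhere in this thread:

# Show the storage classes and which provisioner each one uses
kubectl get storageclass

# Show the PVCs backing the Percona XtraDB cluster and the storage class they were bound with
kubectl -n openstack get pvc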

schnidrig commented 4 months ago

I let the job run. It has been 14 hours now and there is no change.

nova-db-sync-kl9pn                                      1/1     Running     0          14h

I had already changed the Ceph pool from HDD to SSD; otherwise Keycloak would time out as well. With SSD, Keycloak is OK.

I also tried running nova-manage api_db version, and even that hung. So I'm not sure the problem is really the speed of the underlying virtual disk.
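
For reference, that manual check can be run inside the hook pod itself; a sketch using the pod and container names from the output above (they will differ per deployment):

# Run the version check interactively inside the running db-sync container
kubectl -n openstack exec -it nova-db-sync-kl9pn -c nova-db-sync -- nova-manage api_db version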

schnidrig commented 4 months ago

After 14 hours, the DB is still empty:

mysql> use nova_api;
Database changed
mysql> show tables;
Empty set (0.01 sec)

schnidrig commented 4 months ago

@mnaser

I tried replacing the nova image. I tested with the following images:

With them, the nova-db-sync job hung as well.

But with the image docker.io/openstackhelm/nova:xena-ubuntu_focal, the nova-db-sync job finally ran through. It took 4 minutes to complete.

root@ctl1:~# kubectl -n openstack get jobs -l application=nova
NAME                COMPLETIONS   DURATION   AGE
nova-bootstrap      0/1           13m        13m
nova-cell-setup     0/1           13m        13m
nova-db-init        1/1           19s        13m
nova-db-sync        1/1           4m         13m
nova-ks-endpoints   1/1           78s        8m13s
nova-ks-service     1/1           22s        8m35s
nova-ks-user        1/1           5m4s       6m55s
nova-rabbit-init    1/1           27s        9m2s

That is of course not a viable workaround, but it demonstrates that the problem is not the slow disk.
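
For anyone who wants to reproduce that test, the image swap can be expressed as an override on the same Helm command the Ansible task runs (see the error output above). This is only a diagnostic sketch: the images.tags.nova_db_sync key is an assumption based on the usual openstack-helm values layout, so verify the exact key in /usr/local/src/nova/values.yaml first, and <values.yml> stands in for the values file the playbook normally generates.

# Re-run the install with the db-sync image pinned (key name assumed, values file path is a placeholder)
helm upgrade -i --create-namespace -f <values.yml> nova /usr/local/src/nova \
  --set images.tags.nova_db_sync=docker.io/openstackhelm/nova:xena-ubuntu_focal

If Atmosphere exposes a nova_helm_values override for this role (worth checking in the role defaults), the same images.tags override could be placed there instead of on the command line.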

mnaser commented 4 months ago

That's so strange. I've never really seen this behaviour.

Can you run an strace on the process by any chance to see what it's doing?
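
Since the container processes are visible on the host, one way to do this is directly from the node running the pod (ctl1 here); <PID> is whatever ps reports for the hung nova-manage process:

root@ctl1:~# apt-get install -y strace   # if not already installed
root@ctl1:~# ps -ef | grep 'nova-manage api_db sync'
root@ctl1:~# strace -f -p <PID>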

schnidrig commented 4 months ago

I found many lines like the following in the CoreDNS logs:

[ERROR] plugin/errors: 2 percona-xtradb-haproxy.openstack.svc.cluster.local.zh.switchengines.ch. A: read udp 10.0.1.189:43412->8.8.8.8:53: i/o timeout
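
The trailing .zh.switchengines.ch on that name suggests the cluster-internal lookup was expanded with the node's DNS search domain and then forwarded to an upstream resolver (8.8.8.8) that never answered. A quick way to confirm from a throwaway pod (image and pod name here are just examples; the CoreDNS ConfigMap location assumes the usual kube-system/coredns deployment):

# Resolve the in-cluster service name from inside the cluster
kubectl -n openstack run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup percona-xtradb-haproxy.openstack.svc.cluster.local

# Check which upstream resolvers CoreDNS forwards to
kubectl -n kube-system get configmap coredns -o yaml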

So I replaced the DNS, and now the nova-db-sync job succeeds. It's super weird that nova behaves so differently from the other services.

But I think I can close the issue.

Thanks for the help.