Closed: li-liwen closed this issue 11 months ago
Thanks for opening your first issue here! Be sure to follow the issue template!
Can your ACS management servers ping your Ceph monitors? If I recall correctly, when you first add a new Ceph cluster, the ACS management servers are involved directly; after that initial primary storage provisioning, the ACS agent on the KVM host creates all the images. So, if your management servers can't reach your Ceph mons, you might need to temporarily route the networks to establish the new primary storage.
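For example, a temporary route from the management server to the Ceph public network might look like this sketch (the subnet, gateway, and monitor address are placeholders; adjust to your environment):
# Hypothetical example: route the ACS management server to the Ceph
# public network via a gateway it can already reach
sudo ip route add 192.168.13.0/24 via 192.168.10.1
# then check that a monitor answers on both messenger ports
nc -zv 192.168.13.251 6789   # msgr v1
nc -zv 192.168.13.251 3300   # msgr v2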
Thanks for the quick reply! However, the management server actually does have connectivity to the Ceph cluster. I have tried installing the ceph-common package, ceph.conf, and keyring on the management server as well, but still no progress. Here is the output from the management server diagnosing the connection (192.168.13.251 is one Ceph monitor server):
username@cloudstack:~$ ping 192.168.13.251
PING 192.168.13.251 (192.168.13.251) 56(84) bytes of data.
64 bytes from 192.168.13.251: icmp_seq=1 ttl=63 time=0.255 ms
64 bytes from 192.168.13.251: icmp_seq=2 ttl=63 time=0.239 ms
^C
--- 192.168.13.251 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1005ms
rtt min/avg/max/mdev = 0.239/0.247/0.255/0.008 ms
username@cloudstack:~$ telnet 192.168.13.251 6789
Trying 192.168.13.251...
Connected to 192.168.13.251.
Escape character is '^]'.
??H??v027???
^]
telnet> Connection closed.
username@cloudstack:~$ telnet 192.168.13.251 3300
Trying 192.168.13.251...
Connected to 192.168.13.251.
Escape character is '^]'.
ceph v2
^]
telnet> Connection closed.
It turns out that I didn't install the RBD storage driver for libvirt on the KVM hosts. I was able to resolve the problem by installing the driver:
sudo apt-get install libvirt-daemon-driver-storage-rbd
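If anyone else hits this: after installing, you can confirm that libvirt now reports the rbd pool type. A quick check, assuming a systemd-based host (pool-capabilities needs libvirt >= 5.2):
# restart libvirtd so the newly installed storage driver is loaded
sudo systemctl restart libvirtd
# look for "rbd" among the supported pool types
virsh pool-capabilities | grep -i rbd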
Closing this issue...
ISSUE TYPE
COMPONENT NAME
CLOUDSTACK VERSION
CONFIGURATION
Advanced networking
OS / ENVIRONMENT
I am using Ubuntu Server 22.04 as the KVM hypervisor (libvirt 8.0.0, QEMU 6.2.0) with Ceph 18.2.1 installed on all hypervisors for a hyper-converged setup. The Ceph cluster was bootstrapped with cephadm and contains three separate monitor nodes and five KVM+Ceph nodes. Of the five KVM+Ceph nodes, two also function as monitor nodes.
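Monitor placement on a cephadm-managed cluster can be confirmed with the standard commands below (shown only as a sketch):
# list monitor daemons and the hosts they run on
sudo ceph orch ps --daemon-type mon
# dump the monitor map, including each mon's v1/v2 addresses
sudo ceph mon dump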
SUMMARY
Cannot add a Ceph cluster as primary storage through the web UI. I am also running a Proxmox cluster and am able to add the same Ceph cluster in Proxmox with the same user and keys; there it works as expected.
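For context, a typical way to prepare an RBD pool and client key for CloudStack looks like the sketch below; the pool and client names are assumptions, not taken from this setup:
# create an RBD pool and a restricted client key for CloudStack
# ("cloudstack" is a placeholder name)
sudo ceph osd pool create cloudstack
sudo rbd pool init cloudstack
sudo ceph auth get-or-create client.cloudstack \
    mon 'profile rbd' osd 'profile rbd pool=cloudstack'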
STEPS TO REPRODUCE
Additionally, I tried propagating the ceph.conf file and keyrings to each host (even with admin keys), but it did not work; the propagation step looked roughly like the sketch below.
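(Hostnames here are placeholders:)
# copy the cluster config and keyring to every KVM host
for host in kvm1 kvm2 kvm3 kvm4 kvm5; do
    scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring "$host":/etc/ceph/
done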
I have read thread #6463 and tried the following Ceph configuration commands, but still no progress:
The KVM hosts can use the Ceph cluster directly when provided the admin keyring. I used the following command:
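(For example, something along these lines; the pool name is a placeholder, not the exact command from my setup:)
# example direct RBD test from a KVM host with the admin keyring:
# list images in a pool
sudo rbd --id admin ls cloudstack
# create a 1 GiB test image through qemu's rbd driver
sudo qemu-img create -f raw rbd:cloudstack/testvol:id=admin 1G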
EXPECTED RESULTS
ACTUAL RESULTS
This is the CloudStack troubleshooting log from right after I click the OK button to add the storage: