CloudStack has two main parts, a management server and an agent. The agent runs on the hypervisor host, and I wouldn't expect it to work in a container. The container would need to support virtualization, which is doable. The hard part would be networking. When CloudStack creates a VM, it picks an IP address and a host for it. The network would somehow need to know how to route traffic for that IP address to the right host. I'm not aware of a mechanism that would allow CloudStack to configure your Kubernetes network whenever it creates or moves a VM.
If you're only interested in running the management server in Kubernetes, that should be okay. It provides the UI and communicates with the other servers. You would need one or more hypervisor hosts outside your cluster so you can create and manage VMs.
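For reference, running just the management server as a Deployment could look something like this minimal sketch. The image name is an assumption for illustration (as far as I know there is no official management-server image, so you'd likely build your own); 8080 is the management server's default UI/API port:

```yaml
# Minimal sketch: CloudStack management server as a single-replica Deployment.
# ASSUMPTION: "cloudstack-management:4.18" is a hypothetical, self-built image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudstack-management
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudstack-management
  template:
    metadata:
      labels:
        app: cloudstack-management
    spec:
      containers:
        - name: management
          image: cloudstack-management:4.18   # hypothetical, build your own
          ports:
            - containerPort: 8080             # default UI / API port
---
apiVersion: v1
kind: Service
metadata:
  name: cloudstack-management
spec:
  selector:
    app: cloudstack-management
  ports:
    - port: 8080
      targetPort: 8080
```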
In my case I have two hosts running xcp-ng on essentially identical hardware, so I think we're in sync: I'm only looking to run the management server in a pod, which provides the GUI and interfaces with the two xcp-ng servers.
I'm not sure how CloudStack handles the networking side. Since everything runs in xcp-ng and CloudStack is only interfacing with it, I would have expected CloudStack to make an API call to xcp-ng when setting up a new VM and configuring its IP. If you're asking whether the CloudStack pod can reach the xcp-ng servers, the answer is yes; that is already set up and can be taken as a given in this use case.
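Since reachability from the pod is the key given here, a quick sanity check can confirm it from inside the cluster. A minimal sketch (the host names in the commented example are hypothetical; 443 is assumed as the XAPI HTTPS port):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (host names are hypothetical placeholders for your xcp-ng servers):
# for host in ("xcp-ng-1.example.com", "xcp-ng-2.example.com"):
#     print(host, can_reach(host, 443))
```

Running this from a debug pod rules out NetworkPolicy or routing problems before blaming CloudStack itself.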
It sounds like it should be doable, so I must just need to keep working on configuring my xcp-ng hosts. Work in progress: https://github.com/lknite/cloudstack
I'm going to keep this open a bit longer to ask about a couple of issues I'm having trouble getting past with this setup. I'm going to erase everything, redeploy, and try configuring the xcp-ng hosts again from scratch.
I'm seeing:
I see another ticket resolved this by installing the CloudStack XenServer Support Package (CSP). Since I'm running the latest xcp-ng, I'm assuming CSP functionality is already built in; it looks like I need to enable bridge mode:
According to this article, to enable Linux bridge mode I'd need to move VMs off a server before enabling it and move them back afterward.
According to xcp-ng:
"loss of some functionality", doing some research to see if this is still a requirement. Can you tell me if bridge must be enabled still?
I guess this is more of a cloudstack question. I'll go ahead and close out this ticket and repost over there. Thank you!
Goal
To switch to Apache CloudStack as my main management interface for my xcp-ng servers, and to use the Cluster API CloudStack provider to deploy Kubernetes clusters in my xcp-ng environment.
Steps taken
Status
Unable to fully succeed at configuring cloudstack with my xcp-ng servers. Granted, I am new and may be configuring things incorrectly.
Question/discussion
Is it even possible to run CloudStack in a pod within a Kubernetes cluster, or will this never work due to the low-level network access CloudStack needs?
If I recall correctly, while trying to get things working I gave the pod unrestricted access and enabled the bridge module on xcp-ng, which broke the xcp-ng installation. I think the bridge module was the problem, so as long as Open vSwitch is OK to use, the CloudStack-in-a-pod idea should work.
(I'm asking here because I figure the folks involved in this provider may have tried this configuration.)
Workaround
I guess I could stand up a VM in xcp-ng and make that the CloudStack server. Once it's configured and working, the CloudStack web interface would be all that's needed from that point on. ... (But I try to stick to GitOps as much as possible, so I'd prefer to use a container and pass in any needed configuration via env vars.)
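On the GitOps point, the env-var approach can be sketched as a tiny entrypoint helper that renders the management server's db.properties from container environment variables. The env var names here are made up for illustration; the db.cloud.* keys are, to the best of my knowledge, the ones the management server reads:

```python
import os

def render_db_properties(env: dict) -> str:
    """Render a db.properties fragment from env vars.

    ASSUMPTION: DB_HOST/DB_PORT/DB_USER/DB_PASSWORD are illustrative names
    you would define yourself in the pod spec, not anything CloudStack ships.
    """
    mapping = {
        "db.cloud.host": env.get("DB_HOST", "localhost"),
        "db.cloud.port": env.get("DB_PORT", "3306"),
        "db.cloud.username": env.get("DB_USER", "cloud"),
        "db.cloud.password": env.get("DB_PASSWORD", ""),
    }
    return "\n".join(f"{k}={v}" for k, v in mapping.items())

if __name__ == "__main__":
    # A real entrypoint would write this to the management server's
    # db.properties path before starting the service.
    print(render_db_properties(dict(os.environ)))
```

This keeps the secrets and endpoints in the Deployment manifest (or a Secret) rather than baked into the image, which fits the GitOps goal.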