gyptazy / ProxLB

ProxLB - (Re)Balance VM Workloads Across Nodes in Proxmox Clusters. A Load Balancer for Proxmox - and more!
https://proxlb.de
GNU General Public License v3.0

Multiple API hosts #60

Closed JonahMMay closed 2 weeks ago

JonahMMay commented 3 weeks ago

I'm migrating workloads from a vSphere/vCenter cluster over to Proxmox and trying to make functionality as similar as possible. I deployed a VM that is running ProxLB but it seems like I can only specify a single API host. Is it possible to specify multiple API hosts in case one is offline?

I know that if I create multiple VMs or run ProxLB on the hosts themselves I can talk to multiple hosts; however, I'd like a central point of management so I don't have to synchronize config changes between multiple ProxLB instances.

gyptazy commented 3 weeks ago

Hey @JonahMMay,

from request to solution in less than 60 minutes ;)

thanks for your input! I think most users will put a load balancer in front of the nodes, with the selection done by nginx, HAProxy, or a similar service. However, I can understand that running a dedicated service isn't technically needed and might bring further complexity with it. Therefore, it sounds reasonable to me; let's integrate it. The changes are small, and it will be part of release 1.0.3.
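For comparison, fronting the Proxmox API with HAProxy as mentioned above might look roughly like this (a minimal sketch, not an official ProxLB or Proxmox config; the node addresses are placeholders):

```haproxy
# TCP passthrough to the Proxmox API on port 8006,
# so TLS terminates on the nodes themselves.
frontend proxmox_api
    bind *:8006
    mode tcp
    default_backend proxmox_nodes

backend proxmox_nodes
    mode tcp
    balance roundrobin
    # Drop nodes that stop accepting TCP connections
    option tcp-check
    server node1 10.10.10.211:8006 check
    server node2 10.10.10.212:8006 check
```

ProxLB would then point its single api_host at the HAProxy address. The feature discussed in this issue removes the need for such an extra service.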

Tests

Config:

# grep host /etc/proxlb/proxlb.conf 
api_host: 8.8.8.8,10.10.10.211,10.10.10.212

ProxLB dryrun mode:

<6> ProxLB: Info: [logger]: Logger verbosity got updated to: INFO.
<4> ProxLB: Warning: [api-connection]: API connection does not verify SSL certificate.
<6> ProxLB: Info: [api-connect-get-host]: Multiple hosts for API connection are given. Testing hosts for further usage.
<6> ProxLB: Info: [api-connect-get-host]: Testing host 8.8.8.8 on port tcp/8006.
<6> ProxLB: Info: [api-connect-test-host]: Timeout for host 8.8.8.8 is set to 2 seconds.
<2> ProxLB: Error: [api-connect-test-host]: Host 8.8.8.8 is unreachable on port tcp/8006.
<6> ProxLB: Info: [api-connect-get-host]: Testing host 10.10.10.211 on port tcp/8006.
<6> ProxLB: Info: [api-connect-test-host]: Timeout for host 10.10.10.211 is set to 2 seconds.
<6> ProxLB: Info: [api-connect-test-host]: Host 10.10.10.211 is reachable on port tcp/8006.
<6> ProxLB: Info: [api-connection]: API connection succeeded to host: 10.10.10.211.

If you like, you can already give it a try from the feature branch (#55). Take care: with 1.0.3 a new config schema will be introduced (you can already find the new config in head).

The changes can be found here: https://github.com/gyptazy/ProxLB/blob/604eeb5716cf713460c42e49c174625dfee5f51a/proxlb#L259-L345

To make use of this feature, simply define your hosts comma-separated. ProxLB will then test and validate the given hosts for basic connectivity on the Proxmox API port before proceeding. So, if there's a typo or a node is offline, ProxLB will detect it and try the next host.
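The connectivity check described above can be sketched in plain Python (a simplified illustration of the idea, not ProxLB's actual implementation; the function name is made up):

```python
import socket

def get_reachable_host(api_hosts: str, port: int = 8006, timeout: int = 2):
    """Return the first host from a comma-separated list that accepts
    TCP connections on the given port, or None if none are reachable."""
    for host in (h.strip() for h in api_hosts.split(",")):
        try:
            # Attempt a plain TCP connect; close it immediately on success.
            with socket.create_connection((host, port), timeout=timeout):
                return host
        except OSError:
            # Unreachable or timed out: fall through to the next candidate.
            continue
    return None
```

With the config from the test above, 8.8.8.8 would fail the connect and 10.10.10.211 would be returned, matching the log output.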

Hope it helps!

Cheers, gyptazy

JonahMMay commented 3 weeks ago

Thanks for introducing this so quickly! I'm currently running 1.0.2 installed from the .deb package. What directory do I need to place the branch files in to update the service?

JonahMMay commented 3 weeks ago

Nevermind. Decided to disable the system service and flip it over to docker so I can just use git commands to change branches now and in the future.

gyptazy commented 3 weeks ago

Thanks for introducing this so quickly! I'm currently running 1.0.2 installed from the .deb package. What directory do I need to place the branch files in to update the service?

Just to have it complete here.

Note: Config and artifact may differ depending on the used branch

JonahMMay commented 3 weeks ago

When I try to start the docker container I get an error: <2> ProxLB: Error: [config]: Could not find the required options in config file.

Here's the config I created for the new branch:

[proxmox]
api_host: labhost03.jonah.home,labhost04.jonah.home
api_user: root@pam
api_pass: password_here
verify_ssl: 0
[vm_balancing]
enable: 1
method: memory
mode: used
mode_option: byte
type: vm
balanciness: 10
parallel_migrations: 1
ignore_nodes: ''
ignore_vms: GLaDOS-01
master_only: 0
[storage_balancing]
enable: 0
balanciness: 10
parallel_migrations: 1
[update_service]
enable: 0
[api]
enable: 0
[service]
daemon: 1
schedule: 1
log_verbosity: INFO
config_version: 3

gyptazy commented 3 weeks ago

The config looks good to me. What image version of ProxLB are you using? The one on my container registry only ships stable versions (1.0.2). From head, you need to build the image yourself.

JonahMMay commented 3 weeks ago

I have the file created at /etc/proxlb/proxlb.conf and I'm calling docker run -it --rm -v $(pwd)/proxlb.conf:/etc/proxlb/proxlb.conf proxlb

No worries, I appreciate the quick responses, especially given your time zone. Right now I'm just flipping my home lab in preparation for a vendor my company uses launching Proxmox support to the public later this quarter. I have their beta and wanted to tinker with it. Figured while I was here I'd replace my vCenter cluster as best I could and write some blog articles/tutorials about it, so that's all this is for right now. Down the road my company will probably switch a number of our systems to Proxmox, and we'll want to use this on those clusters.

If there's anything else in this branch you'd like me to test let me know. I just can't really do the storage rebalancing, I'm using ZFS over iSCSI with TrueNAS for all my datastores.

gyptazy commented 3 weeks ago

Which image version do you use? Did you create a new image on your own from head?

JonahMMay commented 3 weeks ago

I did the following commands:

git clone https://github.com/gyptazy/ProxLB.git
cd ProxLB
git checkout feature/51-storage-balancing-feature
docker build -t proxlb .

Calling the python script manually from the terminal using python3 proxlb in the directory seems to work fine, so it seems to be something related to the docker container specifically.

gyptazy commented 3 weeks ago

That should work… is there anything you need help with right now?

JonahMMay commented 3 weeks ago

Nope, I think I'm all set for the rest of the day. I'll try to trace down the Docker issue or swap back to a system service tomorrow and comment here if I figure it out. Thanks again, and enjoy the rest of your night!