akash-network / support

Akash Support and Issue Tracking

Issue with Akash Provider: Misconfigured Worker Node (wrong `gpu` reporting - extremely high value) #137

Open sfxworks opened 8 months ago

sfxworks commented 8 months ago

Problem

User quantomworks experienced an issue where cloudmos was not recognizing their node despite it being active and running a test deployment. Further discussion with SGC | DCNorse identified a problematic configuration: the provider status output reported an anomalous value for the GPU count:

"gpu":18446744073709551615

This number, 18446744073709551615, is 2^64 - 1, i.e. the representation of -1 when interpreted as an unsigned 64-bit integer. The user asked about this unexpected value and what might cause such an issue.
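
As a quick sanity check, the wraparound can be reproduced in a shell, since bash's `printf` treats a `%u` argument as an unsigned 64-bit integer on 64-bit systems:

```bash
# -1 re-read as an unsigned 64-bit integer wraps around to 2^64 - 1
printf '%u\n' -1
# prints: 18446744073709551615
```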

Diagnosis

SGC | DCNorse suggested that when a GPU goes offline while the provider/worker node is running, the reported number of GPUs can become incorrect. They proposed resolutions such as shutting down some workloads or rebooting the node.
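
One way to confirm whether a node has stopped advertising its GPUs is to compare the capacity and allocatable counts the kubelet reports for the `nvidia.com/gpu` resource. A minimal sketch, assuming the NVIDIA device plugin is in use and `<node-name>` is a placeholder:

```bash
# Per-node NVIDIA GPU capacity vs. allocatable as seen by Kubernetes
kubectl get nodes -o custom-columns='NODE:.metadata.name,GPU_CAP:.status.capacity.nvidia\.com/gpu,GPU_ALLOC:.status.allocatable.nvidia\.com/gpu'

# Inspect a suspect node in more detail (<node-name> is a placeholder)
kubectl describe node <node-name> | grep -iA7 -E 'capacity|allocatable'
```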

Subsequent Findings

Solution

The issue was resolved by correctly labeling the nodes. It seemed that, since the nodes weren't initially labeled, the provider assumed 0 GPUs were available for consumption, but a workload was already using them, so the count underflowed to -1.

{"name":"game-1","allocatable":{"cpu":11000,"gpu":1,"memory":67242971136,"storage_ephemeral":390585090631},"available":{"cpu":3000,"gpu":0,"memory":47008821248,"storage_ephemeral":390585090631}},{"name":"game-2","allocatable":{"cpu":11000,"gpu":1,"memory":67242594304,"storage_ephemeral":390586034349},"available":{"cpu":2285,"gpu":0,"memory":47172022272,"storage_ephemeral":390586034349}}

With the correct labels in place, cloudmos should be able to identify the node and its compute resources.
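
For anyone hitting the same issue, the fix looks roughly like the sketch below; the label key follows the Akash provider GPU labeling convention, and the model suffix (a100 here) is only an example that has to match the installed hardware:

```bash
# Label each GPU worker node so the Akash provider advertises its GPUs.
# The model suffix (a100) is an example; replace it with the actual GPU model.
kubectl label node game-1 akash.network/capabilities.gpu.vendor.nvidia.model.a100=true --overwrite
kubectl label node game-2 akash.network/capabilities.gpu.vendor.nvidia.model.a100=true --overwrite

# Verify the labels were applied
kubectl get nodes -L akash.network/capabilities.gpu.vendor.nvidia.model.a100
```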

Suggestions

Going forward, these insights would be useful to the community:

  1. A provider troubleshooting page on cloudmos would be handy for identifying issues and finding solutions.
  2. Documentation of the potential issues when mixing local GPU usage with Akash workloads, given Akash's reliance on GPU passthrough.
andy108369 commented 7 months ago

Not sure if it's related, but I am currently working on one provider (the other issue where the provider isn't releasing all of the GPUs it has).

The GPU number briefly spiked up to 18446744073709552000 after I bounced all four nvdp-nvidia-device-plugin pods (using the `kubectl -n nvidia-device-plugin delete pods --all` command). The cluster has 4 worker nodes, each with 8x A100 GPUs.

(screenshot: provider status output showing the briefly spiked GPU count)

Provider logs at that moment: https://transfer.sh/H3CVdjajcx/provider-briefly-spiked-gpu-numbers.log

cc @chainzero @troian

Update:

This can be reproduced easily: just bounce the nvdp-nvidia-device-plugin pod and then query the provider's 8443/status endpoint. You have to catch that brief moment, which is probably when the nvdp-nvidia-device-plugin is being initialized. (I haven't tried just scaling them down.)
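
A rough way to catch that window is sketched below; `<provider-hostname>` is a placeholder, and `curl -sk` is used on the assumption that the provider endpoint serves a self-signed certificate:

```bash
# Bounce the device-plugin pods, then poll the provider status endpoint
# to catch the moment the GPU count wraps around.
kubectl -n nvidia-device-plugin delete pods --all

# <provider-hostname> is a placeholder for the provider's public address
while true; do
  curl -sk "https://<provider-hostname>:8443/status" | grep -o '"gpu":[0-9]*'
  sleep 1
done
```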