volcano-sh / devices

Device plugins for Volcano, e.g. GPU
Apache License 2.0

ListAndWatch fails when managing large-memory GPUs such as the NVIDIA Tesla V100 #19

Open zzr93 opened 2 years ago

zzr93 commented 2 years ago

This issue is an extension of #18

What happened: After deploying volcano-device-plugin on a server with 8*V100 GPUs, `describe nodes` reports `volcano.sh/gpu-memory: 0` [screenshot]. The same problem does not occur with T4 or P4 cards. Tracing the kubelet logs shows the following error message [screenshot]; it seems the sync message is too large.

What caused this bug: volcano-device-plugin mocks each GPU as a list of devices (every entry in the list is treated as a 1MB memory block), so that different workloads can share one GPU through the Kubernetes device plugin mechanism. With a large-memory GPU such as the V100, the device list grows beyond the message size bound, and ListAndWatch fails as a result.
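To make the failure mode concrete, here is an illustrative sketch (not the plugin's actual code; `mockDeviceCount` is a hypothetical name) of the arithmetic behind the oversized device list:

```go
package main

import "fmt"

// mockDeviceCount is an illustrative helper, not the plugin's real API:
// it computes how many mock devices are advertised when each device
// represents one memory block of blockMB megabytes.
func mockDeviceCount(gpuMemMB, blockMB int) int {
	return gpuMemMB / blockMB
}

func main() {
	// With 1MB blocks, one 32GB V100 becomes 32768 mock devices,
	// so an 8-GPU node streams 262144 entries through ListAndWatch.
	// A 16GB T4 yields only 16384 entries per GPU, which is why the
	// failure shows up on V100 nodes first.
	perV100 := mockDeviceCount(32*1024, 1)
	perT4 := mockDeviceCount(16*1024, 1)
	fmt.Println(perV100, perV100*8) // prints: 32768 262144
	fmt.Println(perT4, perT4*8)     // prints: 16384 131072
}
```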

Solution: The key is to shrink the device list, so we can treat each device as a 10MB memory block and rework the whole bookkeeping process around that assumption. This granularity is accurate enough for almost all production environments.
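A minimal sketch of the proposed fix, assuming a configurable block size (the `deviceIDs` helper and ID format are hypothetical, for illustration only):

```go
package main

import "fmt"

// deviceIDs is a hypothetical sketch of the bookkeeping change: enumerate
// mock device IDs for one GPU at a given block granularity. Coarser blocks
// shrink the list that ListAndWatch must stream to the kubelet.
func deviceIDs(gpuIndex, gpuMemMB, blockMB int) []string {
	n := gpuMemMB / blockMB
	ids := make([]string, 0, n)
	for i := 0; i < n; i++ {
		ids = append(ids, fmt.Sprintf("gpu-%d-block-%d", gpuIndex, i))
	}
	return ids
}

func main() {
	// For a 32GB V100: 32768 IDs at 1MB blocks vs. 3276 at 10MB blocks,
	// a 10x reduction in the ListAndWatch payload.
	fmt.Println(len(deviceIDs(0, 32*1024, 1)))  // prints: 32768
	fmt.Println(len(deviceIDs(0, 32*1024, 10))) // prints: 3276
}
```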

Thor-wl commented 2 years ago

Thanks for the report and the debugging. The analysis is valuable and we will fix it as soon as possible.

Thor-wl commented 2 years ago

Requesting more input on how much memory should be treated as one block (the default is 1MB) so that it suits all supported GPU cards.

zzr93 commented 2 years ago

100MB per block should work fine. Inference services usually consume hundreds to thousands of MB of memory (training jobs usually consume far more), so we do not actually care about memory fragments smaller than 100MB.
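The fragment argument above can be checked with a bit of arithmetic. A sketch, assuming requests are rounded up to whole blocks (`blocksFor` is a hypothetical helper, not plugin code):

```go
package main

import "fmt"

// blocksFor is a hypothetical helper: round a gpu-memory request up to
// whole blocks of blockMB megabytes. The wasted fragment is always
// strictly less than one block, so with 100MB blocks at most 99MB is lost.
func blocksFor(requestMB, blockMB int) int {
	return (requestMB + blockMB - 1) / blockMB
}

func main() {
	req := 1234 // MB requested by a hypothetical inference pod
	b := blocksFor(req, 100)
	fmt.Println(b, b*100-req) // prints: 13 66  (66MB fragment, < 100MB)
}
```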

Thor-wl commented 2 years ago

> 100MB per block should work fine. Inference services usually consume hundreds to thousands of MB of memory (training jobs usually consume far more), so we do not actually care about memory fragments smaller than 100MB.

IC. I'll take this issue to the weekly meeting for discussion. Would you like to share your ideas in the meeting?

zzr93 commented 2 years ago

> 100MB per block should work fine. Inference services usually consume hundreds to thousands of MB of memory (training jobs usually consume far more), so we do not actually care about memory fragments smaller than 100MB.
>
> IC. I'll take this issue to the weekly meeting for discussion. Would you like to share your ideas in the meeting?

My pleasure, I'll be there.

Thor-wl commented 2 years ago

See you 15:00.

jasonliu747 commented 2 years ago

> See you 15:00.

Awww that's sweet.🥺

lakerhu999 commented 2 years ago

Is this issue resolved at present?

Thor-wl commented 2 years ago

> Is this issue resolved at present?

Not yet. We are looking for a graceful way to make the fix without modifying the gRPC message size directly.

lakerhu999 commented 2 years ago

Any update for this issue?

Thor-wl commented 2 years ago

> Any update for this issue?

Not yet. Sorry, I've been developing another feature recently. Will fix it ASAP.

lakerhu999 commented 2 years ago

Our product still hits the same bug as this issue. Once it is fixed, please close this issue.

Thor-wl commented 2 years ago

> Our product still hits the same bug as this issue. Once it is fixed, please close this issue.

OK, it's still on the way. I'll close the issue after the bug is fixed.

pauky commented 2 years ago

How is this going?

shinytang6 commented 2 years ago

https://github.com/volcano-sh/devices/pull/22 may resolve this issue

XueleiQiao commented 2 years ago

Has our latest image been published to the public registry? @shinytang6