Open zzr93 opened 2 years ago
Thanks for your report and the debugging. The analysis is helpful and we will fix this as soon as possible.
Requesting more input on how much memory should be treated as one block (the default is 1 MB) so that the value works for all supported GPU cards.
100 MB per block may work fine. Inference services usually consume hundreds to thousands of MB of memory (training services usually consume much more than that), so memory fragments smaller than 100 MB are not really a concern.
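For a rough sense of scale (illustrative arithmetic only, not taken from the plugin source), here is how the per-GPU fake-device count changes with block size for a 32 GB V100:

```go
package main

import "fmt"

func main() {
	// Illustrative arithmetic: one fake device per memory block on a 32 GB V100.
	const gpuMemMB = 32 * 1024 // 32768 MB

	for _, blockMB := range []int{1, 10, 100} {
		devices := gpuMemMB / blockMB
		fmt.Printf("block=%3d MB -> %5d fake devices per GPU, %6d for 8 GPUs\n",
			blockMB, devices, devices*8)
	}
}
```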
I see. I'll take this issue to the weekly meeting for discussion. Would you be willing to share your ideas in the meeting?
My pleasure, I'll be there.
See you 15:00.
Awww that's sweet.🥺
Is this issue resolved at present?
Not yet. We are looking for a graceful way to make the fix without modifying the gRPC settings directly.
Any update on this issue?
Not yet. Sorry, I've been busy developing another feature recently. Will fix it as soon as possible.
We are still hitting this same bug in our product. Once it is fixed, please close this issue.
OK, it's still on the way. I'll close the issue after the bug is fixed.
How is this going?
https://github.com/volcano-sh/devices/pull/22 may resolve this issue
Has the latest image been published publicly? @shinytang6
This issue is an extension of #18
What happened: After applying volcano-device-plugin on a server with 8x V100 GPUs, describing the node shows volcano.sh/gpu-memory: 0. The same situation did not occur when using T4 or P4 GPUs. Tracing the kubelet logs, we found an error message indicating that the sync message is too large.
What caused this bug: volcano-device-plugin mocks each GPU as a list of devices (every device in the list represents a 1 MB memory block) so that different workloads can share one GPU through the Kubernetes device plugin mechanism. With a large-memory GPU such as the V100, the device list exceeds the message size limit and ListAndWatch fails as a result.
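A minimal sketch of the pattern described above (the `Device` type and `fakeDevices` helper are hypothetical stand-ins, not the actual volcano-device-plugin code): one fake device is registered per memory block, so the ListAndWatch response grows with GPU memory divided by block size.

```go
package main

import "fmt"

// Device mirrors the shape of the kubelet device plugin API's device entry
// (an ID plus a health string); defined locally to keep the sketch self-contained.
type Device struct {
	ID     string
	Health string
}

// fakeDevices illustrates the bookkeeping scheme described above:
// one fake device per memory block of blockMB megabytes.
func fakeDevices(gpuIndex, gpuMemMB, blockMB int) []*Device {
	n := gpuMemMB / blockMB
	devs := make([]*Device, 0, n)
	for i := 0; i < n; i++ {
		devs = append(devs, &Device{
			ID:     fmt.Sprintf("gpu-%d-block-%d", gpuIndex, i),
			Health: "Healthy",
		})
	}
	return devs
}

func main() {
	// With 1 MB blocks, a single 32 GB V100 yields 32768 entries; serializing
	// the full list for 8 such GPUs can exceed gRPC's default 4 MB
	// received-message limit, which matches the symptom in the kubelet logs.
	devs := fakeDevices(0, 32*1024, 1)
	fmt.Println("fake devices for one V100 at 1 MB/block:", len(devs))
}
```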
Solution: The key is to shrink the device list, so we can treat each device as a 10 MB memory block and rework the whole bookkeeping process around that assumption. This granularity is accurate enough for almost all production environments.
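A sketch of what the proposed change amounts to (the block size constant and helper names are hypothetical): coarsen the block granularity and round requested memory up to whole blocks wherever the bookkeeping converts between megabytes and devices.

```go
package main

import "fmt"

// blockSizeMB is the proposed coarser granularity; the exact value
// (10 MB here, 100 MB was also suggested in the discussion) is a tunable.
const blockSizeMB = 10

// memToBlocks converts requested GPU memory into the number of fake devices
// to allocate, rounding up so requests are never under-served.
func memToBlocks(memMB int) int {
	return (memMB + blockSizeMB - 1) / blockSizeMB
}

// blocksToMem converts allocated blocks back to the memory they represent.
func blocksToMem(blocks int) int {
	return blocks * blockSizeMB
}

func main() {
	// A pod asking for 1500 MB gets 150 blocks at 10 MB granularity;
	// a 32 GB V100 now advertises 3277 devices instead of 32768.
	fmt.Println("blocks for 1500 MB:", memToBlocks(1500))
	fmt.Println("devices per 32 GB GPU:", memToBlocks(32*1024))
	fmt.Println("memory for 150 blocks:", blocksToMem(150), "MB")
}
```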