-
I have a few questions about memory utilization when training **pose2body** on 8 GTX 1080 Ti GPUs:
1. The original setting in your code is:
...
--gpu_ids 0,1,2,3,4,5,6,7 --batchSize 8 --max_frames_pe…
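For reference, a quick way to watch per-GPU memory while the job runs (just a sketch, assuming the `pynvml` package is installed; it is not part of this repo):
```python
import pynvml

# Poll each visible GPU once and report how much memory is in use;
# run this from a separate shell while the training job is active.
pynvml.nvmlInit()
for i in range(pynvml.nvmlDeviceGetCount()):
    handle = pynvml.nvmlDeviceGetHandleByIndex(i)
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU {i}: {mem.used / 1024**3:.2f} / {mem.total / 1024**3:.2f} GiB used")
pynvml.nvmlShutdown()
```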
-
It seems to me that each image uses ~5 GB of GPU memory (ResNeXt-152), making it only possible to train with 2 images per GPU (TITAN X). Is that normal? I would appreciate it if someone could p…
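As a sanity check on the ~5 GB-per-image figure, a rough measurement sketch; the torchvision ResNeXt-101 below is only a stand-in for the actual backbone, and the input size is illustrative:
```python
import torch
import torchvision

# Measure the peak GPU memory of one forward/backward pass for a given
# per-GPU batch size (stand-in model, not the real pose2body network).
device = torch.device("cuda:0")
model = torchvision.models.resnext101_32x8d().to(device)
batch = torch.randn(2, 3, 512, 512, device=device)  # 2 images per GPU

torch.cuda.reset_peak_memory_stats(device)
out = model(batch)
out.sum().backward()
peak_gib = torch.cuda.max_memory_allocated(device) / 1024**3
print(f"peak memory for a batch of {batch.shape[0]}: {peak_gib:.2f} GiB")
```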
-
* Are you using the latest driver?
```
❯ LANG=C pacman -Qi displaylink | head -n3 | tail -n2
Name : displaylink
Version : 6.0-0
```
* Are you using the latest EVDI versi…
-
Hi, very interesting paper!
Could you, in the process of publishing the training scripts, also add some intuition about the training procedure and your training metrics for the GPUs / no. of steps / mem…
-
### Your current environment
[Performance] 100% performance drop when using multiple LoRAs vs. no LoRA (Qwen-chat model)
GPU: 4 × T4
vLLM version: v0.5.4
Model: qwenhalf-14b-chat
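For context, a minimal sketch of the two setups being compared; the base-model id, adapter name, id, and path below are placeholders of mine, not taken from the actual run:
```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="Qwen/Qwen1.5-14B-Chat",  # assumed base checkpoint, placeholder
    tensor_parallel_size=4,          # 4 x T4
    enable_lora=True,
    max_loras=4,                     # adapters that can be active at once
    max_lora_rank=64,
)
params = SamplingParams(temperature=0.0, max_tokens=128)

# Without LoRA: the base model serves the request directly.
out_base = llm.generate(["Hello"], params)

# With LoRA: each request carries a LoRARequest(name, id, local_path).
out_lora = llm.generate(
    ["Hello"],
    params,
    lora_request=LoRARequest("adapter-a", 1, "/path/to/adapter-a"),
)
```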
### Model Input Dumps…
-
### Issue Description
OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\Administrator\AppData\Local\Programs\Python\Python312\Lib\site-packages\paddle\..\nvidia\cudnn\bin\cudnn_cnn64_9.d…
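A possible workaround sketch (an assumption, not a confirmed fix): register the NVIDIA DLL directories bundled with the pip wheels before importing paddle, so Windows can resolve the cuDNN libraries:
```python
import glob
import os
import site

# Hypothetical workaround: add the nvidia/*/bin wheel directories to the
# Windows DLL search path so libraries like cudnn_cnn64_9.dll resolve.
for sp in site.getsitepackages():
    for bin_dir in glob.glob(os.path.join(sp, "nvidia", "*", "bin")):
        if os.path.isdir(bin_dir):
            os.add_dll_directory(bin_dir)

import paddle
paddle.utils.run_check()  # reports whether PaddlePaddle can use the GPU
```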
-
### Proposal to improve performance
1. What are some common tips for improving throughput? (A sketch of typical knobs follows this list.)
2. Which version performs best? Is it necessary to use an older version of vLLM?
3. Which has higher t…
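For question 1, a sketch of the engine arguments most commonly tuned for throughput; the model name and values below are illustrative, not a recommendation:
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen1.5-14B-Chat",   # placeholder model
    tensor_parallel_size=2,          # spread weights across more GPUs
    gpu_memory_utilization=0.90,     # memory budget for weights + KV cache
    max_num_seqs=256,                # cap on concurrently scheduled requests
    max_model_len=4096,              # shorter context -> more KV-cache slots
    enable_prefix_caching=True,      # reuse KV cache for shared prompt prefixes
)
outputs = llm.generate(
    ["Summarize the benefits of continuous batching."],
    SamplingParams(max_tokens=128),
)
```
In my experience the KV-cache budget (via `gpu_memory_utilization` and `max_model_len`) usually matters more for throughput than which vLLM version is used.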
-
For now, “volcano.sh/gpu-memory” can essentially only set three environment variables (per https://github.com/volcano-sh/devices/blob/master/pkg/plugin/nvidia/server.go#L324) to tell the user how much GPU…
-
Could anyone please advise whether it is possible to run inference with OVIS 1.6 on a single 4090 GPU? After loading the model, it appears to consume approximately 20 GB of VRAM. I attempted an inference, b…
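For reference, a minimal loading sketch; the checkpoint id below is an assumption on my part, and bfloat16 weights for a ~9B model plus the vision tower would roughly match the ~20 GB observed:
```python
import torch
from transformers import AutoModelForCausalLM

# Loading sketch only; "AIDC-AI/Ovis1.6-Gemma2-9B" is an assumed repo id.
model = AutoModelForCausalLM.from_pretrained(
    "AIDC-AI/Ovis1.6-Gemma2-9B",
    torch_dtype=torch.bfloat16,      # half-precision weights to fit in 24 GB
    trust_remote_code=True,          # Ovis ships custom modeling code
).cuda()
model.eval()
```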
-