-
I have trained an SBERT model from scratch using the code [train_nli](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training_transformers/training_nli.py) and [https://github.com/…
ghost updated
3 months ago
-
Hi! I really appreciate your work! When I run your multi-GPU code, I hit the following error. It looks like some layers are on different devices. Could you please help me with that?
```
Traceback…
```
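Errors like this usually mean the model's parameters and the input batch ended up on different devices. A minimal sketch of a helper that recursively moves a batch onto one device — the helper name and structure are illustrative, not taken from the repository's code:

```python
def move_to(obj, device):
    """Recursively move tensors (anything with a .to method) onto `device`.

    Dicts, lists, and tuples are traversed; everything else is returned as-is.
    """
    if hasattr(obj, "to"):
        return obj.to(device)
    if isinstance(obj, dict):
        return {k: move_to(v, device) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(move_to(v, device) for v in obj)
    return obj
```

With PyTorch you would call something like `move_to(batch, next(model.parameters()).device)` before the forward pass, so the inputs always follow wherever the model actually lives.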
-
### 🚀 The feature, motivation and pitch
Currently vLLM only supports LoRA adapters on NVIDIA GPUs with compute capability >= 8.0. This request is to support >= 7.5.
The limitation here is that vLL…
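For context, the gate amounts to a compute-capability comparison; a hypothetical sketch of what relaxing it means (the function name and `min_cc` parameter are illustrative, not vLLM's actual check):

```python
def lora_supported(major: int, minor: int, min_cc: int = 80) -> bool:
    """Return True if compute capability (major, minor) meets the floor.

    min_cc=80 mirrors the current >= 8.0 requirement; this request would
    lower the floor to 75 so Turing (7.5) cards also pass.
    """
    return major * 10 + minor >= min_cc
```

With PyTorch, the `(major, minor)` pair would come from `torch.cuda.get_device_capability()`.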
-
I wanted to ask whether there is any support for GPU or multi-core processing in hole, or plans for any?
-
### System Info
- GPU: 2 × NVIDIA A100 80GB
- Libraries
- TensorRT-LLM: 0.11.0.dev2024052800
- Driver Version: 525.105.17
- CUDA Version: 12.4
### Who can help?
@byshiue @schetlur-nv
##…
-
It would be good to be able to use multiple GPUs to train your model, especially for bigger models such as Transformers and models using temporal data. This is something @ted9219 and I briefly talked …
-
Given there is already support for NCCL, what's the overhead of adding multi-node GPU support for training/inference?
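For reference, multi-node NCCL training mostly comes down to giving every process a unique global rank; a minimal sketch of the usual rank arithmetic (function names are illustrative):

```python
def global_rank(node_rank: int, local_rank: int, gpus_per_node: int) -> int:
    """Map (node index, local GPU index) to the unique rank in the process group."""
    return node_rank * gpus_per_node + local_rank

def world_size(num_nodes: int, gpus_per_node: int) -> int:
    """Total number of processes across all nodes."""
    return num_nodes * gpus_per_node
```

With PyTorch, these values would be passed to `torch.distributed.init_process_group(backend="nccl", rank=..., world_size=...)`, with every node pointing at the same master address.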
-
Currently, the job is not split across all GPUs (in rmt_laser_snr_math.py).
This needs to scale to run across all GPUs.
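One way to scale this is to partition the work (e.g., layer or sample indices) across the available GPUs; a sketch of a simple round-robin split — the actual partitioning needed by rmt_laser_snr_math.py may differ:

```python
def split_round_robin(items, num_gpus):
    """Assign items to GPUs round-robin; returns one list of items per GPU."""
    buckets = [[] for _ in range(num_gpus)]
    for i, item in enumerate(items):
        buckets[i % num_gpus].append(item)
    return buckets
```

Each worker process would then handle only its own bucket on its own device.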
-
I have one computer (Windows 10) with multiple GPUs (e.g., two RTX 2070s). How can I use all of them to render?
Vulkan 1.1 supports multi-GPU via device groups!
Thanks very much!
-
### I have searched through the issues and didn't find my problem.
- [X] Confirm
### Problem
I have multiple graphics cards, but when launching multiple instances they all use the same GPU.
### P…
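If these are NVIDIA cards, a common workaround is to pin each instance to a different GPU via `CUDA_VISIBLE_DEVICES` before the framework initializes; a sketch under that assumption (the instance-numbering scheme is hypothetical):

```python
import os

def pin_instance_to_gpu(instance_index: int, num_gpus: int) -> str:
    """Pin this process to one GPU by index, cycling when instances outnumber GPUs.

    Must run before any CUDA context is created, or the setting is ignored.
    """
    gpu = str(instance_index % num_gpus)
    os.environ["CUDA_VISIBLE_DEVICES"] = gpu
    return gpu
```

Launching instance 0 then sees only GPU 0, instance 1 only GPU 1, and so on.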
simlu updated
3 months ago