-
Hey, will the Multi GPU feature be out soon?
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I want to use the semantic splitter from llamaindex for document segmentation. Is…
-
I was testing "litgpt serve" for llama-3-70b using 4 A100 80G GPUs and I receive an OOM error. I tried the same command on llama-2-13b, and it seems like specifying the "devices" argument only loads multiple re…
-
Hello! I got this error when running examples/example_chat.py and share-cap_batch_infer.py with multiple GPUs. Does anyone know how to solve it?
![image](https://github.com/InternLM/InternLM-XCompo…
-
Hi, how do I use multiple GPUs for inference? Thanks.
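The core idea behind most multi-GPU inference setups is data parallelism: split each batch into per-device shards, run every shard on its own GPU, then concatenate the results. A minimal, framework-free sketch of the sharding step (device names and the helper itself are illustrative, not from any specific library):

```python
# Hypothetical sketch of data-parallel sharding for multi-GPU
# inference. Only the batch-splitting logic is shown; in a real
# setup each shard would be moved to its own device (cuda:0,
# cuda:1, ...) and processed concurrently.

def shard_batch(batch, n_devices):
    """Split `batch` into `n_devices` near-equal contiguous shards."""
    if n_devices <= 0:
        raise ValueError("need at least one device")
    size, rem = divmod(len(batch), n_devices)
    shards, start = [], 0
    for i in range(n_devices):
        # The first `rem` shards take one extra item each.
        end = start + size + (1 if i < rem else 0)
        shards.append(batch[start:end])
        start = end
    return shards

batch = list(range(10))
shards = shard_batch(batch, 4)
# shards -> [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```

Frameworks such as vLLM or Hugging Face Accelerate handle this (plus tensor/pipeline parallelism) automatically, but the sharding shape above is what their data-parallel path amounts to.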
-
I wanted to ask if there is any support for GPU or multi-core processing for hole, or plans for any?
-
Hi, I am trying to do multi-GPU training on Kaggle with two Tesla T4s.
My code only runs on 1 GPU; the other is not utilized.
I am able to train with a custom dataset and getting acceptable results…
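A common cause of the "only one GPU utilized" symptom is that the script was written for single-device training and never wrapped for distributed execution. Assuming a PyTorch script that already supports DistributedDataParallel, the usual launch on a two-GPU machine looks like this (script name is a placeholder):

```
torchrun --nproc_per_node=2 train.py
```

Without a distributed launcher (or an explicit `DataParallel`/`DistributedDataParallel` wrapper in the code), PyTorch places everything on the default device and the second GPU stays idle.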
-
Thanks a lot for this amazing project, appreciate this a ton!
I happen to have a hybrid GPU setup on my laptop (i.e. Nvidia + AMD), and I have confirmed that my display output is wired to my Nvidi…
-
### 🚀 The feature, motivation and pitch
Currently vLLM only supports LoRA adapters on NVIDIA GPUs with compute capability >= 8.0. This request is to support >= 7.5.
The limitation here is that vLL…
-
Do you plan to release multi-gpu support?