-
Dear authors, Happy Chinese New Year!
Can you tell me if the model supports multi-card training? When I try to use the original code to train on a machine with two A100s (using `python run.py --args…
-
This is definitely impressive work!
I'm trying to reproduce some results on the inpainting task and have a concern about the data_parallel mode.
Referring to the code, batch_size is 4 for single…
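As background on the data_parallel question above (a general PyTorch note, not specific to this repo): `nn.DataParallel` treats the DataLoader `batch_size` as the global batch and scatters it across GPUs, so each replica sees roughly `batch_size / n_gpus` samples per step. A pure-Python sketch of that split, assuming the scatter chunks like `torch.Tensor.chunk` (ceil-sized chunks along dim 0):

```python
def per_gpu_batch_sizes(batch_size, n_gpus):
    """Approximate how DataParallel splits one global batch across GPUs:
    each device gets up to ceil(batch_size / n_gpus) samples until the
    batch is exhausted (trailing devices may receive fewer samples)."""
    chunk = -(-batch_size // n_gpus)  # ceil division
    sizes = []
    remaining = batch_size
    for _ in range(n_gpus):
        take = min(chunk, remaining)
        if take > 0:
            sizes.append(take)
        remaining -= take
    return sizes
```

So with batch_size 4 on two GPUs, each replica processes 2 samples per step; the gradient averaging still happens over the full global batch.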
-
@danielhanchen
Hi, I remember that unsloth-2024.1 supported multi-GPU training of Llama with DeepSpeed without raising this RuntimeError('Unsloth currently does not support multi GPU setups - but we ar…
-
Hello,
How can I train this model on multiple GPUs?
Thx,
Catruk
-
Thank you for making your work public. Can this project support multiple GPUs? Thanks for your reply.
-
Hello, Mr. Yu,
I have a problem when I try to train the model on multiple GPUs:
```
device = torch.device("cuda:0")  # primary device; must match device_ids[0]
device_ids = [0, 1]
model.to(device)  # move the model to the primary device before wrapping
model = torch.nn.parallel.DataParallel(model, device_ids=device_ids)
```
W…
-
I need help training a Flux LoRA on multiple GPUs. The memory on a single GPU is not sufficient, so I want to train on multiple GPUs. However, configuring `device: cuda:0,1` in the config file doesn't see…
-
### Your current environment
```text
Collecting environment information...
WARNING 07-23 19:11:42 _custom_ops.py:14] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm.…
-
Does the llm module support inference with `vllm`, or using multiple GPUs?
If not, when will these features be implemented?
-
Dear All,
I would like to run ALIGNN on multiple GPUs, but when I checked the code I could not find any option for it.
Is there any method to run ALIGNN on multiple GPUs, such as using PyTorch Lightning or DDP fu…
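A general note on the DDP approach mentioned above (not ALIGNN-specific): under DDP each process trains on its own shard of the dataset, typically via `torch.utils.data.DistributedSampler`, which pads the index list so it divides evenly and then gives each rank a strided slice. A pure-Python sketch of that per-rank sharding (shuffling omitted for clarity):

```python
import math

def shard_indices(dataset_len, world_size, rank):
    """Sketch of DistributedSampler's default sharding: pad the index
    list by repeating from the front so every rank gets the same number
    of samples, then take a strided slice per rank."""
    total = math.ceil(dataset_len / world_size) * world_size
    indices = list(range(dataset_len))
    indices += indices[: total - dataset_len]  # pad to a multiple of world_size
    return indices[rank::world_size]
```

For example, with 10 samples on 4 ranks, every rank sees 3 indices, with the first two indices duplicated onto the last ranks to even things out; `DistributedDataParallel` then averages gradients across the ranks each step.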