-
I ran mem_spd_test.py and got the following error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
I did not make any changes except …
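This error usually means the model's weights and the input tensors ended up on different GPUs. A minimal sketch of the generic fix (the linear layer and inputs are stand-ins, not code from mem_spd_test.py); alternatively, hiding the second card with `CUDA_VISIBLE_DEVICES=0` has the same effect:

```python
import torch
import torch.nn as nn

# Keep the model and its inputs on one explicit device.
device = torch.device("cuda:0")

model = nn.Linear(16, 4).to(device)  # stand-in for the real model
x = torch.randn(8, 16).to(device)    # inputs moved to the *same* device

with torch.no_grad():
    y = model(x)
print(y.device)  # cuda:0 -- no cross-device mismatch
```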
-
Hi, I wrapped my models in DataParallel, but it does not use multiple GPUs to train. Any leads on this?
Thank you!
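For reference, a minimal DataParallel setup that does fan out across GPUs (toy model, stand-in data). Two common reasons it silently stays on one card: the batch size is smaller than the number of GPUs (so there is nothing to split), or `CUDA_VISIBLE_DEVICES` exposes only one device:

```python
import torch
import torch.nn as nn

model = nn.Linear(32, 2).cuda()  # module should start on cuda:0
model = nn.DataParallel(model)   # replicates across all visible GPUs

x = torch.randn(64, 32).cuda()   # the batch dimension is split across GPUs
y = model(x)
print(torch.cuda.device_count(), y.shape)  # confirm >1 GPU is visible
```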
-
How can I run inference on multiple GPUs, such as RTX 4090s, since the model needs much more than 24 GB?
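If the model loads through Hugging Face transformers, a common way to spread a model that exceeds 24 GB across several cards is accelerate's `device_map="auto"`; a sketch, with the model ID as a placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-large-model"  # placeholder, not from the original post

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard weights layer-by-layer across visible GPUs
    torch_dtype="auto",
)

inputs = tok("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```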
-
This work is great, but when running on three GPUs with three prompts I get the following error. How do I fix it?
###
Rank 1 is running.
Rank 0 is running.
Rank 2 is running.
Loading pipelin…
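Without the full traceback it is hard to diagnose, but a common pattern for N prompts on N GPUs is to pin each rank to its own device and index the prompt list by rank; a sketch assuming a `torchrun --nproc_per_node=3` launch (the pipeline load is left as a comment, since the project's actual loading code isn't shown):

```python
import torch
import torch.distributed as dist

def main():
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)  # pin this process to one GPU

    prompts = ["prompt A", "prompt B", "prompt C"]  # one per rank
    print(f"Rank {rank} is running on cuda:{rank}: {prompts[rank]!r}")

    # pipe = SomePipeline.from_pretrained(...).to(f"cuda:{rank}")  # hypothetical

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # torchrun --nproc_per_node=3 script.py
```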
-
Hello everyone, I am a newcomer to MLPerf.
I would like to know whether the text-to-image inference benchmark supports multi-GPU testing. Currently, I see that there is no parameter to set multiple GPUs in t…
-
We trained custom rtdetrv2 models in a multi-GPU setting. With single-GPU training it works fine, but with multiple GPUs, training just hangs in the first epoch for a long time. We hav…
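A first debugging step for a DDP hang in the first epoch is to turn on NCCL's logging and, on consumer GPUs, rule out broken peer-to-peer transfers; both `NCCL_DEBUG` and `NCCL_P2P_DISABLE` are standard NCCL environment variables, and they must be set before the process group is created:

```python
import os

# Set before NCCL initializes (i.e., before init_process_group).
os.environ["NCCL_DEBUG"] = "INFO"     # log NCCL setup and collectives
os.environ["NCCL_P2P_DISABLE"] = "1"  # bypass P2P paths that hang on some boards

import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)
print(f"rank {rank} ready on cuda:{rank}")
dist.destroy_process_group()
```

Another frequent cause of a first-epoch stall is ranks seeing different numbers of batches, so one collective waits forever; checking that every rank gets the same dataset length is worth doing.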
-
Hi,
Since DeepVariant does not support multi-GPU training ([Can model_train be run on multiple GPUs?](https://github.com/google/deepvariant/blob/r1.6.1/docs/FAQ.md#can-model_train-be-run-on-multipl…
-
How can I run inference on multiple GPUs?
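It depends on whether one copy of the model fits on a single card. If it does, the simplest approach is one replica per GPU with batches dispatched round-robin; a self-contained sketch with a toy layer standing in for the real model (for real throughput you would run one process per GPU instead):

```python
import torch
import torch.nn as nn

n_gpus = torch.cuda.device_count()

# One independent replica of the (toy) model per GPU.
replicas = [nn.Linear(16, 4).to(f"cuda:{i}").eval() for i in range(n_gpus)]

batches = [torch.randn(8, 16) for _ in range(10)]
results = []
with torch.no_grad():
    for j, batch in enumerate(batches):
        dev = j % n_gpus  # round-robin over GPUs
        out = replicas[dev](batch.to(f"cuda:{dev}"))
        results.append(out.cpu())
print(torch.cat(results).shape)
```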
-
I have 2 GPUs (2× RTX 3080).
Log:
(ootd) root@autodl-container-200a43b416-549738af:~/autodl-tmp/OOTDiffusion-main/run# python gradio_ootd.py
Loading pipeline components...: 100%|█████████████████████████…
-
I want to run train_MEND_MiniGPT4_VQA() and train_SERAC_MiniGPT4_VQA() in multimodal_edit.py. The VRAM of one RTX 4090 is 23.65 GB, which is not enough. I have eight RTX 4090s. I want to know how I can run train_ME…
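One pattern that works in plain PyTorch when a model exceeds a single card is to place different parts of the network on different GPUs and move activations between them; a toy two-GPU sketch (not the actual multimodal_edit.py trainers, which may need their own support for this):

```python
import torch
import torch.nn as nn

class TwoGPUNet(nn.Module):
    """Toy model split across two devices; an 8-GPU split follows the same idea."""
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(32, 64).to("cuda:0")
        self.part2 = nn.Linear(64, 8).to("cuda:1")

    def forward(self, x):
        x = self.part1(x.to("cuda:0"))
        return self.part2(x.to("cuda:1"))  # activations hop between GPUs

net = TwoGPUNet()
print(net(torch.randn(4, 32)).device)  # cuda:1
```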