-
Hello Xdever! I expected to train the network on multiple GPUs, but in this version it seems that when I make several GPUs visible, such as GPU0 and GPU1, training is not accelerated while t…
-
Hi, thank you so much for releasing this great code base!
I noticed that your LAION blog says that the pre-training of OpenLM 1B/7B took place on 128 or 256 A100s. Therefore, I'm wondering if the c…
-
I have tried adding `torch.nn.DataParallel` to every model and modifying the batch_size. However, `nvidia-smi` still shows that only one GPU is in use. I cannot figure out what's wrong. Thank you in advanc…
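For reference, a minimal sketch of the usual `DataParallel` pattern (the tiny `nn.Linear` model and its dimensions are placeholders, not the issue author's actual model): the model is wrapped once, before the optimizer is built, and moved to the primary device; the wrapper then splits each input batch across the visible GPUs.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 4)  # stand-in for the real model

# Wrap once; DataParallel scatters each input batch across all
# visible GPUs and gathers the outputs on the primary device.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Build the optimizer *after* wrapping, and make the batch size a
# multiple of the GPU count so every replica gets a non-empty slice.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(8, 16, device=device)
out = model(x)
print(out.shape)  # torch.Size([8, 4])
```

If `nvidia-smi` still shows only one GPU after a change like this, two common causes are a batch size smaller than the GPU count (some replicas get empty slices) and `CUDA_VISIBLE_DEVICES` restricting the process to a single device.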
-
Hi there!
I am trying to train a 3d_fullres model, but the patch size is too small, even after maximizing the memory occupied on one of the GPUs. Hence, I would like to try multi-GPU training but I c…
-
Hey guys, great work with this. We were wondering if, and (approximately) when, you will be releasing the multi-GPU inferencing. Furthermore, what is the time taken with default settings to inference a 6 …
-
Hi, thank you for your work. I have an issue with multiple GPUs. I have four GPUs and I added four screens, on which I run some Android simulators. I found that all four screens load on one single GPU. Is t…
-
Hi,
Thanks for the wonderful job.
I encountered an error, possibly caused by distributed training? I ran the code on multiple GPUs and got the error below:
`RuntimeError: Expected to have finished reductio…
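That message typically comes from `DistributedDataParallel` when some model parameters receive no gradient in a forward pass. A hedged sketch of the common workaround below uses a single-process `gloo` group to stand in for the real multi-GPU launch, and a placeholder `nn.Linear` model; the original code's setup may differ.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process gloo group, just to make DDP constructible here;
# a real run would use torchrun with one process per GPU.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(8, 2)  # stand-in for the real model

# find_unused_parameters=True tells DDP to tolerate parameters that
# receive no gradient in a given iteration, which is the usual cause
# of the "Expected to have finished reduction" error.
ddp_model = DDP(model, find_unused_parameters=True)

out = ddp_model(torch.randn(4, 8))
out.sum().backward()

dist.destroy_process_group()
```

The flag adds some overhead per iteration, so the cleaner fix, when possible, is to ensure every registered parameter actually contributes to the loss.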
-
Could you update the code for multi-GPU training?
I've tried changing it for multi-GPU training myself, but ran into some bugs.
I've changed `train.py` as follows:
```python
if torch.cuda.device_count() > 1:
…
```
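Whatever the elided change was, one pitfall worth noting with edits of this shape: once the model is wrapped in `DataParallel`, every `state_dict` key gains a `module.` prefix, so checkpoints stop round-tripping with the unwrapped model. A small illustrative sketch (the `nn.Linear` model and in-memory buffer are placeholders):

```python
import io

import torch
import torch.nn as nn

model = nn.Linear(8, 2)          # stand-in for the real model
wrapped = nn.DataParallel(model)

# Wrapping prefixes every state_dict key with "module.", a common
# source of bugs when adding DataParallel to an existing train.py:
# old checkpoints no longer load, and new ones won't load into an
# unwrapped model.
print(list(wrapped.state_dict())[0])  # module.weight

# Saving the underlying module keeps checkpoints wrapper-agnostic.
buf = io.BytesIO()  # stands in for a checkpoint file on disk
torch.save(wrapped.module.state_dict(), buf)
buf.seek(0)
model.load_state_dict(torch.load(buf))
```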
-
Hi,
The multi-gpu setting does not work.
```
return forward_call(*input, **kwargs)
  File "/home/hossein/projects/Ladder-Side-Tuning-main/seq2seq/third_party/models/t5/modeling_side_t5.py", line …
```
-
Great work, guys! I was just curious if there is going to be support for multiple GPUs in the future.
Thanks