-
Hi! Thank you for your great work!!
I followed the default `emage.yaml` you provided, changing only `ddp: True` and leaving the other options at their defaults. I also used the dataset you provided. However,…
-
## 📚 Documentation
This isn't necessarily an issue with the documentation itself, but rather an inconsistency between the documentation and the simplest [PyTorch XLA example](https://github.com/pytorch/xla/blob/…
-
Hi,
Thanks for open-sourcing this work. When I was trying to train the teacher networks for LiDAR and fusion, I wasn't able to start training with multiple GPUs. Single-GPU training works. Multi-GPU training…
-
**Describe the bug**
During single-machine multi-GPU training, GPU memory usage changes abnormally.
I am training with 2 Docker containers, bound to 1 GPU and 2 GPUs respectively.
batchsize=2, single-machine 1 …
-
I only have one GPU, and I want to run the pre-trained model and generate images. What should I do, and where should the code be changed? Please explain in detail, because I am a newcomer in…
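In case it helps: loading a checkpoint that was trained on multiple GPUs onto a single device usually only needs `map_location` plus stripping the `module.` prefix that `DataParallel`/`DistributedDataParallel` leaves on the keys. A minimal sketch (the model class and checkpoint path below are placeholders, not this repo's actual names):

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the repo's generator; the real class is unknown.
model = nn.Linear(4, 4)

# Save a checkpoint the way a multi-GPU run might (done on CPU here for illustration).
torch.save(model.state_dict(), "checkpoint.pt")

# Load everything onto a single device: "cuda:0" with one GPU, "cpu" with none.
state = torch.load("checkpoint.pt", map_location="cpu")

# Strip a possible "module." prefix left by (Distributed)DataParallel wrapping.
state = {k.removeprefix("module."): v for k, v in state.items()}
model.load_state_dict(state)
```

After this, generation should run on the single device without any multi-GPU setup, assuming the rest of the script does not hard-code `cuda:1` or similar device indices.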
-
## 🐛 Bug
First of all, congratulations on the high-level interface that learn2learn provides. The bug is that when a model is trained using the meta-learning method and then submitted to…
-
```
Epoch    gpu_mem   box   obj       cls   total   labels   img_size
4/299    16G       nan   0.01189   0     nan     1        1280: 100%|██████████| 1014/1014 [11:14
```
-
Thanks for sharing, but I only have 1 GPU, so the model cannot be trained.
I see that the reason multi-GPU is needed is the 'effect of disabling shuffle BN on MoCo',
but I cannot understand why must shuf…
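For context, here is a sketch of the mechanics (my own illustration, not MoCo's official code). Shuffling BN permutes the batch before the key encoder's forward pass and restores the order afterwards. On a single device this is a no-op: BatchNorm's batch statistics are permutation-invariant. It only changes anything when BN statistics are computed per GPU, which is why the trick requires multiple GPUs:

```python
import torch

def shuffle_bn_forward(encoder, x):
    """Mechanics of MoCo's shuffling BN, sketched on one device:
    permute the batch before the forward pass, then restore the order."""
    idx = torch.randperm(x.size(0))
    inv = torch.argsort(idx)     # inverse permutation
    return encoder(x[idx])[inv]  # un-shuffle the outputs

# Single-device demonstration: batch statistics do not depend on sample
# order, so the shuffled forward matches the plain forward exactly.
# With per-GPU BN statistics (multi-GPU), the shuffle breaks the
# intra-batch statistics the key encoder could otherwise exploit.
bn = torch.nn.BatchNorm1d(2).train()
x = torch.randn(8, 2)
print(torch.allclose(shuffle_bn_forward(bn, x), bn(x), atol=1e-5))  # True
```

So on 1 GPU the shuffle genuinely has no effect; a common single-GPU substitute (an assumption on my part, not something this repo necessarily supports) is replacing BN with a normalization that has no cross-sample statistics, such as LayerNorm or GroupNorm.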
-
## ❓ Questions and Help
Hi, I get the following error when trying to use FSDP with checkpoint_wrapper:
```
RuntimeError: None of the outputs have requires_grad=True, this checkpoint() is not necessa…
```
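For what it's worth, `torch.utils.checkpoint` raises this when autograd sees no differentiable input to the checkpointed region, e.g. when the wrapped block receives raw data whose `requires_grad` is `False`. A minimal reproduction and one common workaround (this assumes the reentrant-checkpoint code path; your FSDP setup may differ):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Linear(8, 8)
x = torch.randn(2, 8)  # raw input, requires_grad=False

# Reentrant checkpointing (use_reentrant=True) needs at least one input
# with requires_grad=True; otherwise backward raises the error above.
# Non-reentrant checkpointing tracks the module's parameters instead,
# so it works even with non-grad inputs:
y = checkpoint(block, x, use_reentrant=False)
y.sum().backward()
assert block.weight.grad is not None  # gradients flow through the checkpoint
```

The other common fix is to call `x.requires_grad_()` on the region's input (or to avoid wrapping the very first block); whether `checkpoint_wrapper` in your version exposes a non-reentrant option is something I'd verify against your installed release.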
-
Dear All,
I would like to run ALIGNN on multiple GPUs. When I checked the code, I could not find any option for this.
Is there a way to run ALIGNN on multiple GPUs, such as using PyTorch Lightning or DDP fu…
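In case it helps while waiting for an official answer: since the model is a plain PyTorch module, a generic DDP wrap usually works even without built-in support. A sketch with placeholder names (this is not ALIGNN's actual API; the model below is a stand-in, and a real run would use `torchrun --nproc_per_node=<num_gpus>` with the `nccl` backend instead of the single-process `gloo` group used here for illustration):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process process group so the sketch runs anywhere; torchrun would
# set rank/world_size and these env vars for a real multi-GPU launch.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(16, 1)   # stand-in for the ALIGNN model
ddp_model = DDP(model)           # gradients are all-reduced across ranks
opt = torch.optim.SGD(ddp_model.parameters(), lr=0.1)

x, target = torch.randn(4, 16), torch.randn(4, 1)
loss = torch.nn.functional.mse_loss(ddp_model(x), target)
loss.backward()
opt.step()
dist.destroy_process_group()
```

Each rank would also need a `torch.utils.data.DistributedSampler` so the dataset is sharded rather than duplicated; how cleanly that slots into ALIGNN's training loop is something the maintainers would know better.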