Open deavn2236 opened 4 months ago
I have the same question; it would be very helpful if this repo supported multi-GPU training, considering how long it takes to train some environments.
I think the UMI repository already provides a way to do multi-GPU training: https://github.com/real-stanford/universal_manipulation_interface
I was able to make it work with this repository as well. Only three changes are needed. First, use the conda environment from UMI instead of the diffusion_policy one.
Second, use this workspace: https://github.com/real-stanford/universal_manipulation_interface/blob/main/diffusion_policy/workspace/train_diffusion_unet_image_workspace.py
Third, in https://github.com/real-stanford/diffusion_policy/blob/main/diffusion_policy/model/common/module_attr_mixin.py change the dummy parameter to `self._dummy_variable = nn.Parameter(requires_grad=False)`.
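For context, a minimal sketch of what the modified `ModuleAttrMixin` would look like after that change (the `device`/`dtype` properties mirror the pattern used in the repo; treat this as illustrative rather than a verbatim copy of the file):

```python
import torch.nn as nn


class ModuleAttrMixin(nn.Module):
    def __init__(self):
        super().__init__()
        # requires_grad=False so distributed wrappers (e.g. DDP via
        # accelerate) don't expect a gradient for this placeholder
        # parameter, which never participates in the loss.
        self._dummy_variable = nn.Parameter(requires_grad=False)

    @property
    def device(self):
        # The dummy parameter guarantees parameters() is non-empty.
        return next(iter(self.parameters())).device

    @property
    def dtype(self):
        return next(iter(self.parameters())).dtype
```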
Then launch training with `accelerate launch --num_processes`.
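As an illustration, a full invocation might look like the following; the GPU count, entry-point script, and config name are placeholders, not values from this thread:

```shell
# Launch one process per GPU with Hugging Face Accelerate.
# Replace 2, train.py, and the config name with your own values.
accelerate launch --num_processes 2 train.py \
    --config-name=train_diffusion_unet_image_workspace
```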
How can I use multiple GPUs to train a model simultaneously?