guoqincode / Open-AnimateAnyone

Unofficial Implementation of Animate Anyone

No trainable param for unet in stage 1 #32

harlanhong closed this issue 9 months ago

harlanhong commented 9 months ago

```
unet = DDP(unet, device_ids=[local_rank], output_device=local_rank)
  File "/home/hongfating/miniconda3/envs/animate/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 551, in __init__
    self._log_and_throw(
  File "/home/hongfating/miniconda3/envs/animate/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 686, in _log_and_throw
    raise err_type(err_msg)
RuntimeError: DistributedDataParallel is not needed when a module doesn't have any parameter that requires a gradient.
```

harlanhong commented 9 months ago

There are no motion_modules in the UNet2DConditionModel, so no parameter requires a gradient. How can this be fixed?
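The failure mode can be checked independently of the training script: DDP raises this exact RuntimeError when every parameter of the wrapped module is frozen. A minimal sketch (a plain `nn.Linear` stands in for the UNet; the `motion` naming is hypothetical) shows how to verify there is at least one trainable parameter before wrapping:

```python
import torch.nn as nn

# Stand-in for the UNet: one frozen backbone layer and one layer we
# intend to train (hypothetically named "motion" here).
model = nn.ModuleDict({
    "backbone": nn.Linear(4, 4),
    "motion": nn.Linear(4, 4),
})

# Freeze everything first, as a stage-1 setup typically does.
for p in model.parameters():
    p.requires_grad = False

# Unfreeze only the submodules meant to be trained. If this step is
# skipped (e.g. because the config never instantiates them), DDP will
# raise "DistributedDataParallel is not needed ...".
for p in model["motion"].parameters():
    p.requires_grad = True

# Guard before calling DDP(model, ...): at least one param must require grad.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
assert trainable, "no trainable parameters - DDP wrapping would fail"
print(trainable)  # only the unfrozen submodule's parameters
```

If the list is empty, the fix is in the model/config side (making sure the intended trainable submodules exist and are unfrozen), not in the DDP call itself.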

guoqincode commented 9 months ago

I have updated the config for stage 1.