rinongal / textual_inversion

MIT License
2.87k stars 278 forks source link

Training Embedding Error #145

Closed 2454511550Lin closed 1 year ago

2454511550Lin commented 1 year ago

Dear author,

I ran into the following error when training the embeddings. I followed the command in your README:

```
python main.py --base configs/latent-diffusion/txt2img-1p4B-finetune.yaml -t --actual_resume models/ldm/text2img-large/model.ckpt -n attest --gpus 0, --data_root img/test --init_word dog
```

And the following error occurs:

```
Traceback (most recent call last):
  File "main.py", line 803, in <module>
    trainer.fit(model, data)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
    self._call_and_handle_interrupt(
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1199, in _run
    self._dispatch()
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1279, in _dispatch
    self.training_type_plugin.start_training(self)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
    self._results = trainer.run_stage()
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1289, in run_stage
    return self._run_train()
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1311, in _run_train
    self._run_sanity_check(self.lightning_module)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1375, in _run_sanity_check
    self._evaluation_loop.run()
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 110, in advance
    dl_outputs = self.epoch_loop.run(dataloader, dataloader_idx, dl_max_batches, self.num_dataloaders)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 122, in advance
    output = self._evaluation_step(batch, batch_idx, dataloader_idx)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 217, in _evaluation_step
    output = self.trainer.accelerator.validation_step(step_kwargs)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 236, in validation_step
    return self.training_type_plugin.validation_step(*step_kwargs.values())
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 444, in validation_step
    return self.model(*args, **kwargs)
  File "/dir/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/dir/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1156, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/dir/.local/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1110, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])  # type: ignore[index]
  File "/dir/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/dir/miniconda3/envs/ldm_ti/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 92, in forward
    output = self.module.validation_step(*inputs, **kwargs)
  File "/dir/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/dir/textual_inversion/ldm/models/diffusion/ddpm.py", line 368, in validation_step
    _, loss_dict_no_ema = self.shared_step(batch)
  File "/dir/textual_inversion/ldm/models/diffusion/ddpm.py", line 907, in shared_step
    loss = self(x, c)
  File "/dir/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/dir/textual_inversion/ldm/models/diffusion/ddpm.py", line 920, in forward
    return self.p_losses(x, c, t, *args, **kwargs)
  File "/dir/textual_inversion/ldm/models/diffusion/ddpm.py", line 1071, in p_losses
    logvar_t = self.logvar[t].to(self.device)
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
```

I didn't update any packages; I only created the environment according to environment.yaml. Any help with this issue would be greatly appreciated.
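For context, the error boils down to advanced indexing of a CPU tensor with a CUDA index tensor, which recent PyTorch versions reject. A minimal sketch (the names `logvar` and `t` mirror ddpm.py, but this standalone snippet is only an illustration):

```python
import torch

# `logvar` is a buffer that stays on the CPU, as in the repo;
# `t` holds sampled timestep indices, which may end up on the GPU.
logvar = torch.zeros(1000)
t = torch.randint(0, 1000, (4,))
if torch.cuda.is_available():
    t = t.cuda()  # on a GPU run, `logvar[t]` now raises the RuntimeError above

# Indexing with a CPU copy of `t` works on both CPU-only and GPU machines:
logvar_t = logvar[t.cpu()]
print(logvar_t.shape)  # torch.Size([4])
```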

doracsillag commented 1 year ago

@2454511550Lin Please check this answer, it seems to work as a quick fix on my side: https://github.com/rinongal/textual_inversion/issues/124#issuecomment-1346919527

rinongal commented 1 year ago

As noted by @doracsillag, changing the line to `logvar_t = self.logvar[t.cpu()].to(self.device)` seems to fix this.
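In context, the fix keeps the index on the CPU (where `self.logvar` lives) and only moves the result. A runnable sketch with a stand-in module (`DummyDiffusion` and `p_losses_logvar` are hypothetical; only the patched line comes from ddpm.py):

```python
import torch

class DummyDiffusion(torch.nn.Module):
    def __init__(self, device="cpu"):
        super().__init__()
        self.device = torch.device(device)
        self.logvar = torch.zeros(1000)  # stays on the CPU, as in the repo

    def p_losses_logvar(self, t):
        # original line: self.logvar[t].to(self.device) -> fails when t is CUDA
        # patched line: index with a CPU copy of t, then move the result
        return self.logvar[t.cpu()].to(self.device)

m = DummyDiffusion()
t = torch.randint(0, 1000, (4,))
print(m.p_losses_logvar(t).shape)  # torch.Size([4])
```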

I have no idea what's causing the problem. It seems to be a PyTorch versioning issue, and it's not unique to our repo.

2454511550Lin commented 1 year ago

Thank you @rinongal @doracsillag. That solved the problem. It looks like `t` is allocated on the GPU, which causes this error.