justinpinkney / stable-diffusion


Sampler choices for finetuned stable diffusion model #64

Open dioxin1997 opened 1 year ago

dioxin1997 commented 1 year ago

The Stable Diffusion model fine-tuned on the Pokemon dataset works well with the DDIM sampler but performs poorly with the PLMS sampler.
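
For reference, a minimal sketch (assuming the stock CompVis-style `ldm` samplers shipped with this repo) of how the two samplers are swapped for the same fine-tuned checkpoint; everything except the sampler class is held constant:

```python
# Minimal sketch, assuming the CompVis-style ldm API in this repo; the only
# difference between the two runs is the sampler class wrapping the model.
import torch
from ldm.models.diffusion.ddim import DDIMSampler
from ldm.models.diffusion.plms import PLMSSampler

def sample(model, prompt, use_plms=False, steps=50, scale=7.5, eta=0.0):
    sampler = PLMSSampler(model) if use_plms else DDIMSampler(model)
    with torch.no_grad(), model.ema_scope():
        uc = model.get_learned_conditioning([""])      # unconditional (empty-prompt) embedding
        c = model.get_learned_conditioning([prompt])   # conditional embedding
        samples, _ = sampler.sample(
            S=steps,
            conditioning=c,
            batch_size=1,
            shape=[4, 64, 64],                         # latent shape for 512x512 outputs
            unconditional_guidance_scale=scale,
            unconditional_conditioning=uc,
            eta=eta,                                   # eta only matters for DDIM
        )
    return model.decode_first_stage(samples)
```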

dioxin1997 commented 1 year ago

00059 00058: the first uses DDIM and the second uses PLMS; all other params are the same.

dioxin1997 commented 1 year ago

I used "512-base-ema.ckpt" of stable diffusion v2 and parameterization: "v" in the finetuning.

justinpinkney commented 1 year ago

That is interesting, not something I've looked into. I do remember some anecdotal evidence that DreamBooth worked better with DDIM too.

lvsi-qi commented 1 year ago

> That is interesting, not something I've looked into. I do remember some anecdotal evidence that DreamBooth worked better with DDIM too.

```
Traceback (most recent call last):
  File "/content/stable-diffusion/main.py", line 923, in <module>
    raise err
  File "/content/stable-diffusion/main.py", line 905, in <module>
    trainer.fit(model, data)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
    self._run(model)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
    self._dispatch()
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
    self.accelerator.start_training(self)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
    return self._run_train()
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
    self.fit_loop.run()
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
    epoch_output = self.epoch_loop.run(train_dataloader)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 130, in advance
    batch_output = self.batch_loop.run(batch, self.iteration_count, self._dataloader_idx)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 101, in run
    super().run(batch, batch_idx, dataloader_idx)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 148, in advance
    result = self._run_optimization(batch_idx, split_batch, opt_idx, optimizer)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 202, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 396, in _optimizer_step
    model_ref.optimizer_step(
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1618, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 209, in step
    self.optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 129, in optimizer_step
    trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 296, in optimizer_step
    self.run_optimizer_step(optimizer, opt_idx, lambda_closure, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 303, in run_optimizer_step
    self.training_type_plugin.optimizer_step(optimizer, lambda_closure=lambda_closure, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 226, in optimizer_step
    optimizer.step(closure=lambda_closure, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/optim/adamw.py", line 119, in step
    loss = closure()
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 236, in _training_step_and_backward_closure
    result = self.training_step_and_backward(split_batch, batch_idx, opt_idx, optimizer, hiddens)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 537, in training_step_and_backward
    result = self._training_step(split_batch, batch_idx, opt_idx, hiddens)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 307, in _training_step
    training_step_output = self.trainer.accelerator.training_step(step_kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 193, in training_step
    return self.training_type_plugin.training_step(*step_kwargs.values())
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/ddp.py", line 383, in training_step
    return self.model(*args, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/pytorch_lightning/overrides/base.py", line 82, in forward
    output = self.module.training_step(*inputs, **kwargs)
  File "/content/stable-diffusion/ldm/models/diffusion/ddpm.py", line 406, in training_step
    loss, loss_dict = self.shared_step(batch)
  File "/content/stable-diffusion/ldm/models/diffusion/ddpm.py", line 872, in shared_step
    x, c = self.get_input(batch, self.first_stage_key)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/stable-diffusion/ldm/models/diffusion/ddpm.py", line 725, in get_input
    encoder_posterior = self.encode_first_stage(x)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/stable-diffusion/ldm/models/diffusion/ddpm.py", line 869, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "/content/stable-diffusion/ldm/models/autoencoder.py", line 325, in encode
    h = self.encoder(x)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/stable-diffusion/ldm/modules/diffusionmodules/model.py", line 439, in forward
    hs = [self.conv_in(x)]
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/content/stable-diffusion/env/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [128, 3, 3, 3], expected input[512, 1, 512, 3] to have 3 channels, but got 1 channels instead
```

Please help me.

justinpinkney commented 1 year ago

Looks like your images are somehow not the right shape; they should be (1, 3, 512, 512). The code expects NHWC images out of your dataset, I think, and converts them to NCHW somewhere else.
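
A minimal sketch (hypothetical helper, not this repo's dataset code) of the kind of check that usually fixes this: force every image to RGB and return it as an (H, W, 3) array, since the training code rearranges NHWC to NCHW before the first-stage encoder sees it. Grayscale or palette-mode images are what typically produce a 1-channel input like the one in the RuntimeError above.

```python
# Minimal sketch (hypothetical loader): make the dataset always yield RGB, HWC,
# float images in [-1, 1], which is the layout the training code expects to
# receive before it permutes to NCHW.
from PIL import Image
import numpy as np

def load_image_hwc(path, size=512):
    img = Image.open(path).convert("RGB")          # force 3 channels
    img = img.resize((size, size), Image.BICUBIC)
    arr = np.asarray(img, dtype=np.float32)        # shape (H, W, 3), values 0..255
    arr = arr / 127.5 - 1.0                        # scale to [-1, 1]
    assert arr.shape == (size, size, 3), f"bad shape {arr.shape} for {path}"
    return {"image": arr}                          # assuming first_stage_key is "image"
```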