ysy31415 / unipaint

Code Implementation of "Uni-paint: A Unified Framework for Multimodal Image Inpainting with Pretrained Diffusion Model"
Apache License 2.0

inpaint_with_exemplar.ipynb #3

Open AB00k opened 9 months ago

AB00k commented 9 months ago

First of all, the notebook isn't very well structured: there are no cells to install the dependencies or download the models, and there are no proper comments. For example, how would somebody know what the purpose of the masked finetuning section is?

I want to make a notebook that anybody can use by simply running every cell on Google Colab. I have structured such a notebook but am still having some issues.

1- What is the purpose of the masked finetuning section? Do we have to run masked finetuning every time we run inference with a new original image, masked image, and example image?

2- When I run the section named Load_Images_and_Models, I get an error. Please see my modified notebook: https://colab.research.google.com/drive/1hpTIa2E12xlEhrjGQGSSZ-O2sHw_D9H2?usp=sharing
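(For anyone following along: a minimal Colab setup cell could look like the sketch below. The repo URL comes from this page; the `--recursive` flag, `requirements.txt`, and the pinned lightning version are assumptions based on fixes discussed later in this thread, not confirmed by the repo's README.)

```python
# Hypothetical Colab setup cell -- flags and file names are assumptions,
# not taken from the repo's README.
!git clone --recursive https://github.com/ysy31415/unipaint /content/unipaint
%cd /content/unipaint
!pip install -r requirements.txt
!pip install pytorch_lightning==1.7.7  # version pin suggested later in this thread
```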

ysy31415 commented 9 months ago

Hi, thanks for your nice improvement!

To answer your question:

  1. Masked finetuning first lets the model fit the known area of the input image, which brings more coherent results in the inpainted region. As shown below, the more iterations we finetune for, the more coherent the results become.

[Figure: ablation of masked finetuning — inpainted results grow more coherent as finetuning iterations increase]

And yes, if you change the original/masked/example image, you have to re-run the masked finetuning, since this is a one-shot approach (which is not very nice :<).
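To make the idea concrete, here is a minimal sketch of what such a masked finetuning step does. The variable names (`z0`, `mask_latent`, `cond`, `num_finetune_steps`) are hypothetical, the mask convention is assumed, and only the generic LDM methods `q_sample`/`apply_model` from this codebase are relied on; this is not the repo's exact code.

```python
import torch

# One-shot masked finetuning sketch (hypothetical names, assumptions noted above):
#   z0          -- VAE latent of the input image, shape (1, C, h, w)
#   mask_latent -- 1 inside the hole, 0 on the known area (convention assumed)
#   cond        -- conditioning, e.g. the exemplar/text embedding
opt = torch.optim.Adam(params_to_be_optimized, lr=1e-5)
for step in range(num_finetune_steps):
    t = torch.randint(0, model.num_timesteps, (1,), device=device)
    noise = torch.randn_like(z0)
    z_t = model.q_sample(z0, t, noise=noise)  # forward-diffuse the latent
    pred = model.apply_model(z_t, t, cond)    # predict the injected noise
    # supervise only on the known (unmasked) region so the model fits it
    loss = (((pred - noise) ** 2) * (1.0 - mask_latent)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```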

  2. This error probably occurs because you are using a newer version of pytorch_lightning. You can solve it by changing pytorch_lightning.utilities.distributed to pytorch_lightning.utilities.rank_zero in ldm/models/diffusion/ddpm.py, line 20 (see the snippet below).

Or alternatively, downgrading pytorch_lightning to v1.7.7 or v1.6.5 may solve it.

reference: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11458
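For reference, the one-line change would look like this (the imported name `rank_zero_only` is my guess at what that line imports; check the actual line 20 in your copy):

```python
# ldm/models/diffusion/ddpm.py, line 20
# before -- fails on newer pytorch_lightning, which moved this module:
from pytorch_lightning.utilities.distributed import rank_zero_only
# after:
from pytorch_lightning.utilities.rank_zero import rank_zero_only
```

The downgrade route is simply `pip install pytorch_lightning==1.7.7` (or `==1.6.5`).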

AB00k commented 9 months ago

A new error comes up. Could you please run the notebook I have provided? It would only take a few minutes; just run all the cells.

```
ModuleNotFoundError                       Traceback (most recent call last)
in <cell line: 13>()
     11 config.model.params.personalization_config.params.placeholder_strings = ['#']  # placeholder word for exemplar, '#' by default, you may change to other symbols.
     12
---> 13 model = load_model_from_config(config, ckpt, device)
     14 sampler = DDIMSampler(model)
     15 params_to_be_optimized = list(model.model.parameters())

11 frames

/content/unipaint/ldm/models/autoencoder.py in <module>
      4 from contextlib import contextmanager
      5
----> 6 from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer
      7
      8 from ldm.modules.diffusionmodules.model import Encoder, Decoder

ModuleNotFoundError: No module named 'taming'
```

ysy31415 commented 9 months ago

Hi, the code runs well on my local machine, but I don't know why it hits so many errors when running on Colab.

Anyway, you can solve this error by adding these lines to the import section (you also need `import sys` if it isn't already imported):

```python
import sys

# make the bundled taming-transformers and clip packages importable
sys.path.append('/content/unipaint/src/taming-transformers')
sys.path.append('/content/unipaint/src/clip')
```

For your reference, I have slightly modified your notebook as follows, fixing the errors found so far:

https://colab.research.google.com/drive/17c52SboRZwokqkICutL6Kh2yq9NpKmX-?usp=sharing

I am really sorry that I couldn't successfully run the entire notebook due to the memory limit; it seems the free-tier Colab RAM is not enough to load the entire model :(