TencentARC / PhotoMaker

PhotoMaker [CVPR 2024]
https://photo-maker.github.io/

Impossible to run both Gradio demo and notebook demo #163

Open · ThomasHezard opened this issue 3 months ago

ThomasHezard commented 3 months ago

Hello,

After the resolution of #157, I finally tested the Gradio demo and tried to have fun with the model in notebooks.
I ran into issues with both... 😕

Gradio demo

After installing a few missing dependencies (einops and onnxruntime), I managed to start the demo. However, with both the demo's example images and my own, I get the following error on every inference:

[Debug] Generate image using aspect ratio [Instagram (1:1)] => 1024 x 1024
Start inference...
[Debug] Prompt: sci-fi, closeup portrait photo of a man img wearing the sunglasses in Iron man suit, face, slim body, high quality, film grain, 
[Debug] Neg Prompt:  (asymmetry, worst quality, low quality, illustration, 3d, 2d, painting, cartoons, sketch), open mouth
10
Traceback (most recent call last):
  File "[...]/envs/photomaker/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "[...]/envs/photomaker/lib/python3.10/site-packages/gradio/route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
  File "[...]/envs/photomaker/lib/python3.10/site-packages/gradio/blocks.py", line 1923, in process_api
    result = await self.call_function(
  File "[...]/envs/photomaker/lib/python3.10/site-packages/gradio/blocks.py", line 1508, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "[...]/envs/photomaker/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "[...]/envs/photomaker/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "[...]/envs/photomaker/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/gradio/utils.py", line 818, in wrapper
    response = f(*args, **kwargs)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/gradio/utils.py", line 818, in wrapper
    response = f(*args, **kwargs)
  File "[...]/PhotoMaker/gradio_demo/app.py", line 99, in generate_image
    images = pipe(
  File "[...]/envs/photomaker/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/photomaker/pipeline.py", line 708, in __call__
    prompt_embeds = self.id_encoder(id_pixel_values, prompt_embeds, class_tokens_mask)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/photomaker/model.py", line 107, in forward
    updated_prompt_embeds = self.fuse_module(prompt_embeds, id_embeds, class_tokens_mask)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/photomaker/model.py", line 83, in forward
    stacked_id_embeds = self.fuse_fn(image_token_embeds, valid_id_embeds)
  File "[...]/envs/photomaker/lib/python3.10/site-packages/photomaker/model.py", line 49, in fuse_fn
    stacked_id_embeds = torch.cat([prompt_embeds, id_embeds], dim=-1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 8 but got size 4 for tensor number 1 in the list.

Notebook demo

When instantiating the pipeline as in the notebook demo, or in my own script/notebook following the code example, whether using hf_hub_download or downloading the model manually, I get the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[3], line 12
      3 photomaker_ckpt = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v1.bin", repo_type="model")
      5 pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
      6     base_model_path,
      7     torch_dtype=torch.bfloat16,
      8     use_safetensors=True,
      9     variant="fp16",
     10 ).to(device)
---> 12 pipe.load_photomaker_adapter(
     13     os.path.dirname(photomaker_ckpt),
     14     subfolder="",
     15     weight_name=os.path.basename(photomaker_ckpt),
     16     trigger_word="img"
     17 )
     18 pipe.id_encoder.to(device)
     21 #pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
     22 #pipe.fuse_lora()

File [...]/envs/photomaker/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

File [...]/PhotoMaker/photomaker/pipeline.py:239, in PhotoMakerStableDiffusionXLPipeline.load_photomaker_adapter(self, pretrained_model_name_or_path_or_dict, weight_name, subfolder, trigger_word, pm_version, **kwargs)
    236 else:
    237     raise NotImplementedError(f"The PhotoMaker version [{pm_version}] does not support")
--> 239 id_encoder.load_state_dict(state_dict["id_encoder"], strict=True)
    240 id_encoder = id_encoder.to(self.device, dtype=self.unet.dtype)
    241 self.id_encoder = id_encoder

File [...]/envs/photomaker/lib/python3.10/site-packages/torch/nn/modules/module.py:2215, in Module.load_state_dict(self, state_dict, strict, assign)
   2210         error_msgs.insert(
   2211             0, 'Missing key(s) in state_dict: {}. '.format(
   2212                 ', '.join(f'"{k}"' for k in missing_keys)))
   2214 if len(error_msgs) > 0:
-> 2215     raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
   2216                        self.__class__.__name__, "\n\t".join(error_msgs)))
   2217 return _IncompatibleKeys(missing_keys, unexpected_keys)

RuntimeError: Error(s) in loading state_dict for PhotoMakerIDEncoder_CLIPInsightfaceExtendtoken:
    Missing key(s) in state_dict: "qformer_perceiver.token_proj.0.weight", "qformer_perceiver.token_proj.0.bias", "qformer_perceiver.token_proj.2.weight", "qformer_perceiver.token_proj.2.bias", "qformer_perceiver.token_norm.weight", "qformer_perceiver.token_norm.bias", "qformer_perceiver.perceiver_resampler.proj_in.weight", "qformer_perceiver.perceiver_resampler.proj_in.bias", "qformer_perceiver.perceiver_resampler.proj_out.weight", "qformer_perceiver.perceiver_resampler.proj_out.bias", "qformer_perceiver.perceiver_resampler.norm_out.weight", "qformer_perceiver.perceiver_resampler.norm_out.bias", "qformer_perceiver.perceiver_resampler.layers.0.0.norm1.weight", "qformer_perceiver.perceiver_resampler.layers.0.0.norm1.bias", "qformer_perceiver.perceiver_resampler.layers.0.0.norm2.weight", "qformer_perceiver.perceiver_resampler.layers.0.0.norm2.bias", "qformer_perceiver.perceiver_resampler.layers.0.0.to_q.weight", "qformer_perceiver.perceiver_resampler.layers.0.0.to_kv.weight", "qformer_perceiver.perceiver_resampler.layers.0.0.to_out.weight", "qformer_perceiver.perceiver_resampler.layers.0.1.0.weight", "qformer_perceiver.perceiver_resampler.layers.0.1.0.bias", "qformer_perceiver.perceiver_resampler.layers.0.1.1.weight", "qformer_perceiver.perceiver_resampler.layers.0.1.3.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.norm1.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.norm1.bias", "qformer_perceiver.perceiver_resampler.layers.1.0.norm2.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.norm2.bias", "qformer_perceiver.perceiver_resampler.layers.1.0.to_q.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.to_kv.weight", "qformer_perceiver.perceiver_resampler.layers.1.0.to_out.weight", "qformer_perceiver.perceiver_resampler.layers.1.1.0.weight", "qformer_perceiver.perceiver_resampler.layers.1.1.0.bias", "qformer_perceiver.perceiver_resampler.layers.1.1.1.weight", "qformer_perceiver.perceiver_resampler.layers.1.1.3.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.norm1.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.norm1.bias", "qformer_perceiver.perceiver_resampler.layers.2.0.norm2.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.norm2.bias", "qformer_perceiver.perceiver_resampler.layers.2.0.to_q.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.to_kv.weight", "qformer_perceiver.perceiver_resampler.layers.2.0.to_out.weight", "qformer_perceiver.perceiver_resampler.layers.2.1.0.weight", "qformer_perceiver.perceiver_resampler.layers.2.1.0.bias", "qformer_perceiver.perceiver_resampler.layers.2.1.1.weight", "qformer_perceiver.perceiver_resampler.layers.2.1.3.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.norm1.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.norm1.bias", "qformer_perceiver.perceiver_resampler.layers.3.0.norm2.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.norm2.bias", "qformer_perceiver.perceiver_resampler.layers.3.0.to_q.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.to_kv.weight", "qformer_perceiver.perceiver_resampler.layers.3.0.to_out.weight", "qformer_perceiver.perceiver_resampler.layers.3.1.0.weight", "qformer_perceiver.perceiver_resampler.layers.3.1.0.bias", "qformer_perceiver.perceiver_resampler.layers.3.1.1.weight", "qformer_perceiver.perceiver_resampler.layers.3.1.3.weight".

Question

Any idea how to make these demos work? Is there something I did wrong?

RichardGoldbergSchool commented 3 months ago

Also having this issue.

shadowwider commented 3 months ago

I also got this error, but it was fixed by just installing the missing libraries: python.exe -m pip install onnxruntime-gpu

Paper99 commented 3 months ago

Try the demo scripts in this folder: https://github.com/TencentARC/PhotoMaker/tree/main/inference_scripts

ThomasHezard commented 3 months ago

Try the demo scripts in this folder: https://github.com/TencentARC/PhotoMaker/tree/main/inference_scripts

After fixing inference_pmv2.py lines 46 and 48 (photomaker_variable should be photomaker_ckpt), the script runs fine.

[EDIT]
After some digging, I found that there is a gradio_demo/app_v2.py script. This one runs after installing all the necessary dependencies (requirements.txt, git+https://github.com/TencentARC/PhotoMaker.git, einops, onnxruntime-gpu).

One question: a lot of downloading happens the first time the demo starts. Is there any way to fetch everything in advance with a bash or Python script?
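Something like the following, maybe? (An untested sketch: the PhotoMaker repo ID and filename are the ones from the notebook; the SDXL base model is my assumption about the demo's default base_model_path.)

# prefetch.py - pre-download what the demo pulls on first start
from huggingface_hub import hf_hub_download, snapshot_download

# PhotoMaker adapter weights (the notebook downloads photomaker-v1.bin from this repo)
hf_hub_download(
    repo_id="TencentARC/PhotoMaker",
    filename="photomaker-v1.bin",
    repo_type="model",
)

# Base diffusion model (assumption: SDXL base, as in the published examples)
snapshot_download(repo_id="stabilityai/stable-diffusion-xl-base-1.0")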

canrly commented 2 months ago

Also having this issue.

hotpot-killer commented 2 months ago

same issue

AyushRanjan15 commented 1 month ago

I experienced the same issue. Things work in general, but the default values in 'photomaker_demo.ipynb' do not work by themselves.

In the notebook, the line

photomaker_ckpt = hf_hub_download(repo_id="TencentARC/PhotoMaker", filename="photomaker-v1.bin", repo_type="model")

downloads the v1 checkpoint, while the code below

pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(
    base_model_path,
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
    variant="fp16",
).to(device)

pipe.load_photomaker_adapter(
    os.path.dirname(photomaker_ckpt),
    subfolder="",
    weight_name=os.path.basename(photomaker_ckpt),
    trigger_word="img"
)

calls load_photomaker_adapter with its defaults, which set pm_version='v2'. This v1/v2 mismatch leads to the error.

Snippet from photomaker/pipeline.py:


class PhotoMakerStableDiffusionXLPipeline(StableDiffusionXLPipeline):
    @validate_hf_hub_args
    def load_photomaker_adapter(
        self,
        pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]],
        weight_name: str,
        subfolder: str = '',
        trigger_word: str = 'img',
        pm_version: str = 'v2',
        **kwargs,
    ):
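
So a likely fix when loading the v1 checkpoint is to pass pm_version explicitly so it matches the downloaded weights (a sketch based on the notebook code above):

pipe.load_photomaker_adapter(
    os.path.dirname(photomaker_ckpt),
    subfolder="",
    weight_name=os.path.basename(photomaker_ckpt),
    trigger_word="img",
    pm_version="v1",  # match photomaker-v1.bin; the default "v2" builds a different id_encoder
)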