muzishen / IMAGDressing

👔IMAGDressing👔: Interactive Modular Apparel Generation for Virtual Dressing. It enables customizable human image generation with flexible garment, pose, and scene control, ensuring high fidelity and garment consistency for virtual try-on.
https://imagdressing.github.io/
Apache License 2.0

Some questions on `inference_IMAGdressing_controlnetinpainting.py` #7

Closed · ipfans closed 4 months ago

ipfans commented 4 months ago

Thanks for your great work. I ran into some confusion when trying to run the `inference_IMAGdressing_controlnetinpainting.py` file, mainly about which checkpoints need to be downloaded for it.

WhiteJiang commented 4 months ago

Thank you for your attention. VTON's `ckpt/humanparsing` and `ckpt/openpose` need to be downloaded. For IP-Adapter, we use its `models/image_encoder`; the corresponding `config.json` is also in that directory.
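
For reference, here is a rough sketch (not the repo's own download script) of fetching just the IP-Adapter image encoder with `huggingface_hub`; the `local_dir` layout is an assumption, and `ckpt/humanparsing` and `ckpt/openpose` still need to be fetched from the VTON checkpoint repo named in the README:

```python
# Hedged sketch, not part of the repo: download only the IP-Adapter image
# encoder subfolder (weights plus its config.json). The local_dir below is
# an assumed layout; adjust it to match the paths expected by the script.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="h94/IP-Adapter",                   # official IP-Adapter weights on the Hub
    allow_patterns=["models/image_encoder/*"],  # encoder weights + config.json only
    local_dir="ckpt/IP-Adapter",                # assumed local directory
)
```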

ipfans commented 4 months ago

@WhiteJiang thank you for your reply. For the IP-Adapter, I think we can pass `subfolder="models/image_encoder"` when loading the image encoder to avoid confusion.
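
A minimal sketch of what that suggestion would look like, assuming the encoder is loaded with transformers' `CLIPVisionModelWithProjection` (the repo's actual loading code may differ):

```python
# Sketch of the suggestion: point from_pretrained at the IP-Adapter repo
# root and select the encoder via `subfolder`, instead of requiring users
# to assemble the models/image_encoder path by hand.
from transformers import CLIPVisionModelWithProjection

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",                  # or a local clone of the IP-Adapter weights
    subfolder="models/image_encoder",
)
```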

WhiteJiang commented 4 months ago

@ipfans Thank you for your suggestion, we will make changes to avoid confusion.

ipfans commented 4 months ago

@WhiteJiang Also, this project's dependencies seem to be broken for this file:

```
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/ubuntu/imagd/inference_IMAGdressing_controlnetinpainting.py", line 264, in <module>
    output = pipe(
  File "/home/ubuntu/imagd/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/ubuntu/imagd/dressing_sd/pipelines/IMAGDressing_v1_pipeline_controlnet_inpainting.py", line 262, in __call__
    control_image = self.prepare_control_image(
  File "/home/ubuntu/imagd/dressing_sd/pipelines/IMAGDressing_v1_pipeline_controlnet_inpainting.py", line 95, in prepare_control_image
    image = self.control_image_processor.preprocess(
TypeError: VaeImageProcessor.preprocess() got an unexpected keyword argument 'crops_coords'
```

I am currently using diffusers version 0.24.0, which is the version pinned in the `requirements.txt` file.

```
pip list | grep diffusers
diffusers                0.24.0
```

WhiteJiang commented 4 months ago

@ipfans This is a bug caused by the diffusers version. We solved the problem after upgrading to version 0.26.3.
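
To save others from hitting the same traceback, a hypothetical guard (not part of the repo) could check the installed diffusers version before running the pipeline:

```python
# Hypothetical version guard, not in the repo: fail early with a clear
# message when diffusers is older than the release that resolved the
# `crops_coords` keyword error in this pipeline.
from packaging import version
import diffusers

MIN_DIFFUSERS = "0.26.3"
if version.parse(diffusers.__version__) < version.parse(MIN_DIFFUSERS):
    raise RuntimeError(
        f"diffusers {diffusers.__version__} is too old for "
        f"inference_IMAGdressing_controlnetinpainting.py; please upgrade "
        f"(e.g. pip install -U diffusers=={MIN_DIFFUSERS})."
    )
```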

ipfans commented 4 months ago

Oh, it finally works... thank you, guys!