haofanwang / ControlNet-for-Diffusers

Transfer the ControlNet with any basemodel in diffusersπŸ”₯

Please help! #33

randompaga opened this issue 1 year ago

randompaga commented 1 year ago

python ./scripts/convert_controlnet_to_diffusers.py --checkpoint_path control_any3_openpose.pth --dump_path control_any3_openpose --device cpu

I did not find the file convert_controlnet_to_diffusers.py.

haofanwang commented 1 year ago

Please specify the commit ID @randompaga .

git clone https://github.com/takuma104/diffusers.git
cd diffusers
git checkout 9a37409663a53f775fa380db332d37d7ea75c915
randompaga commented 1 year ago

Please specify the commit ID @randompaga .

git clone https://github.com/takuma104/diffusers.git
cd diffusers
git checkout 9a37409663a53f775fa380db332d37d7ea75c915

Yes, it worked.

But another error occurred.

[screenshot of the error]

Please help!

haofanwang commented 1 year ago

How did you install diffusers? You should run the following command from inside the cloned repo, instead of pip install diffusers. @randompaga

pip install .

Also, diffusers has recently added official support for ControlNet. You can refer to their codebase if you need the newest version of diffusers; I will also update this tutorial soon. Anyway, if you just want to convert, this approach still works.
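
For reference, the officially supported path in recent diffusers releases looks roughly like the sketch below. This is only an illustration, not code from this repo; the model IDs and file names are placeholders.

import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.utils import load_image

# Load an OpenPose ControlNet and attach it to a base Stable Diffusion model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The conditioning image must already be an OpenPose skeleton map (file name is a placeholder).
pose = load_image("pose.png")
image = pipe("1girl", image=pose, num_inference_steps=20).images[0]
image.save("output.png")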

randompaga commented 1 year ago

How did you install diffusers? You should run the following command from inside the cloned repo, instead of pip install diffusers. @randompaga

pip install .

Also, diffusers has recently added official support for ControlNet. You can refer to their codebase if you need the newest version of diffusers; I will also update this tutorial soon. Anyway, if you just want to convert, this approach still works.

Thanks very much! I can generate an image, but the result seems wrong.

What I input: [input image attachment]

Prompt: 1girl

What I got: [generated image attachment]

I used the file https://huggingface.co/toyxyz/Control_any3/resolve/main/control_any3_openpose.pth, which I converted to diffusers format.

root@VM-0-7-ubuntu:/home/ubuntu/diffusers# python ./scripts/convert_controlnet_to_diffusers.py --checkpoint_path /home/ubuntu/ControlNet/models/control_any3_openpose.pth --dump_path control_any3_openpose --device cpu
global_step key not found in model
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.16.mlp.fc1.weight',

root@VM-0-7-ubuntu:/home/ubuntu/diffusers# python test2.py
/usr/local/lib/python3.8/dist-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
16%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ

Thanks for your help.

haofanwang commented 1 year ago

@randompaga How do you run inference? Can you attach your code?
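
For context, a minimal inference script for the converted dump would look roughly like the sketch below. This is only an illustration: the controlnet_hint argument follows the experimental API of the takuma104 fork pinned above and is an assumption here, as are the file names; the official diffusers API uses image= instead.

import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline

# Load the pipeline directory dumped by convert_controlnet_to_diffusers.py.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "./control_any3_openpose", torch_dtype=torch.float16
).to("cuda")

# The hint should be the OpenPose skeleton image, not a raw photo (file name is a placeholder).
pose = Image.open("pose.png").convert("RGB")
image = pipe(prompt="1girl", controlnet_hint=pose, num_inference_steps=30).images[0]
image.save("generated.png")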