Xiaojiu-z / Stable-Makeup

Pytorch Implementation of "Stable-Makeup: When Real-World Makeup Transfer Meets Diffusion Model"
Apache License 2.0

About makeup_encoder.generate #7

Closed: LvYangming closed this issue 3 months ago

LvYangming commented 3 months ago

I have configured the environment and downloaded the corresponding model, but I still get an error!

```
SPIGA model loaded!
SPIGA model loaded!
/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(
/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Loading pipeline components...: 100%|██████████| 6/6 [00:00<00:00, 21.40it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline'> by passing safety_checker=None. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/functional.py:4298: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
  warnings.warn(
/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/functional.py:4236: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
  warnings.warn(
Traceback (most recent call last):
  File "infer_kps.py", line 99, in <module>
    infer()
  File "infer_kps.py", line 94, in infer
    result_img = makeup_encoder.generate(id_image=[id_image, pose_image], makeup_image=makeup_image,
  File "/home/paperspace/workspace/Stable-Makeup/detail_encoder/encoder_plus.py", line 103, in generate
    image = pipe(
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/pipelines/controlnet/pipeline_controlnet.py", line 1010, in __call__
    down_block_res_samples, mid_block_res_sample = self.controlnet(
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/pipelines/controlnet/multicontrolnet.py", line 48, in forward
    down_samples, mid_sample = controlnet(
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/models/controlnet.py", line 783, in forward
    sample, res_samples = downsample_block(
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/models/unet_2d_blocks.py", line 1160, in forward
    hidden_states = attn(
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/models/transformer_2d.py", line 375, in forward
    hidden_states = block(
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/models/attention.py", line 293, in forward
    attn_output = self.attn2(
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/models/attention_processor.py", line 522, in forward
    return self.processor(
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/models/attention_processor.py", line 1218, in __call__
    key = attn.to_k(encoder_hidden_states, *args)
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/paperspace/workspace/Stable-Makeup/diffusers/models/lora.py", line 300, in forward
    out = super().forward(hidden_states)
  File "/home/paperspace/anaconda3/envs/myenv/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (6168x1024 and 768x320)
```
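
For readers hitting the same thing: the final RuntimeError is PyTorch's generic message for feeding a linear layer features whose last dimension does not match its in_features. A minimal sketch (illustration only, not code from this repo) that reproduces the same mismatch with the shapes taken from the log above:

```python
# Minimal reproduction sketch, not Stable-Makeup code. In stock SD v1.5 the first
# down block's cross-attention key projection is Linear(768 -> 320); feeding it
# 1024-dim features raises exactly the error reported above.
import torch
import torch.nn as nn

to_k = nn.Linear(768, 320, bias=False)            # weight stored as (320, 768), reported as 768x320
encoder_hidden_states = torch.randn(6168, 1024)   # feature width 1024; sizes copied from the log

try:
    to_k(encoder_hidden_states)
except RuntimeError as err:
    print(err)  # mat1 and mat2 shapes cannot be multiplied (6168x1024 and 768x320)
```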

Xiaojiu-z commented 3 months ago

It seems that the error happens in the makeup cross-attention layers... maybe you have changed some code.
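
To make the diagnosis concrete: the 768 in the error is what the stock SD v1.5 cross-attention layers were built for. A small sketch (illustration only, not repo code; it downloads the stock UNet from the Hub) that lists the feature width every attn2 key projection expects:

```python
# Illustration only: inspect what the stock SD v1.5 cross-attention layers expect
# as encoder_hidden_states width. The ControlNet in the traceback mirrors these
# encoder blocks, so its to_k projections are likewise Linear(768 -> ...).
from diffusers import UNet2DConditionModel

def cross_attention_key_dims(module):
    # Collect the in_features of every cross-attention (attn2) key projection.
    return sorted({
        m.to_k.in_features
        for name, m in module.named_modules()
        if name.endswith("attn2") and hasattr(m, "to_k")
    })

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
print(cross_attention_key_dims(unet))  # [768]; the features in the traceback are 1024-dim
```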

LvYangming commented 3 months ago

Yes, I made a few changes!

1. [screenshot] I can't run it without modifying it like this.
2. [screenshot] Here are the two models, both of which I got from Hugging Face:
   - sdv1-5, downloaded from https://huggingface.co/runwayml/stable-diffusion-v1-5
   - image_encoder_l, downloaded from https://huggingface.co/openai/clip-vit-large-patch14/tree/main (I only downloaded pytorch_model.bin and config.json into "./models/image_encoder_l")

Can you help me figure out which modification caused this problem? Thanks @Xiaojiu-z!
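
A side note on the image_encoder_l download: config.json plus pytorch_model.bin should be enough for the load below, and the vision tower of clip-vit-large-patch14 reports hidden_size = 1024, which is exactly the 1024 in the error, while the stock SD v1.5 attention expects 768. A quick sanity check (illustration only, using the local path mentioned above):

```python
# Sanity-check sketch, not repo code: load the image encoder from the local folder
# described above and print the feature width it produces. For ViT-L/14 this is
# 1024, which does not match the 768 the stock SD v1.5 attention layers expect,
# which is presumably why the repo ships its own modified pipeline/attention code.
from transformers import CLIPVisionModel

image_encoder = CLIPVisionModel.from_pretrained("./models/image_encoder_l")
print(image_encoder.config.hidden_size)  # 1024 for openai/clip-vit-large-patch14
```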

Xiaojiu-z commented 3 months ago

🤔 You cannot change the pipeline that is imported from utils.pipeline_sd15, because I have changed some code in that pipeline. Maybe you can follow this or the WebUI and try again. Following his steps should work.
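
One quick way to verify which pipeline class is actually being used (`pipe` is assumed to be the pipeline object constructed in infer_kps.py; the module names below are taken from the traceback and the comment above, not re-checked against the current repo):

```python
# Illustration only: print where the pipeline class in use was defined. In the
# failing run the call went through the vendored stock pipeline
# (Stable-Makeup/diffusers/pipelines/controlnet/pipeline_controlnet.py); per the
# comment above it should be the modified pipeline imported from utils.pipeline_sd15.
print(type(pipe))
print(type(pipe).__module__)  # if this points at the stock diffusers module, the repo's pipeline was swapped out
```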

LvYangming commented 3 months ago

Thanks @Xiaojiu-z, it works now and the effect is amazing. It's great work, thank you for your contribution!