Open YuzhiChen001 opened 2 months ago
I am currently unfamiliar with building a pipeline with LoRA, and since it was not a prioritized feature when this project was done, I may not have kept all the necessary lines of code from the original pipeline to support LoRA.
If you want to modify it to support LoRA yourself, I suggest first checking the original ControlNet pipeline to identify the LoRA-related lines in its __call__ function, and then checking whether each one has a counterpart in the current pipeline. If all the lines are included, it may be related to the version issue you mentioned. I use diffusers 0.19.3 because I modified the attention processor based on the implementation in that specific diffusers version. Disabling the self-attention reuse, or re-implementing that processor for newer versions, would remove the version limitation.
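For reference, the LoRA-related plumbing in the stock StableDiffusionControlNetPipeline.__call__ looks roughly like the sketch below. This is paraphrased from diffusers 0.19.x, not verbatim library code, so check your installed version for the exact lines; these are the pieces to look for and mirror in the custom pipeline.

```python
# Paraphrased sketch of diffusers 0.19.x StableDiffusionControlNetPipeline.__call__.

# 1. The LoRA scale for the text encoder is read out of cross_attention_kwargs:
text_encoder_lora_scale = (
    cross_attention_kwargs.get("scale", None)
    if cross_attention_kwargs is not None
    else None
)

# 2. ...and forwarded when encoding the prompt, so the text-encoder LoRA
#    layers are scaled as well:
prompt_embeds, negative_prompt_embeds = self.encode_prompt(
    prompt,
    device,
    num_images_per_prompt,
    do_classifier_free_guidance,
    negative_prompt,
    lora_scale=text_encoder_lora_scale,
)

# 3. In the denoising loop, cross_attention_kwargs is handed to the UNet,
#    which fans it out as **kwargs to every attention processor:
noise_pred = self.unet(
    latent_model_input,
    t,
    encoder_hidden_states=prompt_embeds,
    cross_attention_kwargs=cross_attention_kwargs,
    down_block_additional_residuals=down_block_res_samples,
    mid_block_additional_residual=mid_block_res_sample,
).sample
```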
Hi, have you tried adding LoRA to the pipeline?
I added

pipe.load_lora_weights("./data/lora/", weight_name="CGgame_building_nsw.safetensors")

before

syncmvd = StableSyncMVDPipeline(**pipe.components)

and added cross_attention_kwargs={"scale": 0.8} to the call:

result_tex_rgb, textured_views, v = syncmvd(..., cross_attention_kwargs={"scale": 0.8}, ...)

But I encountered the following bug:

TypeError: __call__() got an unexpected keyword argument 'scale'
Later, I found that this has nothing to do with LoRA. Although the signature of __call__() declares

cross_attention_kwargs: Optional[Dict[str, Any]] = None,

passing

cross_attention_kwargs={"scale": 0.8}

still fails with

TypeError: __call__() got an unexpected keyword argument 'scale'
I guess it may be because xformers 0.0.20 does not match diffusers 0.19.3. Do you have any idea how to solve it? Any help would be appreciated.
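If it turns out to be the attention processor rather than xformers, a tolerant signature like the one below might avoid the error. This is an untested sketch (the class name is mine) built on the stock AttnProcessor in diffusers 0.19.x: since the UNet forwards cross_attention_kwargs as **kwargs into every processor's __call__, a processor that does not declare scale raises exactly this TypeError. Note the sketch only swallows the kwarg; it does not actually apply the LoRA scale, and the real change would have to go into this repo's custom self-attention-reuse processor rather than replacing it.

```python
from diffusers.models.attention_processor import AttnProcessor


class ScaleTolerantAttnProcessor(AttnProcessor):
    """Untested sketch: accept the LoRA `scale` kwarg that the UNet forwards
    from cross_attention_kwargs instead of raising TypeError. The scale is
    ignored here, so this silences the error without applying LoRA weighting."""

    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, temb=None, scale=1.0, **kwargs):
        # Drop `scale` and any other unrecognized kwargs, then run the
        # standard attention computation from the parent class.
        return super().__call__(
            attn,
            hidden_states,
            encoder_hidden_states=encoder_hidden_states,
            attention_mask=attention_mask,
            temb=temb,
        )
```

Installing it with syncmvd.unet.set_attn_processor(ScaleTolerantAttnProcessor()) would stop the crash, but it would also overwrite this project's self-attention-reuse processors, so the scale/**kwargs tolerance would really need to be added to those processors' own __call__.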