garibida / cross-image-attention

Official Implementation for "Cross-Image Attention for Zero-Shot Appearance Transfer"
https://garibida.github.io/cross-image-attention/
MIT License
335 stars 22 forks

Bugs when setting "use_masked_adain = False" #6

Closed FerryHuang closed 11 months ago

FerryHuang commented 11 months ago

Thanks for your great work! I was trying to understand how the AdaIN trick works in the algorithm, so I launched `run.py` with `use_masked_adain = False` set in the config. It raised `AttributeError: 'Segmentor' object has no attribute 'self_attention_32'` at sampling step 20. I changed the callback line to `callback=model.get_adain_callback if cfg.use_masked_adain else None`, and `run.py` then finished, but the content image seemed to contribute nothing to the synthesized image (see the attached example image). Please let me know if I overlooked any important details!
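
For reference, a minimal sketch of what that workaround does (`cfg.use_masked_adain` and `model.get_adain_callback` are taken from the issue text; this is not the repository's actual code):

```python
# Hypothetical sketch of the workaround quoted above.
# With use_masked_adain=False this passes callback=None, so the sampler
# skips the AdaIN step entirely instead of falling back to plain
# (non-masked) AdaIN -- the behavior the maintainer flags below.
callback = model.get_adain_callback if cfg.use_masked_adain else None
```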

yuval-alaluf commented 11 months ago

It seems like you completely removed the use of AdaIN by removing the callback, so this is something you'll want to keep. But I did notice now that I removed the use of the non-masked AdaIN during the refactor :( I'll try uploading a solution later today. Thanks for pointing this out :)
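
For context, here is a minimal, self-contained sketch of what a masked AdaIN step does (an illustration only, not the repository's code; the `masked_adain` name and tensor shapes are assumptions). Per-channel statistics are computed only inside the subject masks, and the normalization is applied only to the masked region of the content latent, leaving the background untouched:

```python
import torch

def masked_adain(content: torch.Tensor, appearance: torch.Tensor,
                 content_mask: torch.Tensor, appearance_mask: torch.Tensor,
                 eps: float = 1e-5) -> torch.Tensor:
    """AdaIN restricted to masked regions.

    content, appearance: (C, H, W) latent feature maps.
    content_mask, appearance_mask: (H, W) boolean foreground masks.
    """
    out = content.clone()
    for c in range(content.shape[0]):
        c_vals = content[c][content_mask]        # foreground pixels, content
        a_vals = appearance[c][appearance_mask]  # foreground pixels, appearance
        # Normalize the content channel, then re-scale/shift it to the
        # appearance statistics; apply the result only inside the mask.
        normed = (content[c] - c_vals.mean()) / (c_vals.std() + eps)
        out[c] = torch.where(content_mask,
                             normed * a_vals.std() + a_vals.mean(),
                             content[c])
    return out
```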

yuval-alaluf commented 11 months ago

Sorry for the delayed response, but I have just pushed a fix for running without masked AdaIN (i.e., running with regular AdaIN on the entire latent codes). Hope this helps, and thanks again for pointing this out!
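
For anyone running into the same issue, regular (non-masked) AdaIN can be thought of as the masked variant with masks covering the entire latent: the content latent's per-channel statistics are simply replaced by the appearance latent's. A minimal sketch under those assumptions (not the repository's exact code):

```python
import torch

def adain(content: torch.Tensor, appearance: torch.Tensor,
          eps: float = 1e-5) -> torch.Tensor:
    """Match the per-channel mean/std of `content` to `appearance`.

    Both tensors are assumed to be (B, C, H, W) latent codes.
    """
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    a_mean = appearance.mean(dim=(2, 3), keepdim=True)
    a_std = appearance.std(dim=(2, 3), keepdim=True) + eps
    # Normalize the content latent, then re-scale/shift it so its
    # statistics match those of the appearance latent.
    return a_std * (content - c_mean) / c_std + a_mean
```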