rohitgandikota / sliders

Concept Sliders for Precise Control of Diffusion Models
https://sliders.baulab.info
MIT License

Question about training_method #55

Closed PotatoBananaApple closed 10 months ago

PotatoBananaApple commented 10 months ago

I have been trying to find information on the training methods: `noxattn`, `innoxattn`, `selfattn`, `xattn`, `full`, `xattn-strict`, `noxattn-hspace`, `noxattn-hspace-last`.

But I can't quite find any information on how each of them affects the training. If you could point me to a resource or briefly explain how each one affects slider training, I would be really happy!

rohitgandikota commented 10 months ago

All the models in our paper are trained with the default setting `noxattn`, which attaches LoRAs to all layers except the cross-attention layers in the UNet.

Similarly, following our previous work "Erasing Concepts in Diffusion Models", we released some other setups:

* `xattn` means sliders are attached only on cross-attention layers

* `full` is for all layers in the UNet

* `selfattn` is for self-attention layers

The remaining options (`innoxattn`, `xattn-strict`, `noxattn-hspace`, `noxattn-hspace-last`) are experimental and not well tested.
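To make the distinction concrete, here is a minimal sketch (not the repo's actual code) of how a `training_method` flag could select which UNet modules receive LoRA adapters. It assumes the diffusers naming convention, where cross-attention blocks contain `attn2` and self-attention blocks contain `attn1` in their module names; the function name `select_modules` is hypothetical.

```python
def select_modules(module_names, training_method):
    """Return the module names that should receive LoRA adapters.

    Assumes diffusers-style naming: "attn2" marks cross-attention,
    "attn1" marks self-attention.
    """
    selected = []
    for name in module_names:
        if training_method == "noxattn":
            # default: everything except cross-attention layers
            if "attn2" not in name:
                selected.append(name)
        elif training_method == "xattn":
            # cross-attention layers only
            if "attn2" in name:
                selected.append(name)
        elif training_method == "selfattn":
            # self-attention layers only
            if "attn1" in name:
                selected.append(name)
        elif training_method == "full":
            # every layer in the UNet
            selected.append(name)
    return selected
```

For example, with modules `["down.attn1", "down.attn2", "mid.block"]`, `noxattn` would keep `down.attn1` and `mid.block`, while `xattn` would keep only `down.attn2`.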

PotatoBananaApple commented 10 months ago

> All the models in our paper are trained with the default setting `noxattn`, which attaches LoRAs to all layers except the cross-attention layers in the UNet.
>
> Similarly, following our previous work "Erasing Concepts in Diffusion Models", we released some other setups:
>
> * `xattn` means sliders are attached only on cross-attention layers
>
> * `full` is for all layers in the UNet
>
> * `selfattn` is for self-attention layers
>
> The remaining options are experimental and not well tested.

Thank you for the answer! Gotta test around with different settings!