I have some questions about the structure of SPM. From the paper and the code, is an SPM adapter essentially a LoRA module?
And when erasing multiple concepts, is Facilitated Transport similar to a LoRA merge (a weighted sum of LoRA parameters)? When loading SPMs, is only a single merged set of SPM parameters loaded, rather than multiple separate SPM parameters?
Is it just that Facilitated Transport computes an adaptive weight based on the similarity between the prompt and each erased concept?
About the structure: SPM can, in fact, borrow any adapter design to achieve its performance. LoKr, LoHa, etc., combined with our proposed losses, should also perform well.
About Facilitated Transport (FT): it is similar to a dynamic LoRA merge, with the weight of each LoRA adjusted online. All SPM parameters must be loaded when erasing multiple concepts.
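To illustrate the structural point, here is a minimal, hypothetical sketch (not the repo's actual classes) of an SPM-style adapter as a low-rank residual on a frozen weight, which is exactly the LoRA form; any other adapter parameterization (LoKr, LoHa, ...) could replace the `delta` computation:

```python
import numpy as np

class OneDimAdapter:
    """Toy LoRA-style adapter: a rank-r residual on a frozen weight matrix.
    With rank=1 this is a one-dimensional adapter. Names are illustrative."""

    def __init__(self, d_out, d_in, rank=1, scale=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.down = rng.normal(0.0, 0.02, size=(rank, d_in))  # LoRA "A"
        self.up = np.zeros((d_out, rank))                     # LoRA "B", zero-init
        self.scale = scale

    def delta(self):
        # Low-rank update: the effective weight is W' = W + scale * (B @ A).
        return self.scale * (self.up @ self.down)


def adapted_forward(W, adapters, x):
    # Frozen base weight plus the sum of all adapter deltas.
    W_eff = W + sum(a.delta() for a in adapters)
    return W_eff @ x
```

The key design point is that the base weight `W` stays frozen; only the small `down`/`up` factors are trained with the erasing losses.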
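The dynamic-merge idea above can be sketched as follows. This is a hypothetical simplification (toy cosine-similarity weighting, not the paper's exact formulation): each loaded SPM gets an online weight from the similarity between the prompt embedding and its erased-concept embedding, and the applied update is the weighted sum of all SPM deltas:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def facilitated_transport_merge(prompt_emb, concept_embs, deltas, floor=0.0):
    """Toy dynamic LoRA merge: one adaptive weight per erased concept.
    Prompts unrelated to a concept give that SPM a weight near zero,
    so its parameters are effectively 'impermeable' for this prompt."""
    weights = [max(cosine(prompt_emb, c), floor) for c in concept_embs]
    merged = sum(w * d for w, d in zip(weights, deltas))
    return weights, merged
```

Note that all SPM deltas stay in memory; only their contribution to the merged update changes per prompt.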