I have some questions regarding the "v_parameterization" setting in the context of training and using LoRA (Low-Rank Adaptation) with a 512-base model. Here are my observations:
Whether I enable "v_parameterization" during LoRA training seems to make no difference once the LoRA is loaded for inference.
When generating with the 512-base model alone (no LoRA loaded), it only works correctly without "v_parameterization". But once the LoRA is loaded into the 512-base model, "v_parameterization" must be enabled; otherwise, the generated images are pure noise.
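To make the three scenarios concrete, here is a sketch of the commands I mean. This assumes the kohya-ss sd-scripts interface (`train_network.py` / `gen_img_diffusers.py`); exact script names, paths, and file names are illustrative, not verbatim from my setup:

```shell
# 1) LoRA training: toggling --v_parameterization here seems to make
#    no difference to the result once the LoRA is loaded later.
python train_network.py --v2 --v_parameterization \
  --pretrained_model_name_or_path=512-base-ema.ckpt \
  --network_module=networks.lora --output_name=my_lora

# 2) Inference with the bare 512-base model: works only WITHOUT
#    --v_parameterization.
python gen_img_diffusers.py --v2 \
  --ckpt=512-base-ema.ckpt --prompt="..."

# 3) Inference with the LoRA loaded: works only WITH
#    --v_parameterization; without it the output is pure noise.
python gen_img_diffusers.py --v2 --v_parameterization \
  --ckpt=512-base-ema.ckpt \
  --network_module=networks.lora --network_weights=my_lora.safetensors \
  --prompt="..."
```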
Could you please provide some insights or explanations for these observations?
Thank you for your assistance!