Describe the bug
Why are the UNet parameters frozen during training for SD 1.5, but not for SDXL? The Hugging Face SDXL training script even calls `unet.train()`.
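To illustrate the distinction this question turns on, here is a minimal PyTorch sketch (not taken from either training script): `eval()`/`train()` only toggle module mode (dropout, batch-norm behaviour), while `requires_grad_(False)` is what actually freezes parameters.

```python
import torch
from torch import nn

layer = nn.Linear(4, 4)

# eval() switches to inference mode but does NOT freeze parameters.
layer.eval()
assert all(p.requires_grad for p in layer.parameters())  # still trainable

# requires_grad_(False) is what stops gradient updates.
layer.requires_grad_(False)
assert not any(p.requires_grad for p in layer.parameters())  # now frozen
```

So `model.eval()` alone, as in the SD 1.5 code, does not by itself make the UNet untrainable; whether its parameters are updated depends on `requires_grad` and on which parameters are passed to the optimizer.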
Reproduction
diffusers: 0.28

Hugging Face train SDXL:
```python
vae.requires_grad_(False)
text_encoder_one.requires_grad_(False)
text_encoder_two.requires_grad_(False)
t2iadapter.train()
unet.train()
```

Tencent ARC train SD 1.5:
```python
model.cuda()
model.eval()  # the model contains all submodels: VAE, CLIP text encoder
return model
```

Tencent ARC train SDXL:
```python
vae.requires_grad_(False)
text_encoder_one.requires_grad_(False)
text_encoder_two.requires_grad_(False)
# the UNet is never set to no-grad, i.e. the UNet still requires grad
```
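The freezing pattern the SDXL snippet follows can be sketched as below, with `nn.Linear` stand-ins for the real submodels (the variable names mirror the script, but the modules themselves are hypothetical placeholders, not the diffusers models):

```python
import torch
from torch import nn

# Hypothetical stand-ins for the real submodels, just to show the pattern.
vae = nn.Linear(4, 4)
text_encoder_one = nn.Linear(4, 4)
text_encoder_two = nn.Linear(4, 4)
unet = nn.Linear(4, 4)
t2iadapter = nn.Linear(4, 4)

# Freeze everything that should not be updated.
for frozen in (vae, text_encoder_one, text_encoder_two):
    frozen.requires_grad_(False)

# train() only sets the mode flag (dropout/batch-norm behaviour);
# it does not decide which parameters receive gradients.
t2iadapter.train()
unet.train()

# Only parameters that still require grad are handed to the optimizer.
trainable_params = [
    p
    for module in (unet, t2iadapter)
    for p in module.parameters()
    if p.requires_grad
]
optimizer = torch.optim.AdamW(trainable_params, lr=1e-4)
```

Under this pattern, the frozen VAE and text encoders never appear in the optimizer, so calling `unet.train()` matters only if the UNet's parameters were also left with `requires_grad=True`.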
Logs
No response
System Info
Ubuntu
Who can help?
No response