Open zethfoxster opened 1 year ago
same problem here!
a workaround is to merge a model with a working UNET at a very, very small ratio as model A, and the broken one as model B... that keeps the working UNET
same here. The training script does not require a VAE to be merged, and every checkpoint is BROKEN after training; I wonder how this happens.
I'm guessing it's a checkpoint that only contains a UNET. If that's the case, you need to get a VAE and CLIP from an intact checkpoint.
Load the intact checkpoint, import the UNET from your broken checkpoint, then save it under a new name (similar procedure to replacing a VAE, but with the UNET). Should work.
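The transplant described above can be sketched in Python. This is only an illustration of the key-filtering logic, not the exact tooling: in practice you'd load the state dicts with `torch.load()` or `safetensors`, and I'm assuming the standard Stable Diffusion key prefixes (`model.diffusion_model.` for the UNET, `first_stage_model.` for the VAE, `cond_stage_model.` for CLIP). Plain dicts stand in for the state dicts here so the example runs on its own.

```python
# Sketch: keep VAE/CLIP from an intact checkpoint, take the UNET from the
# broken one. Assumed standard SD state-dict prefixes:
#   model.diffusion_model.*  -> UNET
#   first_stage_model.*      -> VAE
#   cond_stage_model.*       -> CLIP text encoder

UNET_PREFIX = "model.diffusion_model."

def transplant_unet(intact: dict, broken: dict) -> dict:
    """Everything except the UNET comes from `intact`; the UNET comes from `broken`."""
    merged = {k: v for k, v in intact.items() if not k.startswith(UNET_PREFIX)}
    merged.update({k: v for k, v in broken.items() if k.startswith(UNET_PREFIX)})
    return merged

# toy state dicts (real ones hold torch tensors)
intact = {
    "model.diffusion_model.block0.weight": 1.0,
    "first_stage_model.decoder.weight": 2.0,
    "cond_stage_model.transformer.weight": 3.0,
}
broken = {
    "model.diffusion_model.block0.weight": 9.0,  # the trained UNET we want
}

merged = transplant_unet(intact, broken)
print(merged["model.diffusion_model.block0.weight"])  # 9.0 -> UNET from broken
print(merged["first_stage_model.decoder.weight"])     # 2.0 -> VAE from intact
```

Saving `merged` under a new name (e.g. with `torch.save({"state_dict": merged}, ...)`) would then give you a complete checkpoint.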
Your suggestion is correct. I took the UNET and VAE from the post-training checkpoint instead of using the VAE from the base model, and it still works.
Whether I use the VAE from the base model or from the BROKEN checkpoint, the final result has the same hash. So... maybe the broken checkpoint doesn't include a VAE at all.
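One way to test the "maybe it didn't include a VAE" theory is to count the keys under each component's prefix in the checkpoint's state dict. A hedged sketch, again assuming the standard SD prefixes; the toy dict stands in for what `torch.load("checkpoint.ckpt")["state_dict"]` would return:

```python
# Count state-dict keys per component; a UNET-only checkpoint will show
# zero VAE and CLIP keys. Prefixes are the assumed standard SD layout.
PREFIXES = {
    "UNET": "model.diffusion_model.",
    "VAE":  "first_stage_model.",
    "CLIP": "cond_stage_model.",
}

def component_counts(state_dict: dict) -> dict:
    return {name: sum(k.startswith(p) for k in state_dict)
            for name, p in PREFIXES.items()}

# toy stand-in for a UNET-only checkpoint
unet_only = {"model.diffusion_model.block0.weight": 0.0}
print(component_counts(unet_only))  # {'UNET': 1, 'VAE': 0, 'CLIP': 0}
```

A zero count for VAE would explain why swapping the VAE never changes the merged result's hash: there was nothing there to replace.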
I know you can replace a VAE, but is there any way to fix this properly?