Open kjhenner opened 2 years ago
Is there currently a way to fine-tune an existing model with k-diffusion? And/or are there any existing large pre-trained models that would be compatible with the patch feature?
The wrappers have the loss function implemented, which should let you fine-tune, but you will have to modify train.py for it.
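For what that could look like, here is a minimal fine-tuning sketch, assuming the `Denoiser` wrapper in `k_diffusion.layers` and its `loss()` method (exact names and signatures may differ between versions, so check the repo). `TinyInner` is a hypothetical stand-in for your pretrained backbone, and the lognormal sigma sampling mirrors what train.py does:

```python
import torch
import k_diffusion as K

# Hypothetical stand-in for a pretrained backbone; a real model must
# accept (x, sigma) like the models in k_diffusion.models.
class TinyInner(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Conv2d(3, 3, 3, padding=1)

    def forward(self, x, sigma):
        return self.net(x)

inner_model = TinyInner()
# The wrapper adds the Karras et al. preconditioning and the loss.
model = K.layers.Denoiser(inner_model, sigma_data=0.5)
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

reals = torch.randn(4, 3, 32, 32)  # one dummy batch; use your dataloader
noise = torch.randn_like(reals)
# Sample noise levels from a lognormal, similar to train.py.
sigma = (torch.randn(reals.shape[0]) * 1.2 - 1.2).exp()

opt.zero_grad()
loss = model.loss(reals, noise, sigma).mean()  # per-sample losses -> scalar
loss.backward()
opt.step()
```

Swapping the dummy batch for a real dataloader loop (plus EMA, checkpointing, etc.) is the part where you would adapt train.py rather than write it from scratch.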
Ideally there would be a checkpoint converter, plus the option to ignore mismatching input/output layers in case the model has a different number of channels.
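A sketch of that "ignore mismatching inputs/outputs" idea: copy over only the pretrained tensors whose names and shapes match the new model, and leave the mismatched layers (e.g. the first/last convs when the channel count differs) at their fresh initialization. `load_matching_weights` and the checkpoint path are hypothetical, and real checkpoints may nest the weights under a key such as `'model'`:

```python
import torch

def load_matching_weights(model, checkpoint_path):
    # Some checkpoints store weights under a key like 'model'; adjust as needed.
    pretrained = torch.load(checkpoint_path, map_location="cpu")
    own = model.state_dict()
    # Keep only tensors whose name and shape match the new model.
    filtered = {k: v for k, v in pretrained.items()
                if k in own and v.shape == own[k].shape}
    own.update(filtered)
    model.load_state_dict(own)
    # Return the skipped names so you can check what trains from scratch.
    return [k for k in pretrained if k not in filtered]

skipped = load_matching_weights(model, "pretrained.ckpt")
print(f"skipped {len(skipped)} mismatched tensors: {skipped}")
```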