YouPassTheButter opened 1 year ago
Hi, @VSAnimator
I'd like to know about training textual inversion too. In particular, I'd appreciate details regarding the FineTuneConcept
class shown in the notebook. Thanks!
Maybe we can follow this demo to train a Textual Inversion model with a size of 1024:
https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion
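For anyone trying to follow along, the core idea of the diffusers example above is to add one placeholder token and optimize only its embedding row while every pre-trained row stays frozen. Here's a minimal, self-contained sketch of just that mechanism in plain PyTorch (toy sizes, a stand-in loss instead of the diffusion denoising loss, and not the FineTuneConcept code from this repo):

```python
# Sketch of the textual-inversion freezing trick: train only one embedding row.
import torch

vocab_size, emb_dim = 8, 4    # toy sizes; real CLIP text encoders are ~49k x 768
placeholder_id = vocab_size   # id of the newly added "<concept>" token

emb = torch.nn.Embedding(vocab_size + 1, emb_dim)
original = emb.weight.detach().clone()

optimizer = torch.optim.Adam(emb.parameters(), lr=1e-1)
for _ in range(3):
    optimizer.zero_grad()
    ids = torch.tensor([1, placeholder_id, 3])
    # stand-in loss; a real run would use the diffusion denoising loss
    loss = emb(ids).pow(2).sum()
    loss.backward()
    # zero the gradient for every row except the placeholder's,
    # so the optimizer step leaves the pre-trained rows untouched
    grad_mask = torch.zeros_like(emb.weight)
    grad_mask[placeholder_id] = 1.0
    emb.weight.grad *= grad_mask
    optimizer.step()

changed = (emb.weight.detach() - original).abs().sum(dim=1) > 0
# only the placeholder row should have moved
```

In the actual diffusers script the same effect is achieved by adding the token with `tokenizer.add_tokens`, resizing the text encoder's embeddings, and masking or restoring the frozen rows each step.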
The textual inversion used in FTC seems to differ from the original textual_inversion. I used that code base to learn, but it would take a lot of time for each layer. From the assets shared here, it looks like they are not learning a new token, but instead a modifier on top of the layer token. I am not sure how this is done, but it seems like a very cool thing to try out.
Hi, great job! Would you mind sharing the code or pipeline for training the textual inversion embedding?