[Open] DarkAlchy opened this issue 1 year ago
Training an embedding requires VRAM, and so does the unet, so I don't see how it would require fewer resources.
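For what it's worth, here is a minimal plain-PyTorch sketch (with hypothetical sizes, not the repo's actual code) of why embedding-only training tends to use less VRAM than unet training: the unet is frozen, so no weight gradients or optimizer moments are stored for its parameters, only for the one new token embedding.

```python
import torch

# Stand-in for the frozen unet: no weight gradients, no Adam moments are kept for it.
unet = torch.nn.Linear(768, 768)
for p in unet.parameters():
    p.requires_grad_(False)

# The only trainable tensor: a single token embedding (768 floats for SD 1.x).
new_token = torch.nn.Parameter(torch.randn(768) * 0.01)
optimizer = torch.optim.AdamW([new_token], lr=5e-3)  # optimizer state for 768 params only

for step in range(100):
    pred = unet(new_token)          # forward through the frozen model
    loss = pred.pow(2).mean()       # placeholder loss, just for illustration
    loss.backward()                 # gradient flows back only into new_token
    optimizer.step()
    optimizer.zero_grad()
```

Activations for the backward pass are still needed, so the saving is mostly in weight gradients and optimizer state rather than the forward pass itself.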
We just @'d you on the Discord about this, which would probably be easier since I am in the dark about this and your next question may be "how?". I did not know you were on there, so I said hello (just so you know that is me).
Would that be similar to what LoRA does with pivotal tuning?
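Roughly, pivotal tuning optimizes a new token embedding (textual inversion) jointly with small LoRA matrices while the base weights stay frozen. Below is a rough sketch of that idea in plain PyTorch; the names and sizes are hypothetical and not the colab's actual implementation.

```python
import torch

dim, rank = 768, 4

# Frozen base projection, standing in for an attention weight in the unet.
base = torch.nn.Linear(dim, dim)
for p in base.parameters():
    p.requires_grad_(False)

lora_down = torch.nn.Parameter(torch.randn(rank, dim) * 0.01)  # LoRA "A" matrix
lora_up = torch.nn.Parameter(torch.zeros(dim, rank))           # LoRA "B" matrix, zero-init
token_embed = torch.nn.Parameter(torch.randn(dim) * 0.01)      # the pivotal token embedding

# Both the embedding and the LoRA matrices are trained together.
optimizer = torch.optim.AdamW([lora_down, lora_up, token_embed], lr=1e-4)

def forward(x):
    # Frozen weight plus low-rank update: W x + B (A x)
    return base(x) + lora_up @ (lora_down @ x)

for step in range(100):
    out = forward(token_embed)
    loss = out.pow(2).mean()        # placeholder objective
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```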
This is exactly what I am after, so I wonder if it could be implemented in the colab?
Thank you.