Deploying a custom model works, but for sites with low traffic the cold boot can be painfully long.
I'm curious whether we could add an optional parameter pointing to a LoRA stored on Hugging Face (or Civitai) that gets loaded before the generation and unloaded right after it.
I'm happy to contribute if you give me a thumbs up.
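Roughly what I have in mind, as a minimal sketch: the pipeline here is a stub, and all names (`temporary_lora`, the `lora` parameter, the repo id) are hypothetical, but `load_lora_weights` / `unload_lora_weights` mirror the diffusers-style calls this would presumably wrap.

```python
from contextlib import contextmanager

class StubPipeline:
    """Stand-in for a real pipeline, just to show the load/unload flow."""
    def __init__(self):
        self.active_lora = None

    def load_lora_weights(self, repo_or_url):
        # In a real pipeline this would fetch and apply the LoRA weights.
        self.active_lora = repo_or_url

    def unload_lora_weights(self):
        self.active_lora = None

    def __call__(self, prompt):
        tag = f" [lora={self.active_lora}]" if self.active_lora else ""
        return f"image for {prompt!r}{tag}"

@contextmanager
def temporary_lora(pipe, lora=None):
    """Load `lora` (e.g. a Hugging Face repo id) for one generation, then unload."""
    if lora:
        pipe.load_lora_weights(lora)
    try:
        yield pipe
    finally:
        if lora:
            pipe.unload_lora_weights()

pipe = StubPipeline()
with temporary_lora(pipe, lora="some-user/some-lora") as p:
    out = p("a corgi")
print(out)               # generation ran with the LoRA applied
print(pipe.active_lora)  # unloaded again after the block
```

The base model stays warm between requests; only the (much smaller) LoRA is swapped per request, which is what should keep cold boots short.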