Run inference on Replicate:
Training models:
If you have questions or ideas, please join the #lora
channel in the Replicate Discord.
You can deploy any model from Hugging Face, or one you trained yourself, and add LoRA weights to these models.
We have a default SD1.5 model deployed on Replicate, so you can run it in a scalable manner. If you would like to launch your own model, run
cog run script/download-weights.py
to download the weights and place them in the cache directory. This saves the base model, which gets mounted into the cog container.
Either push the model to Replicate (follow these instructions for pushing a model to Replicate) or run
cog predict -i prompt="monkey scuba diving"
to run locally.
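Pushing to Replicate follows cog's standard workflow; a minimal sketch, where the username and model name are placeholders for the model you created at replicate.com:

```shell
# Authenticate the cog CLI with Replicate (prompts for an API token)
cog login
# Build the image and push it to your Replicate model page;
# replace <username>/<model-name> with your own model
cog push r8.im/<username>/<model-name>
```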
First, make a model at replicate.com. Create one here
Specify the following parameters in the deploy_others.sh file.
export MODEL_ID="lambdalabs/dreambooth-avatar" # change this to a model on Hugging Face or your local repository.
export SAFETY_MODEL_ID="CompVis/stable-diffusion-safety-checker"
export IS_FP16=1
export USERNAME="cloneofsimo" # change this to your Replicate username.
export REPLICATE_MODEL_ID="avatar" # your Replicate model ID
Run it with
bash deploy_others.sh
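For reference, a plausible sketch of what a script like deploy_others.sh does with these variables; this is an assumption about its structure, not the actual script:

```shell
#!/bin/bash
# Hypothetical sketch of deploy_others.sh -- the real script may differ.
set -euo pipefail

# Fail early if the required variables from the snippet above are unset.
: "${MODEL_ID:?set MODEL_ID}"
: "${USERNAME:?set USERNAME}"
: "${REPLICATE_MODEL_ID:?set REPLICATE_MODEL_ID}"

# Download and cache the chosen base weights, then build the image
# and push it to the Replicate model created at replicate.com.
cog run script/download-weights.py
cog push "r8.im/${USERNAME}/${REPLICATE_MODEL_ID}"
```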