Open: kilimchoi opened this issue 3 months ago
@kilimchoi so this should work like this: your network volume is mounted as /runpod-volume into the serverless ComfyUI instance, and inside of it there is a folder checkpoints with the model realistic_vision_v5.1.safetensors. You would then reference it in the workflow like this:
{
  "input": {
    "4": {
      "inputs": {
        "ckpt_name": "/runpod-volume/checkpoints/realistic_vision_v5.1.safetensors"
      },
      "class_type": "CheckpointLoaderSimple"
    }
  }
}
@kilimchoi I'm reopening the issue, as I want to document this also in the README.
It's kind of strange that I can load the custom model by using only the file name (without the path). When I tried to use the full path of the model file:
"4": {
"inputs": {
"ckpt_name": "/runpod-volume/models/checkpoints/personaStyle_lite.safetensors"
},
"class_type": "CheckpointLoaderSimple"
},
I got a 400 Bad Request error. The logs show this:
Value not in list: ckpt_name: '/runpod-volume/models/checkpoints/personaStyle_lite.safetensors' not in ['personaStyle_lite.safetensors', 'sd_xl_base_1.0.safetensors']
I am using the timpietruskyblibla/runpod-worker-comfy:3.0.0-sdxl Docker image.
@dannykok @kilimchoi what I wrote is wrong; I totally forgot about the mapping that we are doing in extra_model_paths.yaml. So you can just specify the name of the model, you don't need to specify the path.
I will make sure to get the README updated to make this clear.
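In other words, the request from the top of this thread would reference the checkpoint by its file name only. A minimal sketch, reusing the realistic_vision_v5.1.safetensors example from above:

{
  "input": {
    "4": {
      "inputs": {
        "ckpt_name": "realistic_vision_v5.1.safetensors"
      },
      "class_type": "CheckpointLoaderSimple"
    }
  }
}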
I tried with and without the path, but the network volume isn't being recognized by the worker.
@albertogb9 you may want to check if the path you used exists in extra_model_paths.yaml.
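For context, extra_model_paths.yaml is ComfyUI's mechanism for mapping a base path to the individual model folders, which is why the bare file name is enough. A hypothetical sketch of that kind of mapping, assuming the /runpod-volume mount and the models/checkpoints layout mentioned in this thread (the actual file shipped in the image may differ):

# the top-level key name is arbitrary; ComfyUI only reads the entries under it
runpod_worker_comfy:
  base_path: /runpod-volume
  checkpoints: models/checkpoints/
  loras: models/loras/
  vae: models/vae/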
Already figured it out. I had installed ComfyUI in the network volume, so the path was "ComfyUI/models/..." whereas it should be just "models/...". So I created another network volume with a PyTorch template, created the models folders from scratch, and loaded the checkpoints. Now it works fine, thanks.
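So the layout the worker expects at the root of the volume (seen from its /runpod-volume mount) is roughly this; a sketch based on the paths in this thread, not an exhaustive listing:

/runpod-volume/
  models/
    checkpoints/
      realistic_vision_v5.1.safetensors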
I was wondering how you make an API request to use the custom model from the network volume. It seems that if we use
it just uses whatever model was included in the image. I added the network volume in the endpoint configuration on RunPod, by the way.