Closed: merry-hyelyn closed this issue 2 weeks ago
@merry-hyelyn You should install custom nodes inside model/build.sh. Refer to the commented lines that show how I install ComfyUI-Manager.
OK I will try again! Thank you
I have one more question. Can I zip the models used in my custom node locally and upload them directly to S3, without downloading the models from Hugging Face?
@merry-hyelyn Of course. You can prepare the model artifact folder any way you like by customizing the commands in model/build.sh. You can copy local files into the model folder, e.g. into ${TARGET_DIR}/checkpoints or ${TARGET_DIR}/custom_nodes.
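For illustration, here is a minimal sketch of staging locally downloaded models inside model/build.sh. The paths are placeholders (TARGET_DIR is assumed to be the artifact folder that build_and_upload_model_artifact later zips and uploads to S3; LOCAL_MODELS is wherever your downloads live):

```shell
# Sketch: stage locally downloaded models (e.g. from civitai) into the
# artifact folder instead of pulling them from Hugging Face.
# TARGET_DIR and LOCAL_MODELS defaults below are placeholders.
TARGET_DIR="${TARGET_DIR:-/tmp/comfyui-artifact}"
LOCAL_MODELS="${LOCAL_MODELS:-$HOME/Downloads/models}"

# create the folder layout ComfyUI expects inside the artifact
mkdir -p "${TARGET_DIR}/checkpoints" "${TARGET_DIR}/custom_nodes"

# copy any local checkpoints in; the artifact folder is then zipped
# and uploaded to S3 by the usual build_and_upload_model_artifact step
for f in "${LOCAL_MODELS}"/*.safetensors; do
  if [ -f "$f" ]; then
    cp "$f" "${TARGET_DIR}/checkpoints/"
  fi
done
```

The same pattern works for any other model subfolder (loras, vae, and so on), as long as the layout inside TARGET_DIR matches what ComfyUI expects at runtime.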
Thanks for the quick reply. I'll try it!!!!
I get the same error. I downloaded the model from civitai and put it in the folder as shown in the image.
I also tried adding custom_nodes as you suggested.
cd ${TARGET_DIR}/custom_nodes
[[ -e ComfyUI-Manager ]] || git clone https://github.com/ltdrdata/ComfyUI-Manager.git && (cd ComfyUI-Manager && git fetch && git checkout 2.48.6)
[[ -e ComfyUI-Impact-Pack ]] || git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack.git && (cd ComfyUI-Impact-Pack && git fetch)
[[ -e ComfyUI_Comfyroll_CustomNodes ]] || git clone https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes.git && (cd ComfyUI_Comfyroll_CustomNodes && git fetch)
[[ -e ComfyUI_essentials ]] || git clone https://github.com/cubiq/ComfyUI_essentials.git && (cd ComfyUI_essentials && git fetch)
cd -
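(One step the snippet above skips is installing each node's Python dependencies: many custom nodes, including ComfyUI-Impact-Pack, ship a requirements.txt, and missing dependencies are a common cause of IMPORT FAILED. Below is a hedged sketch of that extra pass; the NODES_DIR default is an assumption based on the log paths, and whether this belongs in build.sh or in the inference image depends on which Python environment actually runs ComfyUI.)

```shell
# Sketch: install each custom node's Python dependencies.
# NODES_DIR is illustrative; point it at wherever the custom_nodes
# folder is visible to the Python environment that runs ComfyUI.
NODES_DIR="${NODES_DIR:-/opt/ml/model/custom_nodes}"

found=0
for req in "${NODES_DIR}"/*/requirements.txt; do
  if [ -f "$req" ]; then
    pip install -r "$req"   # must run in the same env as ComfyUI
    found=$((found + 1))
  fi
done
echo "installed dependencies for ${found} custom node(s)"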
I only ran model/build.sh locally (on a MacBook Air M2); the rest of the process, including building the image, ran on EC2 (Deep Learning OSS Nvidia Driver AMI GPU PyTorch 2.3.0 / t3.small).
Also, when I ran model/build.sh locally, I ran it through deploy.sh, and I only made the functions prepare_env, configure, prepare_s3, and build_and_upload_model_artifact executable in deploy.sh.
Is there anything wrong with my process?
I checked CloudWatch in detail and found the following logs:
Import times for custom nodes:
0.0 seconds: /opt/program/ComfyUI/custom_nodes/websocket_image_save.py
0.0 seconds (IMPORT FAILED): /opt/ml/model/custom_nodes/ComfyUI-Manager
0.0 seconds: /opt/ml/model/custom_nodes/ComfyUI_essentials
15.4 seconds (IMPORT FAILED): /opt/ml/model/custom_nodes/ComfyUI_Comfyroll_CustomNodes
16.8 seconds (IMPORT FAILED): /opt/ml/model/custom_nodes/ComfyUI-Impact-Pack
Follow "Local run of ComfyUI GUI" to run ComfyUI (the full UI) locally using docker, which also mounts the same artifact folder. You can then see the error messages on screen and tune accordingly.
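A rough sketch of what that local run looks like; the image name and artifact path are placeholders (use the image built from this repo and your own artifact folder), while port 8188 is ComfyUI's default and /opt/ml/model matches the mount point seen in the logs above:

```shell
# Sketch: run the ComfyUI GUI locally in docker, mounting the same
# artifact folder the endpoint uses, so node import errors show up
# on screen. IMAGE and MODEL_DIR are placeholders for this example.
IMAGE="${IMAGE:-comfyui-local:latest}"
MODEL_DIR="${MODEL_DIR:-$PWD/model-artifact}"

if command -v docker >/dev/null 2>&1; then
  docker run --rm -p 8188:8188 \
    -v "${MODEL_DIR}:/opt/ml/model" \
    "${IMAGE}"
else
  echo "docker not found; install Docker to run the GUI locally"
fi
```

Running the UI this way surfaces the full Python traceback for each IMPORT FAILED node, which is much easier to act on than the one-line CloudWatch summary.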
Hello! Thanks to this, the ComfyUI backend is now available as an API.
To use custom nodes, I added a few packages to the Comfyui/custom_nodes/ folder. In Dockerfile.inference, under the Install ComfyUI code, I added the following. The deployment was successful, but when I called the API afterwards, I got an error that the custom nodes were not read, as shown below.
invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because node CR LoRA Stack does not exist.', 'details': "Node ID '#5'", 'extra_info': {}}
How do I use custom nodes?