Open afecelis opened 6 months ago
This would be a nice addition! I haven't personally used ComfyUI, but it looks great. I don't have the bandwidth to do this right now, but I'd love any collaboration!
(Each backend has its own API file and it should be pretty straightforward for a developer to add a new one for Comfy).
There's a ComfyUI addon that lets you use nodes in Blender - running it right through Blender - very handy, and it works well. There could be useful code there.
Thanks for that info, @Vidyut!
I added ComfyUI support for a project I worked on over the past few months. You can check my fork here: https://github.com/RobeSantoro/AI-Render-ComfyUI-Support
I followed @benrugg's implementation for the most part, but my code is still messy and needs some cleanup.
ComfyUI is different from the other backends that the add-on supports, so I decided to create separate files and changed the approach during development. The result now mixes two kinds of methods to interact with the ComfyUI API.
For now, I don't want to bring ComfyUI's nodes into a node editor in Blender; I find ComfyUI pretty lovely to work with. My idea is to bring a set of selected nodes into the Blender UI.
I'm planning a full rewrite of the ComfyUI support in a new fork. If someone is interested, let's get in touch.
@RobeSantoro thanks for sharing this. A quick glance at your fork shows that you've done a lot of work! If you end up having time to clean it up, feel free to submit a PR!
@benrugg Of course, I want to clean it up! How do I proceed with the refactoring of my code? Do you want me to incorporate the Comfy stuff into your files, or can I keep them separate like they are now?
E.g. I created ./properties_comfy.py, operators_comfyui.py, ./ui/ui_panels_comfyui.py. Would you like me to move the code into the respective add-on files/folders?
@RobeSantoro ah, interesting - I actually haven't had a chance to look at your code in more detail or run it. Is Comfy so different that the UI panels have unique needs? I'll leave it up to you how to best organize it to keep it modular and maintainable. I was mostly going off the fact that you mentioned it needed to be cleaned up 😄
Hey @benrugg !
Here's a video recorded in May: https://www.patreon.com/posts/104721510
With ComfyUI, you can create stable diffusion pipelines and save them as JSON for API usage.
My initial idea was to stick to your implementation of the other backends. While working on it, I decided to change the approach and create a ComfyUI Panel that auto-populates if a specific set of nodes is found in the JSON.
So now, in the code, there are some cases in which I'm using the initial approach, and I would like to refactor those.
I'll keep separate files for convenience; if you create a ComfyUI Support branch in your repo, I could make a pull request on that branch so that your main stays clean.
What do you think?
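For anyone following along, here's roughly what an API-format workflow export looks like ("Save (API Format)" in ComfyUI): a flat dict of node IDs, each with a `class_type` and `inputs`, where links are encoded as `["<source_node_id>", <output_index>]`. The IDs and values below are hypothetical, not taken from the fork:

```python
import json

# A hypothetical minimal fragment of a ComfyUI API-format workflow.
# Real exports contain every node in the graph.
workflow_json = """
{
  "3": {
    "class_type": "KSampler",
    "_meta": {"title": "main_sampler"},
    "inputs": {"seed": 42, "steps": 20, "cfg": 7.0, "denoise": 1.0,
               "model": ["4", 0], "positive": ["6", 0],
               "negative": ["7", 0], "latent_image": ["5", 0]}
  },
  "4": {
    "class_type": "CheckpointLoaderSimple",
    "inputs": {"ckpt_name": "sd15.safetensors"}
  }
}
"""

workflow = json.loads(workflow_json)
print(workflow["3"]["class_type"])  # KSampler
```

Because the structure is this regular, a panel can populate itself just by looking up `class_type` and reading each node's `inputs`.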
@RobeSantoro wow, I didn't realize how different the support for ComfyUI was going to be. I've made a new branch, comfyui-support. Feel free to submit a pull request there.
I will do my best to dive deeper into everything you've done as soon as I can. If possible, I will do that this weekend.
Hey @benrugg! I’ve been busy this week.
Next week, I'll record a new video with a code walkthrough and a demo of the ComfyUI support achieved so far.
Hi @benrugg !
Sorry for the delay... I've been stuck on another project.
I made a PR to this repo's comfyui-support branch if anybody wants to check it out.
@RobeSantoro great - I will check this all out on Sunday. Thanks for submitting it! It's cool that you put so much work into it.
ComfyUI is now supported on the comfyui-support branch.
@RobeSantoro as this moves forward more, I can take your readme changes and put them in the WIKI
I'm so excited about this new development! Thanks to everyone involved! I'm getting the following error, does anyone know how to fix it? (screenshot attached)
Ps. I think I was able to fix the ckpt issue by modifying the example_api template json, but now I'm getting this error:
invalid prompt: {'type': 'invalid_prompt', 'message': 'Cannot execute because node AlphaChanelAddByMask does not exist.', 'details': "Node ID '#55'", 'extra_info': {}}
@afecelis Hey Alvaro, thanks for reporting!
I'm creating a short tutorial on making the branch work. To solve the specific issue you're having now, you can just open Comfy in the browser on localhost http://127.0.0.1:8188/ and install the missing nodes via the Manager.
That specific node is used to apply the alpha of the actual render to the generated image. https://comfy.icu/node/AlphaChanelAddByMask I'm also open to ideas if somebody knows how to do this natively.
Let me know if you get it to work!
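The "node does not exist" error means the running ComfyUI instance doesn't have that custom node installed. As a hedged sketch (these helper names are mine, not the add-on's), you could pre-check a workflow against the classes the server reports at its `/object_info` endpoint before queueing:

```python
import json
import urllib.request

def missing_node_classes(workflow: dict, available: set) -> set:
    """Class types the workflow uses that the server doesn't provide."""
    return {node["class_type"] for node in workflow.values()} - available

def fetch_available_classes(server="127.0.0.1:8188") -> set:
    """Query ComfyUI's /object_info endpoint, which lists every node class
    (built-in and custom) the running instance has installed."""
    with urllib.request.urlopen(f"http://{server}/object_info") as resp:
        return set(json.load(resp).keys())

# Example with a hypothetical two-node workflow and a server that lacks
# the Allor custom node:
workflow = {
    "3": {"class_type": "KSampler", "inputs": {}},
    "55": {"class_type": "AlphaChanelAddByMask", "inputs": {}},
}
print(missing_node_classes(workflow, {"KSampler"}))  # {'AlphaChanelAddByMask'}
```

Reporting the missing classes up front would give a friendlier message than the raw `invalid_prompt` response.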
Thank you so much @RobeSantoro, I installed the Allor nodes and it solved the alpha issue. Now I'm getting the following error: `got prompt Failed to validate prompt for output 9:`
Ps. I just found the Lora on Civitai: https://civitai.com/models/211960/robotic-furries
I'm going to try again. How do you disable a Lora in Blender's Lora interface? By setting its values to zero?
Ps2. How do you use the 3 workflow templates? I'm using the example_api.json file. Do we need to modify their paths for things to work properly? i.e.: "lora_name": "SD15\AS style_Mechanical disc.safetensors",
BTW. this lora can be found here: https://civitai.com/models/138157?modelVersionId=152605
OK, so I made some progress. With everything in place and the basic template's file paths modified, everything shows up OK in Blender's interface when you refresh each section. I rendered my scene again and got no errors this time, but the output was a bit weird, as if inside a hexagon:
@afecelis Eh, that hexagon seems to be the default cube! Something needs to be fixed at the Compositor stage, I think. Check also the ComfyUI input folder. You should see the temp passes folders:
The example_api.json is just a workflow example. You can open the non-API version, which is located outside the ./workflow_api folder.
Bypass the nodes of the alpha trick I'm using if you want, and have a look. Take your time to understand the dataflow and read the notes.
You can modify it as you need and create any workflow you want. You don't need to download the ckpt models or the Lora models I'm testing.
Have a LoadImage node titled "color" for the input image. This image is ignored if the KSampler's denoise value is 1.
Have a SaveImage node titled "output_image" for the output image.
If you want to drive the generation better, you can use controlnet models with the same logic. (The lineart is created from the "color" image.)
Of course, you need a KSampler node titled "main_sampler", but you can create any pipeline you need. At the moment, the supported node classes that will appear in the Blender UI are the following:
"CheckpointLoaderSimple",
"KSampler",
"LoraLoader",
"ControlNetApplyAdvanced",
"ACN_AdvancedControlNetApply",
"SelfAttentionGuidance",
"UpscaleModelLoader",
"CLIPSetLastLayer"
If you need any other node inside Blender, just ask me.
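The auto-populate idea could be sketched like this (a hypothetical helper, not the fork's actual code): walk the API-format workflow and keep only nodes whose `class_type` is on the supported list, so the Blender panel draws widgets for just those.

```python
# Hypothetical helper for the auto-populating panel described above.
SUPPORTED_CLASSES = {
    "CheckpointLoaderSimple", "KSampler", "LoraLoader",
    "ControlNetApplyAdvanced", "ACN_AdvancedControlNetApply",
    "SelfAttentionGuidance", "UpscaleModelLoader", "CLIPSetLastLayer",
}

def collect_ui_nodes(workflow: dict) -> dict:
    """Map node_id -> node for every node the panel should expose."""
    return {node_id: node for node_id, node in workflow.items()
            if node.get("class_type") in SUPPORTED_CLASSES}

workflow = {
    "3": {"class_type": "KSampler", "inputs": {"steps": 20}},
    "9": {"class_type": "SaveImage", "inputs": {}},
}
print(sorted(collect_ui_nodes(workflow)))  # ['3']
```

Unsupported nodes stay untouched in the JSON, so the workflow still runs as authored in ComfyUI; Blender only exposes the whitelisted ones.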
To solve your issue, make sure to:
Use Blender >= 4.2.0
Check the ComfyUI Setup panel
Leave the compositor open during the render and check what you're sending to Comfy and where.
Check the mist pass: for the depth controlnet image, I'm sending the Mist pass, so enable the Mist Viewport Display for the active camera and set it up to match the clipping points of the scene. You can also clamp it in the compositor with a Color Ramp node; using a Viewer Node, you can preview the passes in the Image Editor.
Leave the terminal where Comfy is running and the Blender system console open to check what's happening during the render, report any errors, etc.
Enable Keep User Interface in the Temp Editors.
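For intuition, the Mist pass with Blender's default linear falloff is just depth normalized between the mist start and start + depth, clamped to 0..1 - which is also what clamping the pass with a Color Ramp amounts to. A small sketch of that mapping (pure Python, linear falloff only; Blender also offers quadratic and inverse-quadratic):

```python
def mist_value(z: float, start: float, depth: float) -> float:
    """Linear mist falloff: 0.0 at `start`, 1.0 at `start + depth`, clamped.

    Mirrors what the Mist pass encodes with linear falloff, and what
    clamping the pass with a Color Ramp node produces.
    """
    if depth <= 0:
        raise ValueError("depth must be positive")
    return min(max((z - start) / depth, 0.0), 1.0)

print(mist_value(5.0, 0.0, 10.0))   # 0.5
print(mist_value(25.0, 0.0, 10.0))  # 1.0 (beyond start + depth: fully misted)
```

This is why matching the mist start/depth to the scene's useful depth range matters: anything past start + depth flattens to the same value and gives the controlnet no depth information.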
I'm recording a comprehensive tutorial, but it takes a lot of time. Later, I'll write a chapter for the wiki.
I hope you'll be able to solve your issue. Let me know!
@RobeSantoro Whoa! Thanks for all that info! The addon is implementing more than I had imagined! Amazing! This is a dream come true, I mean, Blender + ComfyUI. I'm in fact getting the temp passes created in Comfy's input folder, but I'll start going slowly through each of the steps you're showing me. I'll let you know how things go. Looking forward to that tutorial; if you need any help (i.e. video editing, etc.) or if there's any way I can help you out, please let me know. Thanks so much! Ciao!
Ps. I just ran into your May post video: https://www.patreon.com/posts/104721510
Checking it out right now
@afecelis
> Ps. I just ran into your May post video: https://www.patreon.com/posts/104721510
That video needs to be updated! It shows how to add nodes to the add-on. If you or anyone else has experience with Python and the Blender Python API, please have a look at the code and tell me if anything smells.
I am now setting up a clean Win11 system to test and record a video.
@benrugg
I created a PR (from RobeSantoro:comfyui-support) to merge into the comfyui-support branch on your repo, so you can check it.
Finally, the comfyui-support branch is ready to merge into main.
Do you want me to issue a PR for that, or do you want to merge it by yourself?
Cheers
@RobeSantoro awesome, I just approved that PR. I will check this all out finally - and install Comfy finally! - as soon as I can and then get back to you if I see any changes that need to be made.
I really appreciate all your work on this!
@benrugg
To install ComfyUI, follow my instructions in the README_comfy.md
https://github.com/benrugg/AI-Render/blob/comfyui-support/README_comfy.md#installation-of-comfyui
On my machines, I found it's better to have Python 3.11 on the system and install a 3.11 conda env on top of that.
I worked with the addon and the ComfyUI backend extensively on Windows, but I also did some tests on macOS.
I'll implement a proper websocket connection and image upload during my next project, so it will be possible to use a remote Comfy server just by changing the address.
Thanks for sharing your repo. I learned a lot from this code base.
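For context, the remote-server idea mostly means parameterizing the address: ComfyUI queues work via `POST /prompt` and streams progress over `ws://<host>/ws?clientId=<id>`. A hedged sketch (function name is mine) that builds, without sending, the queue request using only the standard library:

```python
import json
import urllib.request
import uuid

def build_queue_request(workflow, server="127.0.0.1:8188", client_id=None):
    """Build (but don't send) the POST /prompt request ComfyUI expects.

    Passing a client_id lets a websocket listener on ws://<server>/ws
    correlate progress messages with this prompt.
    """
    payload = {"prompt": workflow, "client_id": client_id or uuid.uuid4().hex}
    return urllib.request.Request(
        f"http://{server}/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_queue_request({"3": {"class_type": "KSampler", "inputs": {}}})
print(req.full_url)      # http://127.0.0.1:8188/prompt
print(req.get_method())  # POST
```

Pointing at a remote machine is then just `build_queue_request(wf, server="192.168.1.50:8188")`; the actual send would be `urllib.request.urlopen(req)`.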
Great - thank you for calling that all out. I work on a Mac all day, but keep a PC for testing, too, so hopefully we'll cover all the bases 😄
Hey @RobeSantoro, I looked through this in detail today (finally!). I'm really impressed with all the work you've put into this. It's such a powerful set of features! 💯
I'd like to move forward with getting this merged in and released on GitHub, Blender Market, and Gumroad.
Because so many of my users are not coders, I tried to make the UI as intuitive and simple as possible. (Not sure I totally hit that mark, haha, because Blender is so restrictive 😅). With that in mind, though, I'd like to remove the backend preferences from the Setup panel, just to keep it simpler.
Here are my notes as I just tested everything:
- In `ui_panels.py`, remove the two additions (`Automatic1111` and `ComfyUI`) and reset them to their respective defaults.
- I got `NameError: name 'os' is not defined`. There might be a cross-platform way to do this. (I honestly hate Python's bad compatibility across platforms!)
- In `comfyui_api.py`, check these (or you might not need them - feel free to test without them and see if any errors come up): `supports_choosing_upscaler_model()`, `supports_choosing_upscale_factor()`, `fixed_upscale_factor()`
- Bump the version to `(1, 2, 0)` in `__init__.py`.
Feel free to make a PR to merge the comfyui-support branch to main. You'll need to merge my recent changes or rebase.
After we get this merged, I'll probably add the Comfy readme instructions to one of the wiki pages.
And when it's all merged in, I'll create the release and update Blender Market and Gumroad.
Let me know if you want to discuss anything!
Hi @benrugg! I want to see my contribution merged into main, and I totally agree with your considerations.
Of course, it's bad practice to show Preferences in the UI Panels. I added that and the quick switch to Comfy only for development purposes; I'll remove them as I prepare the PR. The same goes for allowing empty prompts.
I'll check the os issue on macOS. It's just a small automation that creates folders for animations, which I used during production to quickly test generation parameters. If I can't find a cross-platform solution, I'll remove it too.
I'll check whether the upscaler definitions are needed and bump the version.
Thanks for taking the time to review the branch. I'll open the PR ASAP.
Sounds great! Thank you for taking the time to make this all happen.
Describe the feature you'd like to see:
Hi Ben, it would be great if, besides connecting to A1111 locally, Blender could also connect to a running instance of ComfyUI. I've tried using its localhost address, but it can't connect to it via the API. Also, I don't know how much Comfy's node structure would impact what you have already developed around A1111.
Additional information
No response