Closed: T8mars closed this issue 3 months ago
I'm going to look into this tomorrow, thanks for flagging
Thank you for your reply. input_1.zip is the test workflow. I used the 'input1' picture and changed the size to 512*512, and it ran with an error, but when I kept the same size and parameters it ran successfully, so I don't know why this happens.
@T8mars, how much RAM do you have? I have a feeling this may be a RAM issue if you have somewhere around 12-16GB and are running on high-detail mode
@peteromallet I have 16GB of graphics memory and 64GB of RAM, and everyone in the group has this problem. Sometimes it works, but then it reports an error and I need to restart for it to run properly; even changing a parameter can cause this problem. I'm not alone.
I'm in an AI group; someone recommended this node to us, and we all encounter this problem. I'm sending you a picture so you can see it.
In the group, a user with 24GB of graphics memory also encounters this problem, so I don't think RAM is the key cause of this error.
I like this project, thanks for your node. But when I try to run it, the Efficient KSampler shows an error, and when I try another KSampler it shows error two. I'm in an AI group, and when I asked others I found they have the same error two. Please check and fix it, we need your help.
Haha, I get this error too, count me in, haha.
Hi, you can see that linjian-ufo has the same problem. And when I scanned the closed issues, I found others have had the same problem too.
Hi @T8mars, @T8star1984, @linjian-ufo, a potential fix has been implemented in one of the nodes this uses - can you update the Advanced-ControlNet node, test it, and let me know if the problem re-occurs?
Okay, I will test it and report back to you as soon as possible.
I tried it just now and it shows a new error. It seems to say that CUDA and PyTorch are incompatible, but I checked and the versions match, and they run well in other workflows.
My torch is 2.2.1, CUDA is 12.1, and xformers is 0.0.25.
When I use torch 2.1.2, CUDA 12.1, and xformers 0.0.25, the workflow works normally, but I need to try again.
Maybe the node doesn't support the newer torch, CUDA, and xformers versions.
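To rule out mismatched installs, it can help to confirm which package versions are actually active in the ComfyUI Python environment. A minimal sketch (it only assumes the standard PyPI distribution names `torch` and `xformers`):

```python
import importlib.metadata
import importlib.util

def report_version(pkg: str) -> str:
    """Return '<pkg>: <version>' if installed, else '<pkg>: not installed'."""
    if importlib.util.find_spec(pkg) is None:
        return f"{pkg}: not installed"
    return f"{pkg}: {importlib.metadata.version(pkg)}"

# Print the packages relevant to the reported incompatibility.
for pkg in ("torch", "xformers"):
    print(report_version(pkg))
```

Running this inside the same interpreter that launches ComfyUI avoids being misled by a different system-wide Python install.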
There's a new problem: 4 pictures took 3 hours! GPU usage is always at 99%. I asked other users; with the new version their GPU memory overflows too.
Prompt executed in 12519.21 seconds. The last version only took a few minutes.
Hi @T8star1984,
If you scroll up, do you notice a message saying it's going into low-VRAM mode? I think that's what's happening - you're hitting your VRAM limits and it switches to low-VRAM mode - I'd suggest you turn off high-detail mode.
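For anyone who wants to check their headroom before a run, here is a minimal sketch. The 20% threshold is an arbitrary illustrative assumption; ComfyUI's own model management handles the actual low-VRAM switching:

```python
def low_vram(free_bytes: int, total_bytes: int, threshold: float = 0.2) -> bool:
    """True when free VRAM is below `threshold` of total capacity."""
    return free_bytes / total_bytes < threshold

try:
    import torch
    if torch.cuda.is_available():
        # mem_get_info returns (free, total) in bytes for the current device.
        free, total = torch.cuda.mem_get_info()
        print(f"free {free / 2**30:.1f} GiB of {total / 2**30:.1f} GiB")
        if low_vram(free, total):
            print("Low VRAM headroom: consider disabling high-detail mode.")
    else:
        print("CUDA not available.")
except ImportError:
    print("torch not installed.")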
Yes, I used shared graphics memory mode because another user ran the new version and their graphics memory overflowed, but the last version didn't have this problem. Let me try high-VRAM mode and I'll tell you the result.
It's quicker than last time and doesn't report errors any more.
Today I generated a new one; it took a whole afternoon and has only just finished, with no progress messages at all. Can you optimize it? The console is frozen and GPU usage is not high. I feel something is wrong.
Finally, the problem was solved. I had downloaded the latest "musetalk" node, which automatically loads its model when starting ComfyUI, directly requiring 8GB of graphics memory and causing the GPU overflow and lag issues. Thank you for your replies over the past few days. I have resolved all the issues. Thank you for your node, it is very useful.
That's great to hear and thank you so much for your compliment!
I like this project, thanks for your node. But when I try to run it, the Efficient KSampler shows an error, and when I try another KSampler it shows error two. I'm in an AI group, and when I asked others I found they have the same error two. Please check and fix it, we need your help.
![b3](https://github.com/banodoco/Steerable-Motion/assets/96167788/786604f1-bb90-4ea2-85e7-eb066b2f8354)