Following #6
After I disabled all other custom nodes in ComfyUI, the extremely long delay in `request.post()` was gone, and performance became acceptable: around 1.x seconds for a simple get & send.
Weirdly enough, the delay originated in the stdlib:
```python
email/utils.py:280 => new_params.append((name, '"%s"' % quote(value)))
```
The operation is just formatting a string, putting it into a tuple, and appending that tuple to a list.
There's no multi-threading involved; it all runs on the main thread. The delay disappeared once I disabled the other custom nodes. I don't know which one conflicts with it, and I'm too lazy to track it down, so until someone else runs into the same problem I'm going to leave it alone.
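To sanity-check that the stdlib line itself can't be the bottleneck, here is a stdlib-only microbenchmark of the same shape of operation (quote, format, tuple, append). The helper name `append_quoted` is mine, not from `email/utils.py`; it just mirrors that line:

```python
# Microbenchmark the operation from email/utils.py:280:
# format a string, build a tuple, append it to a list.
# If this is fast in isolation, the 15 s delay must come from how
# often (or under what conditions) the surrounding code is called.
import timeit
from email.utils import quote


def append_quoted(new_params, name, value):
    # Same shape as the stdlib line: quote, format, tuple, append.
    new_params.append((name, '"%s"' % quote(value)))


def bench(iterations: int = 100_000) -> float:
    new_params: list = []
    return timeit.timeit(
        lambda: append_quoted(new_params, "name", 'a "quoted" value'),
        number=iterations,
    )


if __name__ == "__main__":
    print(f"{bench():.4f} s for 100k appends")
```

On any recent machine this finishes far below the delays seen in the issue, which supports the conclusion that some other custom node is interfering rather than this line being slow.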
- `await request.post()` SIO with long delay: 15 s
Putting that long delay aside, the post still took about 1 second without it.
- `await request.post()` SIO without long delay: 1 s
- aiohttp WebSocket: 7 ms
The difference is large, but I don't think this performance has any major impact on the user experience for now.
If this is acceptable, I think we can close this issue until further action is needed.
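For anyone who wants to reproduce numbers like the ones above, a `time.perf_counter()` wrapper around whatever coroutine is being timed is enough. This is a stdlib-only sketch with a placeholder workload (`asyncio.sleep`), not the actual Socket.IO or aiohttp calls:

```python
# Stdlib-only sketch of how the latencies above can be measured:
# wrap any awaitable in perf_counter() and report wall-clock time.
# Swap the placeholder sleep for request.post() / ws.send_str() etc.
import asyncio
import time


async def timed(label: str, coro):
    start = time.perf_counter()
    result = await coro
    elapsed = time.perf_counter() - start
    print(f"{label}: {elapsed * 1000:.1f} ms")
    return result, elapsed


async def main() -> float:
    # Placeholder workload standing in for the real transport call.
    _, elapsed = await timed("sleep(0.05)", asyncio.sleep(0.05))
    return elapsed


if __name__ == "__main__":
    asyncio.run(main())
```

Timing at the call site like this keeps event-loop scheduling overhead inside the measurement, which matches how the 15 s / 1 s / 7 ms figures would be perceived by a caller.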