dehort closed this 4 years ago
Can you give steps to reproduce this? I'm not seeing it in my own testing.
@ghjm I am using the receptor_http plugin to hit a web server that sleeps for 10 seconds on each request it receives. I am using the receptor-controller to send multiple (10, for example) work requests down to the receptor node. The work requests simply hit the web server and block for 10 seconds. You can see that only one request is dispatched at a time. The spin loop will be triggered and CPU usage will climb.
This is the simple sleepy web server that I'm using:
```python
import asyncio
from aiohttp import web


async def hello(request):
    print("got a request..")
    await asyncio.sleep(10)
    print("awake returning...")
    return web.Response(text="Hello, world")


app = web.Application()
app.add_routes([web.get('/', hello)])

web.run_app(app, port=9000)
```
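A quick way to rule the web server in or out is to hit it concurrently without Receptor in the path. This is only an illustrative sketch, not part of the reproduction steps; it assumes the sleepy server above is listening on http://localhost:9000. Ten concurrent requests should finish in roughly 10 seconds if they are handled in parallel, or roughly 100 seconds if they are serialized.

```python
import asyncio
import time

import aiohttp


async def fetch(session, url):
    async with session.get(url) as resp:
        return await resp.text()


async def main():
    url = "http://localhost:9000"  # the sleepy server from above
    start = time.monotonic()
    async with aiohttp.ClientSession() as session:
        # Fire 10 requests concurrently, mirroring the 10 work requests.
        await asyncio.gather(*(fetch(session, url) for _ in range(10)))
    print(f"10 requests finished in {time.monotonic() - start:.1f}s")


asyncio.run(main())
```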
Thanks, @dehort - and we're confident this isn't the Controller only dispatching one at a time? Can it be reproduced with `receptor send` from the command line?
@j00bar I'm pretty confident that this is not the controller. I can see that all the messages are sent to the receptor node at roughly the same time. It looks like receptor reads the messages off the connection and adds them to the queue. They sit in the queue while the work request is processed.
This is what I'm using to test:
```bash
for a in {1..10}; do
  receptor send --directive receptor_http:execute foo '{"url": "http://localhost:9000", "method": "GET"}' &
done
```
I see requests being executed in parallel on both the `devel` and `release_0.5` branches. I have not yet been able to observe the serialization behavior you're describing.
Closing for #167
Even though the WorkManager has a thread pool, it is not dispatching work requests in parallel. The WorkManager.handle() method does not hand control back to the event loop while checking whether the worker thread has put something onto the response_queue. This results in a spin loop in the WorkManager.handle() method, which drives CPU usage to 100%.
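For illustration, here is a minimal sketch of the pattern described above. The names handle(), worker, and response_queue follow the wording of this issue, but the bodies are assumptions rather than Receptor's actual code: a coroutine that polls a thread-safe queue without ever awaiting never yields to the event loop, so other work requests cannot be dispatched and one core spins at 100%; awaiting the blocking get() via run_in_executor keeps the event loop free.

```python
import asyncio
import queue
import threading
import time


def worker(response_queue):
    # Simulates the plugin doing 10 seconds of work in a worker thread.
    time.sleep(10)
    response_queue.put("done")


async def handle_spinning(response_queue):
    # The pattern described above: poll the queue without ever awaiting,
    # so control is never handed back to the event loop and the CPU spins.
    while True:
        try:
            return response_queue.get_nowait()
        except queue.Empty:
            pass  # no await here -> busy loop, other handlers starve


async def handle_yielding(response_queue):
    # One way to avoid the spin loop: run the blocking get() in a thread
    # via run_in_executor so the event loop stays free for other requests.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, response_queue.get)


async def main():
    response_queue = queue.Queue()
    threading.Thread(target=worker, args=(response_queue,), daemon=True).start()
    # Swap in handle_spinning() to watch one core pin at 100% for 10 seconds.
    print(await handle_yielding(response_queue))


asyncio.run(main())
```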