aio-libs / aiohttp

Asynchronous HTTP client/server framework for asyncio and Python
https://docs.aiohttp.org

Nondeterministic server termination using ProcessPoolExecutor #4076

Closed isp1356 closed 2 months ago

isp1356 commented 5 years ago

Long story short

When using a ProcessPoolExecutor from Python's concurrent.futures module, SIGTERM handling does not seem to be deterministic. I'm trying to register a shutdown callback via aiohttp.web.Application.on_shutdown: sometimes the callback is called and sometimes it is not.

Expected behaviour

When sending a SIGTERM, the on_shutdown callback in the script below should always be executed.

Actual behaviour

When sending a SIGTERM, the on_shutdown callback in the script below is not always executed.

Steps to reproduce

import asyncio
import concurrent.futures  # "import concurrent" alone does not make concurrent.futures available
import os
from time import sleep
import aiohttp.web

print("server pid:", os.getpid())

def sighandler(signum, frame):
    print("signal handler called with signal", signum)

async def on_shutdown(app):
    print('shutdown callback........')

def blocking():
    print("worker pid:", os.getpid())
    sleep(10000000)

async def test(loop, app):
    app.on_shutdown.append(on_shutdown)
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as pool:
        await loop.run_in_executor(pool, blocking)

app = aiohttp.web.Application()
loop = asyncio.get_event_loop()
loop.create_task(test(loop, app))

aiohttp.web.run_app(app)

Output:

server pid: 387406
======== Running on http://0.0.0.0:8080 ========
(Press CTRL+C to quit)
worker pid: 387408

kill -15 387406 sometimes triggers on_shutdown and sometimes doesn't.
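Incidentally, the repro defines sighandler but never installs it, so SIGTERM keeps its default disposition as far as that function is concerned. A minimal sketch of actually registering it with signal.signal, to check whether the SIGTERM reaches the server process at all (recording into a list instead of printing is my addition; note that installing your own handler may also interfere with whatever handling run_app sets up):

```python
import os
import signal

received = []

def sighandler(signum, frame):
    # Record which signal arrived instead of just printing it.
    received.append(signum)

# The original script defines sighandler but never registers it;
# this is the missing step.
signal.signal(signal.SIGTERM, sighandler)

# Deliver SIGTERM to ourselves to confirm the handler fires.
os.kill(os.getpid(), signal.SIGTERM)
```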

Your environment

aiohttp server: 3.6.0
python: 3.6.9
os: archlinux (5.2.13-arch1-1-ARCH)

asvetlov commented 5 years ago

Thanks for the report. I'm overwhelmed by other tasks and have no idea when I can find time for debugging the issue.

Help is needed to investigate what's going on with multiprocessing.

Dreamsorcerer commented 2 months ago

I don't think running it like that (using the low-level asyncio APIs) can work reliably. run_app() expects to be the entry point of your application, so your example would make more sense, and be a lot more reliable, if you did something more like:

async def test(app):
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as pool:
        t = asyncio.get_running_loop().run_in_executor(pool, blocking)
        yield  # code before this runs at startup, code after it at cleanup

app = aiohttp.web.Application()
app.on_shutdown.append(on_shutdown)
app.cleanup_ctx.append(test)

if __name__ == "__main__":
    aiohttp.web.run_app(app)
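The cleanup_ctx pattern above works because the registered function is an async generator: everything before the yield runs at application startup, everything after it at cleanup. A minimal sketch of that protocol in plain asyncio, with no aiohttp dependency, driving the generator by hand the way the framework does internally (lifespan and main are illustrative names, not aiohttp API):

```python
import asyncio

events = []

# An async generator in the same shape cleanup_ctx expects:
# code before the yield is the startup phase, code after it
# is the cleanup phase.
async def lifespan():
    events.append("startup")
    yield
    events.append("cleanup")

async def main():
    gen = lifespan()
    await gen.asend(None)      # advance to the yield: startup code runs
    events.append("running")   # the application would serve requests here
    try:
        await gen.asend(None)  # resume past the yield: cleanup code runs
    except StopAsyncIteration:
        pass

asyncio.run(main())
print(events)  # ['startup', 'running', 'cleanup']
```

Because the with block wrapping the ProcessPoolExecutor sits around the yield, the pool is only shut down after the yield resumes, i.e. during application cleanup rather than at some arbitrary point while the server is still running.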