Closed pangxiaobin closed 6 months ago
Because your implementation shuts down the scheduler as soon as async_scheduler_start() exits (when the context manager block is exited).
What's the problem with my solution? Why did you have to make your own?
`await async_scheduler_shutdown()`
I'm currently studying the relevant material and trying to implement an alternative solution. The FastAPI documentation says that when a lifespan is set, the code before the 'yield' runs once at startup, and the code after the 'yield' runs only when the program is shutting down. I implemented it this way so I can manually shut down the scheduler worker when the program ends.
On the other hand, if multiple workers are specified using uvicorn, will there be multiple scheduler workers running simultaneously?
> I am currently studying relevant content and attempting to implement alternative solutions. In the FastAPI documentation, it is mentioned that setting the lifespan will start running the code after the program is launched and will only execute the code after the 'yield' statement when the program is ending. I implemented it this way to manually shut down the scheduler worker when the program ends.
I'm not sure what you're talking about. There is no yield statement anywhere in the asgi_fastapi.py example.
> On the other hand, if multiple workers are specified using uvicorn, will there be multiple scheduler workers running simultaneously?
Yes, but that's not a problem on APScheduler 4, as it supports multiple concurrently running scheduler instances.
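For multiple scheduler instances to coordinate across uvicorn worker processes, they need a shared data store and event broker rather than the in-memory ones used in the examples below. A hedged sketch of what that wiring might look like on the APScheduler 4 alphas (the class paths and the `from_async_sqla_engine` constructor reflect my reading of the 4.0 alpha API, and the connection URL is a placeholder):

```python
from apscheduler import AsyncScheduler
from apscheduler.datastores.sqlalchemy import SQLAlchemyDataStore
from apscheduler.eventbrokers.asyncpg import AsyncpgEventBroker
from sqlalchemy.ext.asyncio import create_async_engine

# Placeholder URL -- point this at the PostgreSQL instance shared by all workers.
engine = create_async_engine("postgresql+asyncpg://user:pass@localhost/app")

# Each uvicorn worker process builds its own AsyncScheduler against the same
# store/broker; APScheduler 4 coordinates the instances so each job fires once.
data_store = SQLAlchemyDataStore(engine)
event_broker = AsyncpgEventBroker.from_async_sqla_engine(engine)
scheduler = AsyncScheduler(data_store, event_broker)
```

With MemoryDataStore and LocalEventBroker, by contrast, each worker process would run a fully independent scheduler and every schedule would fire once per process.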
Thanks a lot. I'll use the asgi_fastapi demo you provided.
@pangxiaobin i have similar issue and here is my solution.
```python
# -*- coding: utf-8 -*-
import threading
from contextlib import asynccontextmanager
from datetime import datetime

from apscheduler import AsyncScheduler
from apscheduler.datastores.memory import MemoryDataStore
from apscheduler.eventbrokers.local import LocalEventBroker
from apscheduler.triggers.interval import IntervalTrigger
from fastapi import FastAPI

scheduler: AsyncScheduler

data_store = MemoryDataStore()
event_broker = LocalEventBroker()


def tick():
    print(f"threading {threading.get_ident()} Hello, the time is", datetime.now())


async def add_task():
    try:
        await scheduler.add_schedule(tick, IntervalTrigger(seconds=1), id="tick")
    except NameError as e:
        raise RuntimeError("Not init.") from e


@asynccontextmanager
async def lifespan(application: FastAPI):  # noqa: F841
    global scheduler
    async with AsyncScheduler(data_store, event_broker) as scheduler:
        await scheduler.start_in_background()
        await add_task()  # just for test
        yield
        await scheduler.stop()
        await scheduler.wait_until_stopped()


app = FastAPI(lifespan=lifespan)
```
Or wrap it in a separate lifespan context manager (I prefer this):
```python
# -*- coding: utf-8 -*-
import threading
from contextlib import AsyncExitStack, asynccontextmanager
from datetime import datetime
from typing import TYPE_CHECKING

from apscheduler import AsyncScheduler
from apscheduler.datastores.memory import MemoryDataStore
from apscheduler.eventbrokers.local import LocalEventBroker
from apscheduler.triggers.interval import IntervalTrigger
from fastapi import FastAPI

if TYPE_CHECKING:
    from apscheduler.abc import DataStore, EventBroker

scheduler: AsyncScheduler


def tick():
    print(f"threading {threading.get_ident()} Hello, the time is", datetime.now())


async def add_task():
    try:
        await scheduler.add_schedule(tick, IntervalTrigger(seconds=1), id="tick")
    except NameError as e:
        raise RuntimeError("Not init.") from e


@asynccontextmanager
async def scheduler_lifespan(data_store: 'DataStore', event_broker: 'EventBroker'):
    global scheduler
    async with AsyncScheduler(data_store, event_broker) as scheduler:
        await scheduler.start_in_background()
        await add_task()  # just for test
        yield
        await scheduler.stop()
        await scheduler.wait_until_stopped()


@asynccontextmanager
async def lifespan(application: FastAPI):  # noqa: F841
    async with AsyncExitStack() as stack:
        await stack.enter_async_context(
            scheduler_lifespan(
                data_store=MemoryDataStore(),
                event_broker=LocalEventBroker(),
            )
        )
        yield


app = FastAPI(lifespan=lifespan)
```
Hope this helps.
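The AsyncExitStack pattern above generalizes to composing any number of sub-lifespans, with guaranteed last-in-first-out teardown. A minimal stdlib-only sketch of the same enter/exit ordering (no FastAPI or APScheduler required; the names are illustrative):

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events = []


@asynccontextmanager
async def sub_lifespan(name):
    # Stand-in for a resource like a scheduler or database pool.
    events.append(f"start {name}")
    yield
    events.append(f"stop {name}")


@asynccontextmanager
async def lifespan():
    # Enter several sub-lifespans; the stack exits them in reverse order
    # when the app shuts down.
    async with AsyncExitStack() as stack:
        for name in ("scheduler", "db"):
            await stack.enter_async_context(sub_lifespan(name))
        yield


async def main():
    async with lifespan():
        events.append("app running")


asyncio.run(main())
print(events)
# ['start scheduler', 'start db', 'app running', 'stop db', 'stop scheduler']
```

The reverse teardown order matters here: the scheduler started first is stopped last, so anything it depends on is still alive while it shuts down.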
I'm doing something very similar with Litestar's lifespan context also, @unights. Is there a reason that you explicitly `await scheduler.stop()` and `await scheduler.wait_until_stopped()` inside the scheduler context, given that the context itself awaits `stop()` on exit? In other words, is there a difference between your example and this?

```python
@asynccontextmanager
async def scheduler_lifespan(data_store: 'DataStore', event_broker: 'EventBroker'):
    global scheduler
    async with AsyncScheduler(data_store, event_broker) as scheduler:
        await scheduler.start_in_background()
        await add_task()  # just for test
        yield
    await scheduler.wait_until_stopped()
```

FWICT, yours will await the scheduler reaching a stopped status before things registered on the scheduler's internal exit stack have been exited, and I'm curious if this is deliberate?
No difference, it's just my personal habits. :smile:
OK great, thanks for responding!
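The distinction being discussed is purely about where code lands relative to the context manager's own cleanup: statements after `yield` but still inside the `async with` run before that context exits, while statements after the `async with` run afterwards. A stdlib-only sketch (no APScheduler; the `wait_until_stopped` string just marks where the explicit wait would run):

```python
import asyncio
from contextlib import asynccontextmanager

order = []


@asynccontextmanager
async def scheduler_ctx():
    # Stand-in for `async with AsyncScheduler(...)`; the line after its
    # yield represents the scheduler's own stop-on-exit cleanup.
    order.append("enter")
    yield
    order.append("ctx cleanup")


@asynccontextmanager
async def wait_inside():
    # unights' variant: the explicit wait runs BEFORE the context's cleanup.
    async with scheduler_ctx():
        yield
        order.append("wait_until_stopped")


@asynccontextmanager
async def wait_outside():
    # The variant asked about: the wait runs AFTER the context has exited.
    async with scheduler_ctx():
        yield
    order.append("wait_until_stopped")


async def run(cm):
    order.clear()
    async with cm():
        pass
    return list(order)


inside = asyncio.run(run(wait_inside))
outside = asyncio.run(run(wait_outside))
print(inside)   # ['enter', 'wait_until_stopped', 'ctx cleanup']
print(outside)  # ['enter', 'ctx cleanup', 'wait_until_stopped']
```

As the thread concludes, for APScheduler the two orderings come out the same in practice; the sketch only illustrates the sequencing difference the question was probing.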
Peter, I'm trying to implement APScheduler in a Litestar app at the moment. Could you please share how you initialized APScheduler? When I start it with run_until_stopped(), the scheduler runs and triggers tasks but the endpoints aren't working; with start_in_background(), the endpoints work well but it doesn't trigger any tasks.
Things to check first
[X] I have checked that my issue does not already have a solution in the FAQ
[X] I have searched the existing issues and didn't find my bug already reported there
[X] I have checked that my bug is still present in the latest release
Version
APScheduler==4.0.0a4
What happened?
Using APScheduler in a FastAPI lifespan does not work!
How can we reproduce the bug?
Start PostgreSQL using Docker.
dependency
example
log
I've enabled start_in_background, but I haven't seen the output of the tick print.
Why can't my implementation, similar to the asgi_fastapi.py example you provided, run successfully? Do you have any suggestions?