Closed · winstxnhdw closed 1 week ago
After diving into the codebase, I think it's as easy as this? Are there any drawbacks from this approach? https://github.com/litestar-org/litestar/blob/main/litestar/routes/http.py#L62-L161
```diff
 async def handle(self, scope: HTTPScope, receive: Receive, send: Send) -> None:  # type: ignore[override]
     """ASGI app that creates a Request from the passed in args, determines which handler function to call and then
     handles the call.

     Args:
         scope: The ASGI connection scope.
         receive: The ASGI receive function.
         send: The ASGI send function.

     Returns:
         None
     """
     route_handler, parameter_model = self.route_handler_map[scope["method"]]
     request: Request[Any, Any, Any] = route_handler.resolve_request_class()(scope=scope, receive=receive, send=send)

     if route_handler.resolve_guards():
         await route_handler.authorize_connection(connection=request)

-    response = await self._get_response_for_request(
+    response, cleanup_group = await self._get_response_for_request(
         scope=scope, request=request, route_handler=route_handler, parameter_model=parameter_model
     )

     await response(scope, receive, send)
+
+    if cleanup_group:
+        await cleanup_group.cleanup()

     if after_response_handler := route_handler.resolve_after_response():
         await after_response_handler(request)

     if form_data := scope.get("_form", {}):
         await self._cleanup_temporary_files(form_data=cast("dict[str, Any]", form_data))

 def create_handler_map(self) -> None:
     """Parse the ``route_handlers`` of this route and return a mapping of
     http-methods and route handlers.
     """
     for route_handler in self.route_handlers:
         kwargs_model = route_handler.create_kwargs_model(path_parameters=self.path_parameters)
         for http_method in route_handler.http_methods:
             if self.route_handler_map.get(http_method):
                 raise ImproperlyConfiguredException(
                     f"Handler already registered for path {self.path!r} and http method {http_method}"
                 )
             self.route_handler_map[http_method] = (route_handler, kwargs_model)

 async def _get_response_for_request(
     self,
     scope: Scope,
     request: Request[Any, Any, Any],
     route_handler: HTTPRouteHandler,
     parameter_model: KwargsModel,
-) -> ASGIApp:
+) -> tuple[ASGIApp, DependencyCleanupGroup | None]:
     """Return a response for the request.

     If caching is enabled and a response exists in the cache, the cached response will be returned.
     If caching is enabled and a response does not exist in the cache, the newly created
     response will be cached.

     Args:
         scope: The Request's scope
         request: The Request instance
         route_handler: The HTTPRouteHandler instance
         parameter_model: The Handler's KwargsModel

     Returns:
         An instance of Response or a compatible ASGIApp or a subclass of it
     """
     if route_handler.cache and (
         response := await self._get_cached_response(request=request, route_handler=route_handler)
     ):
-        return response
+        return response, None

     return await self._call_handler_function(
         scope=scope, request=request, parameter_model=parameter_model, route_handler=route_handler
     )

 async def _call_handler_function(
     self, scope: Scope, request: Request, parameter_model: KwargsModel, route_handler: HTTPRouteHandler
-) -> ASGIApp:
+) -> tuple[ASGIApp, DependencyCleanupGroup | None]:
     """Call the before request handlers, retrieve any data required for the route handler, and call the route
     handler's ``to_response`` method.

     This is wrapped in a try except block - and if an exception is raised,
     it tries to pass it to an appropriate exception handler - if defined.
     """
     response_data: Any = None
     cleanup_group: DependencyCleanupGroup | None = None

     if before_request_handler := route_handler.resolve_before_request():
         response_data = await before_request_handler(request)

     if not response_data:
         response_data, cleanup_group = await self._get_response_data(
             route_handler=route_handler, parameter_model=parameter_model, request=request
         )

     response: ASGIApp = await route_handler.to_response(app=scope["app"], data=response_data, request=request)

-    if cleanup_group:
-        await cleanup_group.cleanup()
-    return response
+    return response, cleanup_group
```
> Are there any drawbacks from this approach?
Yes, there are. Mainly that it's a breaking change and would deviate from the behaviour described in the documentation. It also does not solve your problem: while you may be able to use the dependency before cleanup inside your generator, it will definitely not be available after you've hit the first `yield`. This behaviour would be even more unexpected and harder to understand.
Fundamentally, I think it's not a good idea to encourage this in the first place, since it's not so much an issue with how Litestar's dependencies work, but rather simply how Python generators work. Take this code for example:
```python
def some_generator_func(client):
    while True:
        yield client.get_something()

def make_generator():
    with Client() as client:
        return some_generator_func(client)
```
You wouldn't expect this to work in regular Python code, and instead, you'd do this:
```python
def some_generator_func():
    with Client() as client:
        while True:
            yield client.get_something()
```
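To make the broken pattern concrete, here's a runnable sketch, where `Client` is a toy stand-in (not a real library) whose method fails once its context manager has exited:

```python
class Client:
    """Toy stand-in for a resource with a managed lifetime."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.closed = True

    def get_something(self):
        if self.closed:
            raise RuntimeError("client is closed")
        return "data"


def some_generator_func(client):
    while True:
        yield client.get_something()


def make_generator():
    # The `with` block exits as soon as we return: the generator has not
    # run a single step yet, but the client is already closed.
    with Client() as client:
        return some_generator_func(client)


gen = make_generator()
try:
    next(gen)
except RuntimeError as exc:
    print(exc)  # client is closed
```

The generator body only starts executing on the first `next()` call, by which point the `with` block that owned the client is long gone.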
The same is true for Litestar dependencies and generators. The dependencies you provide cannot know if they're being used inside a generator, and even if they could, it would be quite unexpected to have them behave differently in that case.
The only issue here is that it's currently not possible to manually request dependencies (with a cleanup scope). This is planned for the DI overhaul in 3.0 though, and with that feature, you'd then be able to do something like:
```python
@post(path='/test')
async def test(app: Litestar) -> ServerSentEvent:
    async def gen():
        async with app.provide("client", ClientProtocol) as client:
            for i in range(10):
                await client.get_by_id([f'{i}'])  # silent exception is thrown
                yield ServerSentEventMessage(data=f'{i}', event='test')
                await sleep(1)

    return ServerSentEvent(gen())
```
(API is entirely hypothetical :))
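Until an API like that exists, the pattern this answer implies is to acquire the resource inside the streamed generator itself, so its lifetime spans every `yield`. A minimal async sketch in plain Python (the `make_client` context manager is hypothetical, standing in for whatever resource you'd otherwise inject):

```python
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def make_client():
    # Hypothetical resource with an async-managed lifetime.
    yield "client"


async def gen():
    # The resource is acquired inside the generator, so it stays open
    # for the generator's entire lifetime, across every yield.
    async with make_client() as client:
        for i in range(3):
            yield f"{client}:{i}"


async def main():
    return [item async for item in gen()]


print(asyncio.run(main()))  # ['client:0', 'client:1', 'client:2']
```

Because the `async with` lives inside the generator body, cleanup runs only when the generator is exhausted or closed, never before the first `yield`.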
### Summary

Currently, according to the Litestar docs,

This means that if you are using your dependency within your generator (maybe you have some notification/progress feature), the dependency is cleaned up before your generator can even begin.
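The timing can be simulated without Litestar at all; the sketch below is a toy model of a generator-style dependency being cleaned up before the streamed generator runs, not Litestar's actual internals:

```python
from contextlib import contextmanager

log = []


@contextmanager
def provide_client():
    # Generator-style dependency: setup, yield, cleanup.
    log.append("setup")
    yield "client"
    log.append("cleanup")


def handler(client):
    # The handler returns a generator to be streamed later.
    def stream():
        log.append(f"stream using {client}")
        yield "event"
    return stream()


def call_route():
    # Mimics the reported behaviour: the dependency is cleaned up as soon
    # as the handler returns, before the stream is ever iterated.
    with provide_client() as client:
        return handler(client)


stream = call_route()
print(list(stream))  # ['event'] -- but the client was already cleaned up
print(log)           # ['setup', 'cleanup', 'stream using client']
```

The log shows `cleanup` happening before `stream using client`: by the time the response body is iterated, the dependency's teardown has already run.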
### Basic Example

### Drawbacks and Impact

Will need to write a special case for streams

### Unresolved questions

No response