Closed — alanpoulain closed this 1 month ago

It seems that the ASGI Lifespan Protocol is not implemented, which means that no lifespan events (see, for instance, FastAPI's lifespan events) can be used for now.

Hi @alanpoulain! That is correct, there's no support for lifespan events at the moment.
Is it hard to do? I wish I could help, but sadly I think I'd need a lot of time to understand how it could be done.
I've implemented the ASGI Lifespan protocol in the `asgi-lifespan` branch. To enable it, add the `lifespan on` directive to your Caddyfile:
```
localhost:8090 {
    route {
        python {
            module_asgi "main:app"
            lifespan on
        }
    }
}
```
You can try it by building Caddy from that branch:

```
CGO_ENABLED=1 xcaddy build --with github.com/mliezun/caddy-snake@asgi-lifespan
```
I've tested it with FastAPI; it would be great if you could confirm that it works as expected!
Thank you very much! I will test it tomorrow 😊
Hello, I tried it in Docker Compose. The startup seems to run, but twice.
For information, this is the code I use:
```python
from contextlib import asynccontextmanager

import redis.asyncio as redis
from fastapi import FastAPI
from fastapi_limiter import FastAPILimiter


@asynccontextmanager
async def lifespan(app: FastAPI):
    print("init")
    # get_settings() comes from the app's own configuration module
    redis_connection = redis.from_url(get_settings().redis_url, encoding="utf8")
    await FastAPILimiter.init(redis_connection)
    yield
    print("exit")
    await FastAPILimiter.close()
```
The shutdown works too, but it also runs twice:
```
app-1 | 2024/05/21 10:10:06.323 INFO shutting down apps, then terminating {"signal": "SIGTERM"}
app-1 | 2024/05/21 10:10:06.323 WARN exiting; byeee!! 👋 {"signal": "SIGTERM"}
app-1 | 2024/05/21 10:10:06.323 INFO http servers shutting down with eternal grace period
app-1 | 2024/05/21 10:10:06.324 INFO http.handlers.python cleaning up module
app-1 | exit
app-1 | 2024/05/21 10:10:06.326 INFO http.handlers.python cleaning up module
app-1 | exit
app-1 | 2024/05/21 10:10:06.330 INFO admin stopped previous server {"address": "localhost:2019"}
app-1 | 2024/05/21 10:10:06.330 INFO shutdown complete {"signal": "SIGTERM", "exit_code": 0}
```
I'm not sure if it's related to your code at all, but sometimes requests hang indefinitely and the shutdown event is never sent. I need to try without lifespan. It may be related to my own code, though.
Thanks for sharing the code.
I'll try reproducing on my side and fix it.
According to the Lifespan spec, requests should not be processed until the startup lifespan event has been received and handled, so it could be related to what you're experiencing.
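For context, the handshake the spec describes can be sketched in plain `asyncio` (an illustrative toy, not caddy-snake's actual Go implementation): the server opens a `"lifespan"` scope, sends `lifespan.startup`, and must wait for `lifespan.startup.complete` before routing any HTTP requests to the app.

```python
import asyncio

events = []  # records protocol messages, for illustration


async def toy_app(scope, receive, send):
    # A minimal ASGI app that only handles the lifespan scope.
    assert scope["type"] == "lifespan"
    while True:
        message = await receive()
        events.append(message["type"])
        if message["type"] == "lifespan.startup":
            await send({"type": "lifespan.startup.complete"})
        elif message["type"] == "lifespan.shutdown":
            await send({"type": "lifespan.shutdown.complete"})
            return


async def drive_lifespan(app):
    # Server side of the protocol: feed messages in, watch for completions.
    startup_done = asyncio.Event()
    shutdown_done = asyncio.Event()
    inbox = asyncio.Queue()

    async def receive():
        return await inbox.get()

    async def send(message):
        events.append(message["type"])
        if message["type"] == "lifespan.startup.complete":
            startup_done.set()
        elif message["type"] == "lifespan.shutdown.complete":
            shutdown_done.set()

    task = asyncio.create_task(app({"type": "lifespan"}, receive, send))
    await inbox.put({"type": "lifespan.startup"})
    await startup_done.wait()  # requests may only be served after this point
    # ... HTTP requests would be handled here ...
    await inbox.put({"type": "lifespan.shutdown"})
    await shutdown_done.wait()
    await task


asyncio.run(drive_lifespan(toy_app))
```

If the server skipped the `startup_done.wait()`, requests could reach the app before its resources (e.g. the Redis connection above) exist — which would explain hanging requests.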
I hope you can reproduce it on your side. Don't hesitate to ask for more information if you need it.
I've tried running it on my side, and I don't see the repeated `init` and `exit` messages:
```
2024/05/25 18:06:00.611 INFO using provided configuration {"config_file": "Caddyfile", "config_adapter": ""}
2024/05/25 18:06:00.614 INFO admin admin endpoint started {"address": "localhost:2019", "enforce_origin": false, "origins": ["//[::1]:2019", "//127.0.0.1:2019", "//localhost:2019"]}
2024/05/25 18:06:00.615 WARN http.auto_https server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server {"server_name": "srv0", "http_port": 9080}
2024/05/25 18:06:00.615 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc000618180"}
init
2024/05/25 18:06:01.313 INFO http.log server running {"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2024/05/25 18:06:01.313 INFO autosaved config (load with --resume flag) {"file": "/Users/mliezun/Library/Application Support/Caddy/autosave.json"}
2024/05/25 18:06:01.313 INFO serving initial configuration
2024/05/25 18:06:01.334 WARN tls storage cleaning happened too recently; skipping for now {"storage": "FileStorage:/Users/mliezun/Library/Application Support/Caddy", "instance": "d2e2a38a-cbda-47b7-bedc-9dfba58ea129", "try_again": "2024/05/26 18:06:01.334", "try_again_in": 86399.99999902}
2024/05/25 18:06:01.335 INFO tls finished cleaning storage units
^C2024/05/25 18:07:04.338 INFO shutting down {"signal": "SIGINT"}
2024/05/25 18:07:04.338 WARN exiting; byeee!! 👋 {"signal": "SIGINT"}
2024/05/25 18:07:04.338 INFO http servers shutting down with eternal grace period
2024/05/25 18:07:04.338 INFO http.handlers.python cleaning up module
exit
2024/05/25 18:07:04.339 INFO admin stopped previous server {"address": "localhost:2019"}
2024/05/25 18:07:04.339 INFO shutdown complete {"signal": "SIGINT", "exit_code": 0}
```
I think you might be importing the app twice. This is my Caddyfile for reference:
```
{
    http_port 9080
    https_port 9443
    log {
        level info
    }
}

localhost:9080 {
    route /item/* {
        python {
            module_asgi "main:app"
            lifespan on
            venv "./venv"
        }
    }
    route / {
        respond 404
    }
}
```
Please share the contents of your Caddyfile.
That's weird; this is my Caddyfile:
```
{
    {$CADDY_GLOBAL_OPTIONS}
}

{$CADDY_EXTRA_CONFIG}

{$SERVER_NAME:localhost} {
    route {
        python {
            module_asgi "main:app"
            lifespan on
            venv "/opt/pysetup/.venv"
        }
    }
}
```
OK, I understand the issue now! It's because the `$SERVER_NAME` value is `localhost, app:80` in my case. I should have seen that earlier, sorry.
Since there are two servers, shouldn't we have two instances of the Python app?
Oh! I see. That's creating two instances of the Python app, I think: one that listens on `app:80` and the other on `localhost`.
Now that we've identified what's going on, I can make the plugin send lifespan events only once per app, even if the same app is used in different places.
That would be great!
I added a cache, so apps are now only imported once. This should fix the problem of lifespan events being sent multiple times.
Can you confirm on your side that it works as expected?
It works well, thank you very much!
Awesome!
Thank you for your help 🙏
I appreciate the time you put into testing and reporting the errors.