tomzorz opened 1 year ago
I have since discovered that this is probably caused by threads/locks/lifecycle behaving strangely on Lambda. I've added extensive (print) logging and, looking at CloudWatch, it seems that the module where I set up seqlog is imported multiple times, with logging itself being started up and shut down 2-3 times in succession.
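One way to cope with the module being imported multiple times is to make the seqlog setup idempotent, so a re-import can't start a second consumer or tear down the first one. This is only a rough sketch of that idea rather than what I actually ended up with; the `_SEQLOG_CONFIGURED` flag and the `log_to_seq` arguments below are illustrative placeholders.

```python
import logging
import seqlog

# Hypothetical guard: configure seqlog at most once per process, even if the
# module is imported several times by the Lambda runtime.
_SEQLOG_CONFIGURED = False
log_handler = None

def configure_seqlog():
    global _SEQLOG_CONFIGURED, log_handler
    if _SEQLOG_CONFIGURED:
        # Already configured by an earlier import; reuse the existing handler
        # instead of starting another consumer thread.
        return log_handler
    log_handler = seqlog.log_to_seq(
        server_url="http://my-seq-server:5341/",  # placeholder
        api_key="my-api-key",                     # placeholder
        level=logging.INFO,
        auto_flush_timeout=10,
        override_root_logger=True,
    )
    _SEQLOG_CONFIGURED = True
    return log_handler
```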
As a workaround, I've managed to create a custom Consumer and Queue class implementation that simply acts as a callback and immediately flushes any logs, without a consumer thread/queue acting as a middleman. Obviously not ideal, but at least it's a start.
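In case the general shape is useful to others, here is a minimal sketch of the idea. It is deliberately not seqlog's actual Consumer/Queue API: `SynchronousSeqHandler` and `ship_to_seq` are names made up for illustration, and my real workaround plugs into seqlog's internals rather than a bare `logging.Handler`.

```python
import logging

class SynchronousSeqHandler(logging.Handler):
    """Sketch only: ship each record immediately instead of queueing it.

    `ship_to_seq` is a hypothetical callable (for example an HTTP POST of a
    one-event batch to the Seq server). There is no background consumer
    thread that a frozen Lambda sandbox could interrupt before the batch
    gets flushed.
    """

    def __init__(self, ship_to_seq):
        super().__init__()
        self._ship = ship_to_seq

    def emit(self, record):
        try:
            # No batching, no queue: flush synchronously on every log call.
            self._ship([self.format(record)])
        except Exception:
            self.handleError(record)
```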
(Also more context: we're using awslambdaric -> uvicorn -> mangum -> fastapi)
Thanks - that’s good to know; I’ll have a think about whether we could provide a simplified (synchronous) implementation for such scenarios 🙂
Description
I'm trying to get seqlog working for our FastAPI app. I've written a middleware that intercepts requests and logs their details using seqlog - this works perfectly locally, both directly and in Docker. But it's broken on AWS Lambda :(
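Roughly, the setup looks like the sketch below - simplified rather than our exact app code; the Seq URL, API key and logged fields are placeholders, and it assumes seqlog's structured logger accepts named format arguments as event properties.

```python
import logging
import time

import seqlog
from fastapi import FastAPI, Request

# Simplified sketch of the setup described above; URL/key are placeholders.
log_handler = seqlog.log_to_seq(
    server_url="http://my-seq-server:5341/",
    api_key="my-api-key",
    level=logging.INFO,
    batch_size=10,
    auto_flush_timeout=10,
    override_root_logger=True,
)

app = FastAPI()
logger = logging.getLogger(__name__)

@app.middleware("http")
async def log_requests(request: Request, call_next):
    started = time.perf_counter()
    response = await call_next(request)
    # Named arguments become structured properties on the Seq event.
    logger.info(
        "Handled {method} {path} -> {status_code} in {elapsed_ms} ms",
        method=request.method,
        path=request.url.path,
        status_code=response.status_code,
        elapsed_ms=round((time.perf_counter() - started) * 1000, 1),
    )
    return response
```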
What I Did
When I call `seqlog.log_to_seq`, I save the returned instance as `log_handler` so I can see what's going on inside. Whenever I log anything, `log_handler.consumer.current_batch_size` correctly shows the batch size increasing in my local test environment. But on Lambda, this value doesn't seem to increase. The only way I've managed to get any logs out of the system is to repeatedly spam one FastAPI endpoint - around the 30th-50th try, logs suddenly start showing up and everything works perfectly.

I've tried calling `flush`, but this doesn't seem to help, as the logs are never queued in the first place. I'm happy to try anything - sadly there's no way to debug on Lambda, each deployment takes 5-7 minutes, and I can only print things to AWS CloudWatch to see what happened.
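For reference, this is roughly what that inspection looks like - a simplified sketch, with the server URL and API key as placeholders and prints standing in for CloudWatch output:

```python
import logging
import seqlog

# Sketch of the inspection described above; server_url/api_key are placeholders.
log_handler = seqlog.log_to_seq(
    server_url="http://my-seq-server:5341/",
    api_key="my-api-key",
    level=logging.INFO,
    batch_size=10,
    auto_flush_timeout=10,
    override_root_logger=True,
)

logging.info("Test message")

# Locally this count goes up as messages are queued; on Lambda it stays at 0.
print("current batch size:", log_handler.consumer.current_batch_size)

# Forcing a flush doesn't help if nothing was queued in the first place.
log_handler.flush()
```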