Closed Saintroi closed 10 months ago
Update: I wrote this without realizing that v0.4.1 had been released. I'm working on updating and deploying that version to see if any of the above has been resolved.
Hey, thanks for reporting these issues.
About the traceback styling in dark mode and the wrong "no pipelines" message: I fixed both in v0.4.1.
About the logging, there were several bugs, especially when using sync tasks (that is, non-async functions), and I fixed those in 0.4.0, so if you were using v0.3.* it makes sense that you faced those issues. Hope you won't get other bugs on the logging 🥶
So if I understand correctly, your end goal is to capture logs in plombery that are not generated with the plombery logger... and yes, that is a limitation at the moment, but I'd like to find a solution, as I think it's a pretty common situation. I'll look into it!
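One possible approach (just a sketch, not plombery's current API) would be to temporarily attach the run's handler to the root logger, so records emitted by any logger reach it via propagation:

```python
import contextlib
import logging

@contextlib.contextmanager
def capture_all_logs(run_handler: logging.Handler):
    """Temporarily attach a run-scoped handler to the root logger so
    records emitted by ANY logger (not just plombery's) reach it."""
    root = logging.getLogger()
    root.addHandler(run_handler)
    try:
        yield
    finally:
        # Always detach, even if the task raises
        root.removeHandler(run_handler)
```

Wrapping the task execution in this context manager would catch logs from third-party modules without requiring users to pass the plombery logger around.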
@lucafaggianelli thanks for getting back to me on this!
I can confirm the traceback info box is great now, and the no pipelines message on the front page is gone!
I was using 0.4.0 in the initial post (hence the dark mode), and I think the issues I listed are still present in 0.4.1 based on the testing I've done.
The function I'm running is synchronous, so perhaps that does play into the problems I'm experiencing here. I may mess around with running it async, but I can't make too many changes to the actual process being run here.
The author of the code I'm running included logging, but only using print statements. I think it would be cool if plombery captured any stdout within a task and piped it to the logger somehow.
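Something along these lines is what I have in mind (LoggerWriter and run_task_capturing_stdout are hypothetical helpers, not plombery code):

```python
import contextlib
import io
import logging

class LoggerWriter(io.TextIOBase):
    """File-like object that forwards writes to a logger."""
    def __init__(self, logger: logging.Logger, level: int = logging.INFO):
        self.logger = logger
        self.level = level

    def write(self, message: str) -> int:
        # print() calls write() with the text and then again with "\n",
        # so skip whitespace-only writes
        stripped = message.strip()
        if stripped:
            self.logger.log(self.level, stripped)
        return len(message)

def run_task_capturing_stdout(task, logger: logging.Logger):
    # Any print() inside the task ends up in the logger instead of stdout
    with contextlib.redirect_stdout(LoggerWriter(logger)):
        task()
```

That would let unmodified third-party code with print statements show up in the run's logs.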
In the meantime, I've gone through and added a logger.info call wherever he had print statements, although some modifications were needed since print works differently. The tricky part was getting the logger passed down to his module.
logging.getLogger() was the solution, but in order to retrieve the correct logger I needed its name. Currently I'm just looking at all of the loggers in the logging module's logger dictionary and getting the newest one whose name contains my task's name. Then I pass that name down and call getLogger(plomLoggerName) inside his module so I can use it.
I think adding a getName function on your custom logging instance would be nice and make that a bit easier.
I'm interested to hear your thoughts on the async/sync stuff, since you've been trying to address that. I'm going to mess around with it on my end a bit and see what happens. If nothing helps, I'll likely open another issue shortly, as this job isn't running properly on the schedule even though a manual run works fine.
Based on this line in the error: /opt/python/3.11.4/lib/python3.11/logging/handlers.py:1500: RuntimeWarning: coroutine 'WebSocket.send_json' was never awaited
and the traceback, I did some looking around in the plombery source code.
You're probably aware of this, given that you built it, but it seems this is related to this old issue on the FastAPI side: https://github.com/tiangolo/fastapi/issues/98
Based on the code in the files below and reading through that issue (and others), I think you've done this correctly. I'm just not sure why issues still occur, other than that it probably has something to do with the fact that I'm running synchronous code, which might be blocking the event loop from running periodically.
I mostly wanted to post this comment as a summary of what I think is happening, but also to provide some context on my deployment. I am developing on Windows but deploying to a Linux Web App on Azure.
Startup Command: gunicorn -w 4 -k uvicorn.workers.UvicornWorker app:app
Folder structure:
Hey thanks for the detailed debug, I'll take a look asap.
Update: the creator of the script I'm running made some changes to fix some bugs in his logic, and I altered my startup command to fix another issue that I opened and closed here, and as a result of those actions this issue also seems to be gone. I am no longer seeing these messages in the logs.
New startup command: gunicorn --threads 4 -k uvicorn.workers.UvicornWorker app:app
Did you try running uvicorn directly, without going through gunicorn? I'm wondering if the WSGI nature of gunicorn, even though you're using the UvicornWorker, may be leading to your issues.
Anyway, happy that you solved the problem! In general, sync tasks should work in plombery, but apparently there is still something I didn't test properly. I'll try to improve the sync testing!
PS: I'm closing this issue since you solved it, though you can reopen it if you need to.
I didn't mess around with using uvicorn in production, mostly because the Azure Web App service documentation on this is centered on gunicorn and I'm not really familiar with either of them personally.
That said, I did use uvicorn while testing locally and experienced the same issues, just less frequently. And none of this was happening before I implemented the logging, so I think perhaps there's something weird with sync tasks and logging? Who knows. If anyone else experiences the same thing, feel free to @ me and we can compare notes and see if we can find the disconnect.
All good for now though, thank you!
@lucafaggianelli I'm getting the same issue since I started using get_logger():
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/logging/handlers.py", line 1498, in emit
self.enqueue(self.prepare(record))
File "/Users/morpheus/Desktop/bazarr_provider/venv/lib/python3.11/site-packages/plombery/logger/web_socket_handler.py", line 13, in enqueue
manager.emit(f"logs.{self.run_id}", record.message)
File "/Users/morpheus/Desktop/bazarr_provider/venv/lib/python3.11/site-packages/plombery/websocket/__init__.py", line 58, in emit
run_all_coroutines(all_coroutines)
File "/Users/morpheus/Desktop/bazarr_provider/venv/lib/python3.11/site-packages/plombery/utils.py", line 13, in run_all_coroutines
task = asyncio.create_task(coroutine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/python@3.11/3.11.6/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 381, in create_task
loop = events.get_running_loop()
^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: no running event loop
Anything I should look for?
Thanks by the way for that great piece of software you made!
Background
Hello! I'm working to implement plombery based on a request to run and monitor a Python script on a schedule. This particular script is designed to start at 7 AM UTC and run for 23 hours. I'm deploying the app to an Azure Web App service.
Because I'm importing another Python module that actually holds the logic for the task, I've had to get a bit creative to integrate logging with plombery. Simply passing the logger object to the function didn't work well, so I wrote a function to get the logger's name just after calling get_logger() from plombery. I then pass that name to the task and have it use logging.getLogger() to retrieve the logger. Unfortunately, it seems this still results in some bugs related to threading. I've included additional information below; please let me know if you need anything else!
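Roughly, the workaround looks like this (the names and module split here are illustrative; it assumes plombery's get_logger() returns a standard logging.Logger, so the same instance can be re-fetched from the global registry by name):

```python
import logging

# --- the imported module's side (illustrative) ---
def do_work(logger_name: str):
    # Re-fetch the same logger instance from the global logging registry
    log = logging.getLogger(logger_name)
    log.info("replaced a print() call")

# --- pipeline side (illustrative; not plombery's API) ---
def run_task():
    # In the real pipeline this logger comes from plombery's get_logger()
    logger = logging.getLogger("plombery.my_pipeline.run_42")
    # Pass only the NAME down, not the logger object itself
    do_work(logger.name)
```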
Issue
While the task is running, most of my logs come through correctly, but a significant number of logging errors also show up in the standard output. A full example of one:
Additional Issue - Maybe Related?
I saw this before updating to the latest version, but it seems it has returned and might be related. It looks like the error I'd expect if I were trying to get the logger outside of the task run, which I am not.
Side note: the dark mode log viewer here uses a light font color on a light background, so highlighting the text is necessary to read it. Love the addition of dark mode though!
Code
Web UI Screenshots