Open johnml1135 opened 4 months ago
By creating a panel in the Grafana dashboard, inspecting it, and then downloading a CSV, I can get the data back (at least 5000 records). This won't work - I'll need a set of API calls to make Loki viable.
Also, we could make a new table in MongoDB for logs that we could pull and analyze with Python (3b). We could make a TTL FIFO in MongoDB, persisting the records for 3 months: https://www.mongodb.com/docs/manual/tutorial/expire-data/#:~:text=TTL%20collections%20make%20it%20possible,MongoDB%20deployments%20in%20the%20cloud. This could give us a "complete enough" picture, letting us both monitor individual projects (if there has been no activity for 3 months, a project is inactive) and look at trends (how quickly are people asking for new books/chapters/retraining).
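A minimal sketch of the TTL idea, assuming pymongo and a hypothetical `serval.logs` collection; the collection, field names, and the 90-day window are all assumptions, not settled design:

```python
# Sketch: a TTL index on a hypothetical `logs` collection so MongoDB
# expires records automatically after ~3 months.
from datetime import datetime, timezone

THREE_MONTHS_SECONDS = 90 * 24 * 60 * 60  # TTL window: 90 days

def ensure_ttl_index(collection):
    # MongoDB's TTL monitor deletes documents whose `timestamp` is older
    # than expireAfterSeconds; the sweep runs roughly once a minute.
    collection.create_index("timestamp", expireAfterSeconds=THREE_MONTHS_SECONDS)

def log_event(collection, engine_id, operation):
    # One document per API call made against an engine.
    collection.insert_one({
        "timestamp": datetime.now(timezone.utc),
        "engine_id": engine_id,
        "operation": operation,
    })

# Wiring it up (not run here; connection details are placeholders):
#   from pymongo import MongoClient
#   logs = MongoClient("mongodb://localhost:27017")["serval"]["logs"]
#   ensure_ttl_index(logs)
#   log_event(logs, "engine-123", "translate")
```

Because expiration is index-driven, no cleanup job is needed; the collection naturally behaves like a 3-month FIFO.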
While Loki sounds nice (it has more storage and is a pre-fabbed solution), LogQL is much more restrictive than Python, and hitting the Loki API itself has challenges. If we are OK with only 3 months of logs and with daily or weekly analytics reports (which could be auto-run as a cron job) instead of live data, there should be no hard limitation with going this route.
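To make the cron-job idea concrete, here is a hedged sketch of what a weekly analytics pass over that MongoDB collection could look like: group log entries by engine, take each engine's newest timestamp, and flag engines quiet for 3+ months as inactive. The pipeline shape and field names (`engine_id`, `timestamp`) are assumptions carried over from the sketch above:

```python
# Hypothetical weekly report: last activity per engine, with engines
# silent for cutoff_days or more flagged as inactive.
from datetime import datetime, timedelta, timezone

def inactivity_pipeline():
    # MongoDB aggregation: newest log timestamp per engine, oldest first.
    return [
        {"$group": {"_id": "$engine_id", "last_seen": {"$max": "$timestamp"}}},
        {"$sort": {"last_seen": 1}},
    ]

def split_by_activity(rows, now=None, cutoff_days=90):
    # rows: [{"_id": engine_id, "last_seen": datetime}, ...] as returned
    # by collection.aggregate(inactivity_pipeline())
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=cutoff_days)
    active = [r["_id"] for r in rows if r["last_seen"] >= cutoff]
    inactive = [r["_id"] for r in rows if r["last_seen"] < cutoff]
    return active, inactive
```

The same grouped output could also feed trend reports (new books/chapters/retraining over time) by swapping the `$group` accumulator for counts per week.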
@ddaspit - do you have any thoughts?
It seems to me that tracing or logging is the right way to handle this.
Being able to see whether there has been any recent activity on a specific engine could be helpful for understanding usage, both for things like Acts2 and for our own internal teams monitoring usage. Here are a few ideas for how this might work and how it could be integrated into a larger monitoring system:
- Super simple
- Digested Data
- Running list of commands run against an engine
If we can make (3c) work, it would be the simplest option (in terms of data format and data handling) and the most robust. It all comes down to whether we can actually make it work...