Open dgarnitz opened 9 months ago
Scope this out. We need this upgrade to solve this issue.

Make sure `job_id` and `batch_id` are tracked in the `api`, `worker`, and `extractor` so that whenever an error is logged, it is also saved to the database. Add a method to a utils file, `save_error()`, that does this, and use the util method in each file.

For a `batch_id`, return a JSON object with a field `errors` that is an array of stack traces.

For a `job_id`, return a JSON object with a field `errors` that is a dictionary with `batch_id` as the key and an array of stack traces as the value.

No need for Kibana or Prometheus, just store in the DB.
VectorFlow has many logs spread over different containers. We need these logs to be aggregated into a searchable form.
One option could be to use Kibana with Elasticsearch. If the logs have metrics in them, we may want to put those into Prometheus.