Open bossie opened 1 year ago
We might consider a pragmatic approach and say: this synchronous request that we started a minute ago succeeded, so the logs of the last minute should contain these log entries.
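That pragmatic time-window check could be sketched like this (the `time` field name on the log entries is an assumption for illustration, not a confirmed part of the log format):

```python
from datetime import datetime, timedelta, timezone

def logs_in_window(entries, start, end):
    """Keep log entries whose timestamp falls inside [start, end].

    Assumes each entry carries an ISO 8601 timestamp in a "time" field
    (field name is an assumption for this sketch).
    """
    return [e for e in entries
            if start <= datetime.fromisoformat(e["time"]) <= end]

# Pragmatic check: the synchronous request ran in the last minute,
# so its log lines should show up in that window.
now = datetime.now(timezone.utc)
recent = logs_in_window(
    [{"time": (now - timedelta(seconds=30)).isoformat(), "message": "hello"}],
    start=now - timedelta(minutes=1),
    end=now,
)
```

An integration test could then assert that `recent` is non-empty after a successful request.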
Some other quick ideas:
How would you allow the client (programmer) to set the request header? Keep it in a global/per-connection variable, or are there better ways?
you can do both:
- a `default_headers` dict on the connection object for the "global" approach
- a `headers` arg (a dict) to `con.get()`, `con.post()`, ... for the per-request approach
Quite a bit of effort has been made to get centralized logging in place for all components that make up our OpenEO back-ends: in Python and Java, in Spark drivers and executors, in web app and batch job contexts, with IDs such as user ID, request ID and job ID to correlate them.
Recently, however, the logging infrastructure itself (Filebeat, Logstash, etc.) has been somewhat unreliable for reasons not entirely known, and logs were not visible in Kibana.
Maybe the current integration tests can be extended to make sure that our logging still behaves, both application-wise and infrastructure-wise.
For batch jobs, the /jobs/{job_id}/logs endpoint can be used.
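As an illustration, a minimal stdlib-only helper for that endpoint could look like this (the URL shape and the `logs`/`level` response fields follow the openEO API; the function names and the bearer-token handling are assumptions made for the sketch):

```python
import json
from urllib.request import Request, urlopen

def fetch_job_logs(root_url: str, job_id: str, token: str):
    """Fetch log entries for a batch job via GET /jobs/{job_id}/logs.

    Assumes the openEO API response layout: a JSON object with a "logs"
    array whose entries carry fields like "id", "level" and "message".
    """
    url = f"{root_url.rstrip('/')}/jobs/{job_id}/logs"
    req = Request(url, headers={"Authorization": f"Bearer {token}"})
    with urlopen(req) as resp:
        return json.load(resp)["logs"]

def error_logs(logs):
    # An integration test could assert this comes back empty
    # for a job that is supposed to have succeeded.
    return [entry for entry in logs if entry.get("level") == "error"]
```

Filtering on `level` like this is one way an integration test could check both that logging works end to end and that the job itself ran cleanly.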
For synchronous requests this is not so simple, because the request ID is only propagated to the client when the request fails, in which case it is returned in the error message. It is therefore not possible to look for the logs that pertain to a particular successful request. Even if we always returned this request ID in a response header, regardless of the outcome, a question remains: how would the openeo-python-client expose this request ID so we can fetch the corresponding logs?
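One conceivable answer, purely as a sketch: have the connection remember the request ID of the most recent response. Both the header name `X-Request-ID` and the `last_request_id` attribute are hypothetical here; neither is confirmed by the openEO API or the client:

```python
class TrackedConnection:
    """Sketch: remember the correlation ID of the last synchronous response.

    "X-Request-ID" is an assumed header name; the openEO API does not
    currently guarantee such a header on successful responses.
    """

    def __init__(self):
        self.last_request_id = None

    def _handle_response(self, status: int, headers: dict) -> int:
        # Called after every HTTP exchange; stash the correlation ID
        # so the caller can query centralized logs for it afterwards.
        self.last_request_id = headers.get("X-Request-ID")
        return status


con = TrackedConnection()
con._handle_response(200, {"X-Request-ID": "req-123"})
```

After a successful call, `con.last_request_id` would give a test (or a user) the handle it needs to look up the matching log entries.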