Closed: ololobus closed this issue 1 year ago.
I agree that we should use a proper logging library in proxy!
This is probably the wrong place to say the following, but here goes... I'd argue that we shouldn't reuse zenith_utils here, for two reasons:
I don't like the way zenith_utils is organized at the moment. It aims to be a custom prelude, but such a library shouldn't evolve chaotically, otherwise it will become a poorly organized junkyard. To make a good prelude (or rather, a custom stdlib) you have to raise the bar above all else: not every convenient piece should make it in. We should probably split it into specialized crates; this would also improve compile times.
By putting all the independent pieces of code together, we create lots of false interdependencies between projects. This makes prototyping and refinement rather difficult, because you can no longer change something you don't like in just one project. Here are some dubious examples:
> I don't like the way zenith_utils is organized at the moment
Yeah, +1. I've tried to use it in zenith_ctl, but it didn't allow me to log only to stdout (as far as I could tell), and logging into a file doesn't make much sense for the Docker entry point.
Not sure which further steps should be taken to make it more reusable across sub-projects, though.
BTW, in the console we are thinking about passing a request id between components (zenithdb/console#754). It'll provide some cheap tracing, so one can grep logs and track a particular request across several cloud components.
It's already under review in the console. Could you please support the X-Request-ID header and log it if it is set? Also, I propose we discuss what value to generate when it is missing. I went with uuidv4 for now.
> Could you please support the X-Request-ID header and log it if it is set?
@agalitsyn Just to make sure I understood correctly: do you mean that when the proxy makes a request to the console, we should extract X-Request-ID from the HTTP response and immediately print it to the log (just once)?
We have HTTP middleware that extracts the X-Request-ID header from the request and puts the value into the context (think per-request memory storage), or generates a value if the header was empty. Since all HTTP requests are logged, we log the context value as req_id: <value>.
Example:

```
2022-03-14T17:51:02.315+0300 INFO /assets/js/sign_in-5d85097225d5861434bc.js {"status": 200, "method": "GET", "path": "/assets/js/sign_in-5d85097225d5861434bc.js", "query": "", "ip": "::1", "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.109 Safari/537.36", "latency": 0.0117985, "time": "2022-03-14T14:51:02Z", "req_id": "a2283a45-575e-4607-a281-0eddf91e0aa1"}
```
Client call:

```shell
curl 'https://console.stage.zenith.tech/api/v1/clusters' \
  -H 'X-Request-ID: a2283a45-575e-4607-a281-0eddf91e0aa1'
```
Now we can grep console logs for that request id, or use Loki QL in Grafana.
The next step is to implement the same middleware in all HTTP services. After that, we will be able to use the request id to search across multiple services.
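The reuse-or-generate rule at the heart of that middleware can be sketched in a few lines of std-only Rust. This is an illustration, not the console's actual code; the function name and the injected generator (uuid4 in the real middleware) are assumptions:

```rust
/// Reuse the incoming X-Request-ID if it carries a non-empty value;
/// otherwise mint a fresh id with the supplied generator
/// (the real middleware would generate a uuid4 here).
fn resolve_request_id(header: Option<&str>, generate: impl Fn() -> String) -> String {
    match header {
        Some(id) if !id.trim().is_empty() => id.trim().to_string(),
        _ => generate(),
    }
}

fn main() {
    // Header present: keep the caller's id so logs correlate across services.
    let kept = resolve_request_id(
        Some("a2283a45-575e-4607-a281-0eddf91e0aa1"),
        || "generated-id".to_string(),
    );
    // Header absent: fall back to a generated id.
    let minted = resolve_request_id(None, || "generated-id".to_string());
    println!("{} {}", kept, minted);
}
```

Propagating the resolved id unchanged (rather than re-generating it per service) is what makes a single grep work across multiple components.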
After a quick private discussion with @agalitsyn, I came up with the following proposal: we should probably generate a uuid4 for each client connection to the proxy and set it as context for a logging library, then use this same value for all requests to the console. Fortunately, we only need to make a couple of requests (for auth and wakeup), which logically belong to this connection, so I don't see any problem with that. Furthermore, we could also support a custom connstring parameter for id customization (I can see how providing your own value would facilitate online debugging: you wouldn't need to grep logs by wall-clock time to find out which uuid had been assigned to you).
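The override-or-generate part of this proposal could look roughly like the sketch below. Note that "session_id" is a hypothetical parameter name (no name was agreed on in this thread), and the generator closure stands in for a real uuid4 source:

```rust
use std::collections::HashMap;

/// Pick the id for a new proxy connection: honor a caller-supplied
/// connstring parameter when present, otherwise generate one
/// (uuid4 in the proposal). "session_id" is an illustrative name.
fn choose_session_id(params: &HashMap<String, String>, generate: impl Fn() -> String) -> String {
    params
        .get("session_id")
        .filter(|s| !s.is_empty())
        .cloned()
        .unwrap_or_else(generate)
}

fn main() {
    let mut params = HashMap::new();
    // No override: the proxy assigns an id for the connection.
    let auto = choose_session_id(&params, || "generated-uuid4".to_string());
    // Override supplied in the connstring: handy for online debugging,
    // since you already know which id to grep for.
    params.insert("session_id".to_string(), "debug-me-123".to_string());
    let custom = choose_session_id(&params, || "generated-uuid4".to_string());
    println!("{} {}", auto, custom);
}
```

Whichever value wins would then be attached to the logging context for the connection and forwarded on the auth and wakeup requests to the console.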
Fixed in #2554
Currently the proxy just throws stuff into stdout, without a timestamp or message level.
It's hard to debug and inspect what is happening from the logs. I propose we use at least env_logger. We could probably make use of zenith_utils::logging here. cc @funbringer
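To illustrate what's missing, here is a std-only sketch of the kind of prefix a logging library such as env_logger adds to every record: a timestamp and a level. The exact format here is made up (env_logger's real output uses RFC 3339 timestamps and is configurable):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Prefix a message with a Unix timestamp and a level, roughly the
/// shape a proper logger gives for free. Format is illustrative only.
fn format_log_line(level: &str, msg: &str) -> String {
    let secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_secs())
        .unwrap_or(0);
    format!("[{} {}] {}", secs, level, msg)
}

fn main() {
    // Compare with the bare lines the proxy prints today: with a
    // timestamp and level, logs become greppable and sortable.
    println!("{}", format_log_line("INFO", "accepted client connection"));
}
```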