com-lihaoyi / cask

Cask: a Scala HTTP micro-framework. Cask makes it easy to set up a website, backend server, or REST API using Scala
https://com-lihaoyi.github.io/cask/

Need observability hook for "failed" routes. #87

Closed jsuereth closed 8 months ago

jsuereth commented 1 year ago

I've been trying to set up distributed tracing for Cask, and found that it lacks some of the hooks I need to set things up.

A few requests (or help in docs):

- Decorators that are able to know the http status code of the result
- A mechanism to register a decorator for all failed routes (for metrics + traces)
- Some (efficient) way to annotate the size of request/response bytes

If you're curious what we have, here's a simple library I threw together to give Cask the "best possible" OpenTelemetry HTTP experience: https://github.com/GoogleCloudPlatform/scala-o11y-cui-showcase/tree/main/utils/src/main/scala/com/google/example/o11y/cask

The meat of the implementation is all within the @traced decorator.
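For readers who haven't opened the link, the core idea of such a decorator can be sketched in a few lines. This is a minimal, self-contained illustration of wrapping a handler to time the call and annotate a "span" with the resulting status code; `Request`, `Response`, and `traced` here are hypothetical stand-ins, not the real cask or OpenTelemetry APIs:

```scala
// Stand-in types for illustration only (not cask's Request/Response).
case class Request(path: String)
case class Response(statusCode: Int, body: String)

// A tracing decorator: wrap the inner handler, time the call, and
// record the status code of whatever the inner handler returned.
def traced(inner: Request => Response): Request => Response = { req =>
  val start     = System.nanoTime()
  val resp      = inner(req)
  val elapsedMs = (System.nanoTime() - start) / 1e6
  println(s"span: path=${req.path} status=${resp.statusCode} durationMs=$elapsedMs")
  resp
}

val handler = traced { _ => Response(200, "ok") }
```

In the real library this shape is expressed as a `cask.RawDecorator`, but the wrapping structure is the same.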

lihaoyi commented 1 year ago

TBH this is something you will probably have to dig through the code and propose to us, rather than the other way around. The data model and code structure of Cask were not designed with distributed tracing in mind, so it likely does not include all the hooks you need out of the box, though they could probably be added.

Decorators that are able to know the http status code of the result

I suspect decorators can already know that, if the inner decorator provides one, but I'm not 100% sure off the top of my head. If not, it should be add-able.

A mechanism to register a decorator for all failed routes (for metrics + traces)

We have hooks to register what to do in case of failure, though I'm not sure if they're sufficiently flexible for what you need to do: https://github.com/com-lihaoyi/cask/blob/7392b3e25fbcfbbb506bae26b0b1170b5de00f21/cask/src/cask/main/Main.scala#L53-L61
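The linked lines define overridable failure handlers on the main server class. A self-contained sketch of that hook shape, with a stand-in trait standing in for `cask.main.Main` (the method names mirror the linked code; the trait itself is an assumption for illustration):

```scala
case class Response(statusCode: Int, body: String)

// Stand-in for the overridable failure hooks on cask.main.Main.
trait MainLike {
  def handleNotFound(): Response = Response(404, "Not Found")
  def handleEndpointError(path: String, e: Throwable): Response =
    Response(500, "Internal Server Error")
}

// An application could override the failure hook to emit a metric or
// trace event for every failed route, then delegate to the default.
object ObservedMain extends MainLike {
  override def handleEndpointError(path: String, e: Throwable): Response = {
    println(s"trace: failed route path=$path error=${e.getMessage}")
    super.handleEndpointError(path, e)
  }
}
```

Whether this gives enough context (headers, timing, trace IDs) for full OpenTelemetry spans is exactly the open question in this issue.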

Some (efficient) way to annotate the size of request/response bytes

Currently, Cask can only create responses, not requests. This is up to the individual decorators: e.g. by default we stream JSON responses in the getJSON and postJSON decorators, and so don't actually have a good count of how big they are up front. This could be added, e.g. maybe we can materialize responses in memory up to a configurable maximum size, allowing us to provide up-front sizes for small/medium responses while still streaming large responses.
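That buffering idea can be sketched concretely: consume the response stream into a buffer up to a cap, report a known size if the whole body fit, and otherwise fall back to streaming with the size unknown. The function name, the `maxBuffer` parameter, and the `Option[Int]` return shape are assumptions for illustration, not cask API:

```scala
import java.io.ByteArrayOutputStream

// Materialize a chunked response in memory up to maxBuffer bytes.
// Some(n): the whole body fit, so its size n is known up front.
// None:    the cap was exceeded; the caller should stream instead.
def measuredSize(chunks: Iterator[Array[Byte]], maxBuffer: Int): Option[Int] = {
  val buf = new ByteArrayOutputStream()
  while (chunks.hasNext && buf.size <= maxBuffer) buf.write(chunks.next())
  if (chunks.hasNext || buf.size > maxBuffer) None
  else Some(buf.size)
}
```

This gives accurate response-size annotations for small and medium bodies while keeping memory bounded for large ones.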