http4s / http4s-servlet

http4s-servlet support

Performance regression in 0.23.12 #157

Open bastewart opened 1 year ago

bastewart commented 1 year ago

We suspect there is a performance regression in v0.23.12. This was initially noticed after updating http4s-jetty to 0.23.12, which transitively brings in http4s-servlet 0.23.13.

We've observed a ~15-20% performance hit in a production service (increased CPU to handle the same load). Sorry, this is pretty unscientific in benchmarking terms, and we haven't yet isolated the cause to an exact commit. We are almost entirely certain about the release, though.

We've isolated this to the 0.23.12 release of this library by manually bumping the servlet version (and keeping all other dependencies constant): 0.23.11 appears fine, while 0.23.12 and 0.23.13 both show the reduced performance.
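(Concretely, "manually bumping" here means an sbt override along these lines; the coordinates and version below are illustrative rather than our exact build:)

// build.sbt: keep http4s-jetty and everything else fixed, but force the servlet module's version
dependencyOverrides += "org.http4s" %% "http4s-servlet" % "0.23.12"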

In addition, it does not appear that http4s-jetty (or Jetty itself) is the cause. Running http4s-jetty 0.23.12 alongside servlet 0.23.13 shows the same performance as http4s-jetty 0.23.11 with servlet 0.23.13.

@TimWSpence is currently looking to isolate the commit!

rossabaker commented 1 year ago

It would make me very happy to find out it's not https://github.com/http4s/http4s-servlet/pull/50, but I'd look there first.

bastewart commented 1 year ago

That's where our suspicion fell internally as well! 😬

rossabaker commented 1 year ago

Make sure it's cats-effect-3.4. Transitively, it should be.

We could play with the type of queue. I didn't rigorously performance test, and apparently should have. I think this implementation is much nicer, so I'd hate to have to revert, but we could if we can't fix this.

armanbilge commented 1 year ago

I was chatting with @TimWSpence about this; the current suspicion is that a large number of fiber allocations are at play.

If you are using a "parallel" Dispatcher, then each of these unsafeRunAndForget calls will start a new fiber. https://github.com/http4s/http4s-servlet/blob/cdff010506e9c39ddd70d1f7612bf305958c00c2/servlet/src/main/scala/org/http4s/servlet/ServletIo.scala#L133-L142

However, if you create a "sequential" Dispatcher, then all those effects will be run sequentially on the same fiber. In this case you would want the Dispatcher to be local to the request.
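Roughly, the difference looks like this (a standalone sketch against cats-effect 3.4, not the actual ServletIo code; the names here are made up):

import cats.effect.{IO, IOApp}
import cats.effect.std.Dispatcher

object DispatcherSketch extends IOApp.Simple {

  // Mimics a callback-driven loop (like the servlet read/write listeners)
  // firing several effects without waiting for them.
  def submitChunks(d: Dispatcher[IO], label: String): IO[Unit] =
    IO.delay((1 to 3).foreach(i => d.unsafeRunAndForget(IO.println(s"$label chunk $i"))))

  def run: IO[Unit] =
    // parallel: each unsafeRunAndForget spawns a fresh fiber
    Dispatcher.parallel[IO](await = true).use(submitChunks(_, "parallel")) *>
      // sequential: submitted effects run one after another on a single fiber
      Dispatcher.sequential[IO](await = true).use(submitChunks(_, "sequential"))
}

The sequential flavour avoids a fiber allocation per submitted effect, which is exactly the per-chunk cost suspected here.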

bastewart commented 1 year ago

Also just going to confirm here (for the record) that @TimWSpence did isolate this to #50. He confirmed it on Discord though, hence no message here; everyone in this thread is aware (I think!).

rossabaker commented 1 year ago

A request-scoped, sequential dispatcher sounds reasonable to me. I took a quick look, and it's going to be a moderate pain to fix in a binary-compatible fashion, but I think it can be done.

rossabaker commented 1 year ago

It's late and I'm getting stupid, but, uh, who dispatches the dispatcher?

  override def service(
      servletRequest: HttpServletRequest,
      servletResponse: HttpServletResponse,
  ): Unit =
    Dispatcher.sequential[F].use { dispatcher =>
       ???
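       // the whole `use { ... }` expression is an F[Unit]; nothing in this
       // synchronous `service` method actually runs it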
    }

I think the servlet needs a parallel one, and each request can get a local, sequential one?

armanbilge commented 1 year ago

Heh, that sounds about right. Each request would get its own fiber (from the parallel dispatcher) on which it starts a single-fiber sequential dispatcher.
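Very roughly, the shape would be something like this (a sketch only, with made-up names, not the actual http4s-servlet code):

import cats.effect.Async
import cats.effect.std.Dispatcher
import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

// Sketch of the two-level idea: a servlet-wide parallel dispatcher forks one
// fiber per request, and that fiber opens its own request-scoped sequential
// dispatcher for the non-blocking read/write callbacks.
abstract class TwoDispatcherServlet[F[_]](servletDispatcher: Dispatcher[F])(implicit F: Async[F])
    extends HttpServlet {

  // hypothetical hook: whatever actually services the request, handing the
  // request-scoped dispatcher to its ReadListener/WriteListener callbacks
  def handle(
      req: HttpServletRequest,
      res: HttpServletResponse,
      requestDispatcher: Dispatcher[F],
  ): F[Unit]

  override def service(req: HttpServletRequest, res: HttpServletResponse): Unit =
    // the parallel dispatcher is what "dispatches the dispatcher": it forks the
    // fiber on which the request-scoped sequential dispatcher lives and dies
    servletDispatcher.unsafeRunAndForget(
      Dispatcher.sequential[F].use(requestDispatcher => handle(req, res, requestDispatcher))
    )
}

The per-chunk callbacks then reuse the request's single fiber instead of paying for a new one each time, which is the allocation cost suspected above.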

TimWSpence commented 1 year ago

> It would make me very happy to find out it's not #50, but I'd look there first.

Sorry Ross 😛 I just opened https://github.com/http4s/http4s-servlet/pull/171. Hope both the description and the changes make sense!