sunng87 / ring-jetty9-adapter

An enhanced version of jetty adapter for ring, with additional features like websockets, http/2 and http/3
Eclipse Public License 1.0
273 stars · 48 forks

feat: Jetty 12 #106

Closed sunng87 closed 1 year ago

sunng87 commented 1 year ago

We are porting this adapter to Jetty 12 and removing the servlet stuff completely.

Fixes #105

jasonjckn commented 1 year ago

Niceee work!

sunng87 commented 1 year ago

I'm running into trouble with websockets while doing the upgrade. I doubt it's possible to implement our current websocket APIs without the ee10 package. We mainly rely on ee10's websocket container upgrade to upgrade the request in place. The core package's websocket API doesn't offer this ability, so we might need to rebuild the container APIs in our project, which does not sound like a maintainable approach.

sunng87 commented 1 year ago

@jasonjckn Let me know if you are willing to join the work, I can open write permission for you.

jasonjckn commented 1 year ago

> @jasonjckn Let me know if you are willing to join the work, I can open write permission for you.

Is there non-websocket work? I may have bandwidth for that. I've zero experience with websockets, and don't really find them that interesting since HTTP/2 & 3 satisfy all my streaming needs.

My self-interested desires right now (for using this at work) are quite modest: since our webserver is behind an istio/envoy proxy, I just need HTTP/2 and vthreads basically, not even TLS.

sunng87 commented 1 year ago

I just sent my questions to upstream https://github.com/eclipse/jetty.project/issues/10353 and it seems possible to port the websocket to jetty-core.

sunng87 commented 1 year ago

@jasonjckn It makes sense. I think there is no roadblock to getting the sync handler working if I temporarily disable websocket. The remaining work is:

  • [ ] websocket over jetty-core

  • [ ] async ring handler

  • [ ] getting client certificates from jetty-core's request

jasonjckn commented 1 year ago

> @jasonjckn it makes sense. I think there is no roadblock for getting sync handler if I temporarily disable websocket. There are remaining work for:
>
>   • [ ] websocket over jetty-core
>
>   • [ ] async ring handler
>
>   • [ ] getting client certificates from jetty-core's request

+1 disable websockets

--

If I have time I'll take a look at items 2 and/or 3 (I assume 3 is mTLS).

My schedule is pretty nuts right now; I'll aim to set aside a few hours this weekend.

sunng87 commented 1 year ago

No problem. I will disable websocket by this weekend and then we can collaborate on this branch. You can simply send pull requests to feature/jetty-12.

sunng87 commented 1 year ago

I just disabled websocket and the async handler on this branch. The example `lein with-profile example-http run` now works on Jetty 12.

I will be working on verifying the new websocket API from a new branch this weekend. https://github.com/eclipse/jetty.project/pull/10354

sunng87 commented 1 year ago

The WebSocket API has been merged. But we will need to wait for the merge of https://github.com/eclipse/jetty.project/pull/10354 and the 12.0.1 release. At the moment, to build this branch we will need to build that Jetty branch locally.

sunng87 commented 1 year ago

Just reverted the websocket PR so we can work on other features on this branch. Websocket will be merged back once 12.0.1 is released from Jetty.

sunng87 commented 1 year ago

Jetty 12.0.1 has been released; I'm pulling in websocket and async.

sunng87 commented 1 year ago

@jasonjckn I have finished the upgrade. Could you please test whether Project Loom works with this version when you get time?

jasonjckn commented 1 year ago

amazing, yah i'll test right now

jasonjckn commented 1 year ago

@sunng87

Everything works on my end. Virtual threads are working. Also, all I had to do was bump the dependency version, very nice!

The -only- thing I noticed was that performance seemed a bit lower than v11.

WITH ;; info.sunng/ring-jetty9-adapter {:mvn/version "0.22.1"}

⚡ wrk --latency --timeout 1m -d 10s -c 1000 -t 1000 http://localhost:3000/api
Running 10s test @ http://localhost:3000/api
  1000 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    14.84ms   14.80ms 225.59ms   93.85%
    Req/Sec    78.65     38.18     6.55k    87.70%
  Latency Distribution
     50%   12.40ms
     75%   13.20ms
     90%   22.16ms
     99%   86.94ms
  783952 requests in 10.11s, 92.71MB read
Requests/sec:  77577.46
Transfer/sec:      9.17MB

WITH info.sunng/ring-jetty9-adapter {:mvn/version "0.30.0-SNAPSHOT"}

 wrk --latency --timeout 1m -d 10s -c 1000 -t 1000 http://localhost:3000/api
Running 10s test @ http://localhost:3000/api
  1000 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.43ms   10.33ms 193.54ms   89.67%
    Req/Sec    57.32     15.48     1.00k    88.15%
  Latency Distribution
     50%   17.40ms
     75%   17.74ms
     90%   20.59ms
     99%   67.97ms
  576627 requests in 10.11s, 78.09MB read
  Socket errors: connect 0, read 35, write 0, timeout 0
Requests/sec:  57060.34
Transfer/sec:      7.73MB

It's not even that significant, but it's something we can sort out later with some profiling, after a release if you want.
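For reference, the gap between the two wrk runs above is larger than it may look at first glance. A quick standalone calculation (my own illustration, using the Requests/sec figures reported above):

```java
public class RegressionCalc {
    public static void main(String[] args) {
        // Requests/sec from the two wrk runs above.
        double jetty11 = 77577.46; // 0.22.1 (Jetty 11)
        double jetty12 = 57060.34; // 0.30.0-SNAPSHOT (Jetty 12)
        double dropPct = (jetty11 - jetty12) / jetty11 * 100.0;
        System.out.printf("throughput drop: %.1f%%%n", dropPct);
    }
}
```

That works out to roughly a quarter of the throughput, which is why it is worth profiling.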

jasonjckn commented 1 year ago

code.zip if you want it

i'm going to spend 20 minutes profiling, see if I find anything obvious.

sunng87 commented 1 year ago

On my box it's 587k rps (0.30) vs 665k rps (0.22), with your reflection fix.

This is weird because, with the servlet API removed, there should be less code to execute. But we may need to test plain Jetty 11 vs Jetty 12 as a reference.

sunng87 commented 1 year ago

Did a profile based on your benchmark (with virtual threads turned off):

0.22 / Jetty 11: (profiler screenshot)

0.30 / Jetty 12: (profiler screenshot)

There is no obvious conclusion I can draw from these results. The difference seems to be in muuntaja.middleware/wrap-format-negotiate, which takes about 10 percentage points more in 0.30.

profile.zip

18-35 was based on 0.30, 18-36 for 0.22

sunng87 commented 1 year ago

Just benchmarked a plain setup of Jetty 11 and Jetty 12: https://gist.github.com/sunng87/c3439bd1dbafc50062c520ee529929a4

Jetty 12 should be at least 10% faster than 11, so the issue must be on the Clojure side.

joakime commented 1 year ago

Hi, Eclipse Jetty here. We are interested in your findings about performance. Don't be too quick to blame anything specific yet (e.g. Clojure). Let's dig into this together!

sunng87 commented 1 year ago

I think I've found the difference:

The response from the Jetty 12 test uses chunked encoding:

❯ curl -v http://localhost:3000/api
* processing: http://localhost:3000/api
*   Trying [::1]:3000...
* Connected to localhost (::1) port 3000
> GET /api HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/8.2.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Server: Jetty(12.0.1)
< Content-Type: application/json; charset=utf-8
< Transfer-Encoding: chunked
< 
* Connection #0 to host localhost left intact
{"status":"ok"}%

For Jetty 11 it was fixed-size:

❯ curl -v http://localhost:3000/api
* processing: http://localhost:3000/api
*   Trying [::1]:3000...
* Connected to localhost (::1) port 3000
> GET /api HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/8.2.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< Content-Type: application/json;charset=utf-8
< Content-Length: 15
< Server: Jetty(11.0.15)
< 
* Connection #0 to host localhost left intact
{"status":"ok"}%

The client taking more time to parse the chunked response is the key to the performance regression. Our original implementation uses OutputStream as well: https://github.com/sunng87/ring-jetty9-adapter/blob/master/src/ring/adapter/jetty9/servlet.clj#L65 . The body-to-stream implementation comes from ring and should be the same too: https://github.com/ring-clojure/ring/blob/master/ring-core/src/ring/core/protocols.clj#L20

Calling close on Jetty's output stream should produce a fixed-size response, but Ring already seems to be doing that correctly.
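To make the chunked-framing overhead concrete, here is a small standalone sketch (my own illustration, not code from the adapter or Jetty) of what the wire format adds around the 15-byte body seen in the curl output above:

```java
import java.nio.charset.StandardCharsets;

public class ChunkedDemo {
    // Frame a body as one HTTP/1.1 chunk plus the terminating zero-length chunk:
    // <hex size>\r\n<body>\r\n followed by 0\r\n\r\n.
    static String chunked(String body) {
        int len = body.getBytes(StandardCharsets.UTF_8).length;
        return Integer.toHexString(len) + "\r\n" + body + "\r\n" + "0\r\n\r\n";
    }

    public static void main(String[] args) {
        String body = "{\"status\":\"ok\"}"; // 15 bytes, as in the curl output above
        String framed = chunked(body);
        // 15 body bytes gain 10 framing bytes: "f\r\n", "\r\n", and "0\r\n\r\n"
        System.out.println(framed.length() - body.length() + " extra bytes to parse");
    }
}
```

The cost is less about the extra bytes themselves and more about the client having to scan for chunk boundaries instead of reading a known Content-Length in one pass.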

lorban commented 1 year ago

@sunng87 (Eclipse Jetty team here too) Sorry if that was unclear, but when you're using the Core API, Jetty 12 does not buffer the responses, so it cannot calculate and add the Content-Length response header for you, unlike servlets which always do buffering.

If you know the length of what you're going to write before writing it, you can manually add the Content-Length response header, and Jetty won't use chunked encoding anymore.

If you do not know that length, unfortunately, there currently is no way to perform the same kind of buffering that the servlets do with the Core API. There is an open issue about this: https://github.com/eclipse/jetty.project/issues/10476, so if you're in this case, please let us know and we'll prioritize its implementation.

Thanks!
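The buffering approach described above can be sketched without Jetty at all (the class and names below are my own illustration, not Jetty or Ring API): collect the whole body first, and the length is known before any header is flushed:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class BufferingSketch {
    public static void main(String[] args) throws IOException {
        // Instead of streaming straight to the connection, write the
        // response body into an in-memory buffer first.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        OutputStream body = buffer;
        body.write("{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8));
        body.close();

        // The size is now known, so Content-Length can be set up front
        // and chunked transfer encoding is no longer needed.
        System.out.println("Content-Length: " + buffer.size());
    }
}
```

This is essentially the trade the servlet layer makes automatically: buffer the body, gain a Content-Length, lose pure streaming.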

sunng87 commented 1 year ago

@lorban Thanks for the explanation. At the level of ring's abstraction, we can only know the length of the response if we implement a buffering mechanism. I think it would be nice to have default buffering in Jetty-Core, as it already works that way for servlets.

sunng87 commented 1 year ago

After adding BufferedResponseHandler as suggested by @lorban, my benchmark shows our Jetty 12 adapter is up to 10% faster than Jetty 11.

I think we are ready to merge this. The only remaining option to support is async-timeout, but I'm going to do it in the next patch.

@sbordet I saw there is an IdleTimeoutHandler in Jetty-Core, but its content is in a TODO state. Do you think it can be used to implement the same functionality as AsyncContext.setTimeout?

lorban commented 1 year ago

@sunng87 Great, I'm glad adding the buffering handler to generate the Content-Length header worked as expected!

Just one extra note about BufferedResponseHandler: it is a bit rough around the edges and could do with some internal improvements. But there is one thing in particular you have to pay attention to: it will buffer up to 2 GB per request by default!

Unfortunately, there is no straightforward way to change this setting (this is an improvement we must work on), but in the meantime you could wrap it in yet another handler to force a more reasonable buffer size. Here is how you could do that in Java:

import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Response;
import org.eclipse.jetty.server.handler.BufferedResponseHandler;
import org.eclipse.jetty.util.Callback;

Handler bufferSizeLimitingHandler = new Handler.Wrapper(new BufferedResponseHandler(syncHandler))
{
    @Override
    public boolean handle(Request request, Response response, Callback callback) throws Exception
    {
        // Limit the per-request buffer to 16 KiB instead of the 2 GB default.
        request.setAttribute(BufferedResponseHandler.BUFFER_SIZE_ATTRIBUTE_NAME, 16384);
        return super.handle(request, response, callback);
    }
};

sunng87 commented 1 year ago

@lorban Thanks for the information. I can add this in my Sync/AsyncHandler. It would be nice to be able to configure a default limit from its constructor.