perwendel / spark

A simple expressive web framework for Java. Spark has a Kotlin DSL: https://github.com/perwendel/spark-kotlin
Apache License 2.0

Support of event based (non-blocking) request processing. #549

Open TarlanT opened 8 years ago

TarlanT commented 8 years ago

Currently, for request processing, SparkJava relies entirely on Jetty's HTTP thread pool (by default 8 - 200 threads), which in its own turn is non-blocking on the networking end but blocking on the request-processing (business logic) side (Handlers/Filters/Matchers...). Any blocking operation (I/O-bound work, JDBC, etc.) has the potential to exhaust Jetty's HTTP thread pool. In that sense, Spark currently does not leverage Jetty's existing asynchronous Servlet 3.1 implementation. To increase Spark's performance potential, the framework needs to support event-based (non-blocking) processing backed by its own thread pool. This is easily achievable today with a combination of Servlet 3.1 and the Java 8 CompletableFuture API.
With this combination there is no need to integrate with higher-level frameworks such as Akka or Rx.

The following sample code can achieve the goal stated above:

/**
 * Simple Jetty Handler
 *
 * @author Per Wendel
 */
public class JettyHandler extends SessionHandler {

    // Hypothetical toggle between the current synchronous mode and the
    // proposed asynchronous mode.
    private static final boolean NOT_ASYNCH = false;

    private Filter filter;

    public JettyHandler(Filter filter) {
        this.filter = filter;
    }

    @Override
    public void doHandle(
            String target,
            Request baseRequest,
            HttpServletRequest request,
            HttpServletResponse response) throws IOException, ServletException {

        HttpRequestWrapper wrapper = new HttpRequestWrapper(request);

        if (NOT_ASYNCH) {
            filter.doFilter(wrapper, response, null);
            baseRequest.setHandled(!wrapper.notConsumed());
        } else {
            AsyncContext asyncContext = wrapper.startAsync();
            asyncContext.setTimeout(60000);
            // Hand the request off to Spark's own pool ('executor', defined
            // below), freeing the Jetty HTTP thread immediately.
            CompletableFuture.runAsync(() -> {
                try {
                    filter.doFilter(wrapper, response, null);
                } catch (IOException | ServletException ex) {
                    throw new RuntimeException(ex);
                }
            }, executor)
            .thenAccept(v -> {
                baseRequest.setHandled(!wrapper.notConsumed());
                asyncContext.complete();
            });
        }
    }
}

The executor above (Spark's own async executor) may look like this:

private static final ThreadPoolExecutor executor =
        new ThreadPoolExecutor(200, 200, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

static {
    executor.allowCoreThreadTimeOut(true);
}

If you run a benchmark with a significant number of simultaneously active client sockets and monitor the threads of the application with the change above, you'll see that Jetty creates several times fewer HTTP threads (you can identify them by the "qtp" prefix) than it typically does, and those created will alternate nicely between being busy and parked. Instead, Spark's own thread pool will be created, with all of its threads 100% busy under high load, which is exactly the goal of the event-based approach. And if one of Spark's own threads blocks, it will not degrade Jetty's performance.
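The "qtp" thread count can also be checked in-process during such a benchmark. Below is a minimal sketch; `countByPrefix` is a helper introduced here for illustration, not part of Spark or Jetty:

```java
public class ThreadInspector {

    // Count live threads whose names start with the given prefix.
    // Jetty's HTTP pool threads are named with a "qtp" prefix.
    static long countByPrefix(String prefix) {
        return Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith(prefix))
                .count();
    }

    public static void main(String[] args) {
        // In a plain JVM with no Jetty running this prints 0;
        // run it inside the benchmarked application to see real numbers.
        System.out.println("qtp threads: " + countByPrefix("qtp"));
        System.out.println("main threads: " + countByPrefix("main"));
    }
}
```

Alternatively, `jstack <pid>` piped through a grep for "qtp" gives the same information from outside the process.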

The idea is to remove the reliance on Jetty's HTTP thread pool, and not force Spark users to do the following:

public static void main(String[] args) {
    get("/benchmark", (request, response) -> {
        AsyncContext ac = request.raw().startAsync();
        CompletableFuture<Void> cf = CompletableFuture
                .supplyAsync(() -> getMeSomethingBlocking())
                .thenAccept(result -> ac.complete());
        return "ok";
    });
}
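The supplyAsync/thenAccept handoff in that route can be exercised outside Spark. Below is a minimal, self-contained sketch; `getMeSomethingBlocking` is a hypothetical stand-in for any blocking call (JDBC, file I/O, a remote HTTP request):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandoffDemo {

    // Hypothetical stand-in for a blocking operation.
    static String getMeSomethingBlocking() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "result";
    }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(4);

        // The blocking work runs on the dedicated pool; the calling thread
        // is free again as soon as supplyAsync returns.
        CompletableFuture.supplyAsync(AsyncHandoffDemo::getMeSomethingBlocking, executor)
                .thenAccept(result -> System.out.println("completed with: " + result))
                .join(); // block here only so the demo doesn't exit early

        executor.shutdown();
    }
}
```

In the route above, the Jetty thread plays the role of the "calling thread": it returns immediately, and `ac.complete()` fires later on the worker pool.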

(This comment was edited by @tipsy, to fix formatting issues)

buckelieg commented 1 year ago

Just to put in my 5 cents and create the illusion that this project is not dormant - if you need an async model, why not just use Vert.x? Presumably, if you are asking for async, then the entire app should follow the async model right from the start, so an appropriate framework should be used? On the other hand, it would be great to have some "sugar" that covers this functionality right in Spark, but it is not really necessary (from my point of view). I think there are much more "usable" features that could be implemented instead of this one (like server-sent events or HTTP 2/3 support, etc.)