Riposte is a Netty-based microservice framework for rapid development of production-ready HTTP APIs. It includes robust built-in features like distributed tracing (provided by the Zipkin-compatible Wingtips), error handling and validation (pluggable implementations, with the default provided by Backstopper), and circuit breaking (provided by Fastbreak). It works equally well as a fully-featured microservice by itself (see the template microservice project) or as an embedded HTTP server inside another application.
Java 8 is required.
Please see the template microservice project for the recommended starter template project AND usage documentation. The template project is a production-ready microservice with a number of bells and whistles, and its README.md contains in-depth usage information; it should be consulted first when learning how to use Riposte.
That said, the following class is a simple Java application containing a fully-functioning Riposte server. It represents the minimal code necessary to run a Riposte server. You can hit this server at `http://localhost:8080/hello` and it will respond with a `text/plain` payload of `Hello, world!`.
```java
import com.nike.riposte.server.Server;
import com.nike.riposte.server.config.ServerConfig;
import com.nike.riposte.server.http.Endpoint;
import com.nike.riposte.server.http.RequestInfo;
import com.nike.riposte.server.http.ResponseInfo;
import com.nike.riposte.server.http.StandardEndpoint;
import com.nike.riposte.util.Matcher;
import io.netty.channel.ChannelHandlerContext;

import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

public class MyAppMain {

    public static void main(String[] args) throws Exception {
        Server server = new Server(new AppServerConfig());
        server.startup();
    }

    public static class AppServerConfig implements ServerConfig {
        private final Collection<Endpoint<?>> endpoints = Collections.singleton(new HelloWorldEndpoint());

        @Override
        public Collection<Endpoint<?>> appEndpoints() {
            return endpoints;
        }
    }

    public static class HelloWorldEndpoint extends StandardEndpoint<Void, String> {
        @Override
        public Matcher requestMatcher() {
            return Matcher.match("/hello");
        }

        @Override
        public CompletableFuture<ResponseInfo<String>> execute(RequestInfo<Void> request,
                                                               Executor longRunningTaskExecutor,
                                                               ChannelHandlerContext ctx) {
            return CompletableFuture.completedFuture(
                ResponseInfo.newBuilder("Hello, world!")
                            .withDesiredContentWriterMimeType("text/plain")
                            .build()
            );
        }
    }
}
```
The Hello World Sample is similar to this but contains a few more niceties, and that sample's README.md includes information on some of the features you can expect from Riposte. But again, please see the template microservice project for the recommended starter template project and usage documentation.
Since Riposte is straight Java 8 with no bytecode manipulation, plugins, or other magic required, it works seamlessly with whatever JVM language you prefer. Here's the same hello world app from above, but this time in Kotlin:
```kotlin
fun main(args: Array<String>) {
    val server = Server(AppServerConfig)
    server.startup()
}

object AppServerConfig : ServerConfig {
    private val endpoints = Collections.singleton(HelloWorldEndpoint)

    override fun appEndpoints(): Collection<Endpoint<*>> {
        return endpoints
    }
}

object HelloWorldEndpoint : StandardEndpoint<Void, String>() {
    override fun requestMatcher(): Matcher {
        return Matcher.match("/hello")
    }

    override fun execute(request: RequestInfo<Void>,
                         longRunningTaskExecutor: Executor,
                         ctx: ChannelHandlerContext): CompletableFuture<ResponseInfo<String>> {
        return CompletableFuture.completedFuture(
            ResponseInfo.newBuilder("Hello, world!")
                .withDesiredContentWriterMimeType("text/plain")
                .build()
        )
    }
}
```
And again in Scala:
```scala
object Main extends App {
  val server = new Server(AppServerConfig)
  server.startup()
}

object AppServerConfig extends ServerConfig {
  val endpoints: java.util.Collection[Endpoint[_]] = java.util.Collections.singleton(HelloWorldEndpoint)

  override def appEndpoints(): java.util.Collection[Endpoint[_]] = endpoints
}

object HelloWorldEndpoint extends StandardEndpoint[Void, String] {
  override def requestMatcher(): Matcher = Matcher.`match`("/hello")

  override def execute(request: RequestInfo[Void],
                       longRunningTaskExecutor: Executor,
                       ctx: ChannelHandlerContext): CompletableFuture[ResponseInfo[String]] = {
    CompletableFuture.completedFuture(
      ResponseInfo.newBuilder("Hello, world!")
                  .withDesiredContentWriterMimeType("text/plain")
                  .build()
    )
  }
}
```
It's been mentioned already, but it bears repeating: Please see the template microservice project for the recommended starter template project AND usage documentation. The template project is a production-ready microservice with a number of bells and whistles and the template project's README.md contains in-depth usage information and should be consulted first when learning how to use Riposte. The rest of the documentation below in this readme will be focused on the Riposte core libraries.
Riposte is a collection of several libraries, mainly divided up based on dependencies. Note that only `riposte-spi` and `riposte-core` are required for a functioning Riposte server. Everything else is optional, but potentially useful depending on the needs of your application:

- `riposte-spi` - Contains the core interfaces and classes that define the Riposte API.
- `riposte-core` - Builds on `riposte-spi` to provide a fully functioning Riposte server.
- `riposte-async-http-client2` - Contains `AsyncHttpClientHelper`, an HTTP client for performing async nonblocking calls using `CompletableFuture`s with distributed tracing baked in. This is a wrapper around the Async Http Client libraries.
- `riposte-async-http-client` - Contains the deprecated previous version of `AsyncHttpClientHelper`, built around the Ning AsyncHttpClient (which eventually became the new Async Http Client project that `riposte-async-http-client2` is based on).
- `riposte-metrics-codahale` - Contains metrics support using the `io.dropwizard` version of Codahale metrics.
- `riposte-metrics-codahale-signalfx` - Contains SignalFx-specific extensions of the `riposte-metrics-codahale` library module.
- `riposte-auth` - Contains implementations of the Riposte `RequestSecurityValidator`, e.g. for basic auth and other security schemes.
- `riposte-servlet-api-adapter` - Contains `HttpServletRequest` and `HttpServletResponse` adapters for reusing Servlet-based utilities in Riposte.

These libraries are all deployed to Maven Central and can be pulled into your project by referencing the relevant dependency: `com.nike.riposte:[riposte-lib-artifact-name]:[version]`.
Full documentation on the Riposte libraries will be coming eventually. In the meantime the javadocs for Riposte classes are fairly fleshed out and give good guidance, and the template microservice project is a reasonable user guide. Here are some important classes to get you started:

- `com.nike.riposte.server.Server` - The Riposte server class. Binds to a port and listens for incoming HTTP requests. Uses `ServerConfig` for all configuration purposes.
- `com.nike.riposte.server.config.ServerConfig` - Responsible for configuring a Riposte server. There are lots of options, and the javadocs explain what everything does along with recommended usage.
- `com.nike.riposte.server.http.StandardEndpoint` - A "typical" endpoint where you receive the full request and provide a full response. The javadocs in `StandardEndpoint`'s class hierarchy (`com.nike.riposte.server.http.NonblockingEndpoint` and `com.nike.riposte.server.http.Endpoint`) are worth reading as well for usage guidelines and to see what endpoint options are available.
- `com.nike.riposte.server.http.ProxyRouterEndpoint` - A "proxy" or "router" style endpoint where you control the "first chunk" of the downstream request (downstream host, port, headers, path, query params, etc.) and the payload is streamed to the destination immediately as chunks come in from the caller. The response is similarly streamed back to the caller immediately as chunks come back from the downstream server. This is incredibly efficient and fast, allowing you to provide proxy/routing capabilities on tiny servers without any fear of large payloads causing OOM, with the whole of Java at your fingertips for implementing complex routing logic, all while enjoying sub-millisecond lag times added by the Riposte server.

To give you an idea of how Riposte performs, we did some comparisons against a few popular, well-known stacks in a handful of scenarios. These tests show what you can expect from each stack under normal circumstances - excessive tuning was not performed, just some basic configuration to get everything on equal footing (part of Riposte's philosophy is that you should get excellent performance without a lot of hassle).
See the test environment and setup notes section for more detailed information on how the performance testing was conducted.
This test measures the simplest "hello world" type API, with a single endpoint that immediately returns a 200 HTTP response with a static string for the response payload.
NOTE: Spring Boot was using the Undertow embedded container for maximum performance in these tests. The default Tomcat container was significantly worse than the numbers shown here.
Concurrent Call Spammers | Stack | Realized Requests Per Second | Avg latency (millis) | 50% latency (millis) | 90% latency (millis) | 99% latency (millis) | CPU Usage |
---|---|---|---|---|---|---|---|
1 | Riposte | 7532 | 0.138 | 0.131 | 0.135 | 0.148 | 36% |
1 | Spring Boot | 4868 | 0.220 | 0.205 | 0.220 | 0.305 | 34% |
3 | Riposte | 18640 | 0.176 | 0.154 | 0.187 | 0.246 | 92% |
3 | Spring Boot | 11888 | 0.307 | 0.238 | 0.284 | 0.980 | 75% |
5 | Riposte | 22038 | 0.269 | 0.222 | 0.286 | 1.13 | 99%+ |
5 | Spring Boot | 12930 | 0.775 | 0.358 | 1.39 | 8.25 | 84% |
10 | Riposte | 23136 | 1.08 | 0.251 | 3.18 | 8.32 | 99%+ |
10 | Spring Boot | 13862 | 2.26 | 0.552 | 6.61 | 24.24 | 92% |
15 | Riposte | 23605 | 1.38 | 0.429 | 4.39 | 9.56 | 99%+ |
15 | Spring Boot | 14062 | 3.26 | 0.817 | 9.14 | 35.51 | 93% |
This test measures how well the stacks perform when executing asynchronous nonblocking tasks. For a real service this might mean using a NIO client for database or HTTP calls (e.g. Riposte's `AsyncHttpClientHelper`) to do the vast majority of the endpoint's work, where the endpoint is just waiting for data from an outside process (and therefore NIO allows us to wait without using a blocking thread).
For these tests we simulate that scenario by returning `CompletableFuture`s that are completed with a "hello world" payload after a 130 millisecond delay using a scheduler. In the case of Riposte we can reuse the built-in Netty scheduler via `ctx.executor().schedule(...)` calls, and for Spring Boot we reuse a scheduler created via `Executors.newScheduledThreadPool(Runtime.getRuntime().availableProcessors() * 2)` to match the Netty scheduler as closely as possible. In both cases the thread count on the application remains small and stable even when handling thousands of concurrent requests.
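The scheduler-plus-future pattern described above can be sketched with plain JDK classes (this is a standalone illustration, not Riposte code; the class and method names here are hypothetical): a small, fixed-size scheduler completes a `CompletableFuture` after a delay, so no caller thread blocks while waiting.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedResponseSketch {
    // Small fixed-size scheduler shared by all requests, mirroring the
    // Executors.newScheduledThreadPool(cores * 2) setup used for the Spring Boot tests.
    static final ScheduledExecutorService SCHEDULER =
        Executors.newScheduledThreadPool(Runtime.getRuntime().availableProcessors() * 2);

    // Returns immediately; the scheduler completes the future after the delay,
    // so no thread is blocked during the wait.
    static CompletableFuture<String> delayedHello(long delayMillis) {
        CompletableFuture<String> future = new CompletableFuture<>();
        SCHEDULER.schedule(() -> { future.complete("Hello, world!"); },
                           delayMillis, TimeUnit.MILLISECONDS);
        return future;
    }

    public static void main(String[] args) {
        System.out.println(delayedHello(130).join()); // completes ~130 ms later
        SCHEDULER.shutdown();
    }
}
```

The same shape works for any NIO-driven wait: the endpoint hands back a future immediately, and whichever thread has the data later completes it.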
NOTE: Spring Boot was using the Undertow embedded container for maximum performance in these tests. The default Tomcat container was significantly worse than the numbers shown here.
Each call in the tests below has a 130 millisecond scheduled delay before being completed and returned to the spammer, so 130 millis is the ideal latency.
Concurrent Call Spammers | Stack | Realized Requests Per Second | Avg latency (millis) | 90% latency (millis) | 95% latency (millis) | CPU Usage |
---|---|---|---|---|---|---|
700 | Riposte | 5356 | 130 | 131 | 131 | 29% |
700 | Spring Boot | 5206 | 134 | 140 | 143 | 64% |
1400 | Riposte | 10660 | 131 | 132 | 134 | 57% |
1400 | Spring Boot | 8449 | 165 | 181 | 188 | 97% |
2100 | Riposte | 15799 | 132 | 136 | 139 | 80% |
2100 | Spring Boot | 8489 | 247 | 267 | 274 | 99% |
2800 | Riposte | 20084 | 138 | 149 | 157 | 94% |
2800 (Not Attempted) | Spring Boot | N/A | N/A | N/A | N/A | N/A |
3500 | Riposte | 21697 | 160 | 187 | 198 | 99% |
3500 (Not Attempted) | Spring Boot | N/A | N/A | N/A | N/A | N/A |
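As a sanity check on the table above, Little's law (throughput ≈ concurrency / latency) gives the theoretical throughput ceiling for each row; this quick sketch (plain arithmetic, not from the Riposte docs) shows the measured numbers sit just under those ceilings:

```java
public class LittlesLawCheck {
    // Little's law: concurrency = throughput * latency, so N callers each
    // waiting latencySeconds can drive at most N / latencySeconds requests/sec.
    static long maxRps(int concurrentCallers, double latencySeconds) {
        return Math.round(concurrentCallers / latencySeconds);
    }

    public static void main(String[] args) {
        // 700 spammers at the ideal 130 ms: ceiling ~5385 RPS (Riposte measured 5356).
        System.out.println(maxRps(700, 0.130));
        // 2800 spammers at 130 ms: ceiling ~21538 RPS (Riposte measured 20084 at 138 ms avg).
        System.out.println(maxRps(2800, 0.130));
    }
}
```

When a stack's realized RPS falls well below this ceiling (as in the Spring Boot rows at higher concurrency), the extra latency is queueing inside the stack rather than the scheduled 130 ms delay.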
One of Riposte's endpoint types (`ProxyRouterEndpoint`) is for proxy and/or routing use cases where you can adjust request and/or response headers and determine the destination of the call, but otherwise leave the payload alone. This allows Riposte to stream chunks to and from the destination and caller as they become available rather than waiting for the entire request/response to enter memory. The end result is a system that lets you use Riposte as a proxy or router on very low-end hardware and still get excellent performance; payload size essentially doesn't matter - e.g. you can act as a proxy/router piping gigabyte payloads between systems on a box that only has a few hundred megabytes of RAM allocated to Riposte. It also doesn't matter if the downstream service takes 5 seconds or 5 milliseconds to respond, since Riposte uses nonblocking I/O under the hood and you won't end up with an explosion of threads.
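Riposte's actual streaming happens inside Netty, but the constant-memory idea is easy to illustrate with plain JDK streams (this sketch is not Riposte code; the class and method names are hypothetical): a fixed-size buffer relays each chunk as it arrives, so memory use is bounded by the buffer, not the payload.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkRelaySketch {
    // Relays an arbitrarily large payload through a fixed 8 KB buffer.
    // Memory usage is bounded by the buffer size, not the payload size -
    // the same principle that lets a proxy/router pipe gigabyte payloads
    // on a small heap.
    static long relay(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read); // forward each chunk as soon as it arrives
            total += read;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[1_000_000]; // 1 MB "request body"
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        System.out.println(relay(new ByteArrayInputStream(payload), sink)); // 1000000 bytes relayed
    }
}
```

Netty's nonblocking I/O adds the second half of the story: the relay loop above would block a thread per connection, whereas chunk events in Riposte are dispatched on a small, fixed set of event-loop threads.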
These kinds of robust proxy/routing features are not normally available in Java microservice stacks, so to provide a performance comparison we put Riposte up against industry-leading NGINX. Riposte does not win the raw performance crown vs. NGINX (it's unlikely any Java-based solution could); however, Riposte stays competitive, and writing `ProxyRouterEndpoint` endpoints in Riposte is as simple as writing `StandardEndpoint` endpoints.

Each call in the tests below is proxied through the stack to a backend service that has a 130 millisecond scheduled delay before responding, so 130 millis is the ideal latency.
Concurrent Call Spammers | Stack | Realized Requests Per Second | Avg latency (millis) | 90% latency (millis) | 95% latency (millis) | CPU Usage |
---|---|---|---|---|---|---|
140 | Riposte | 1068 | 131 | 132 | 132 | 18% |
140 | NGINX | 1070 | 130 | 131 | 131 | 6% |
700 | Riposte | 5256 | 133 | 135 | 139 | 77% |
700 | NGINX | 5330 | 131 | 132 | 132 | 21% |
1050 † | Riposte | 7530 | 139 | 150 | 156 | 95% |
1050 (Not Attempted) | NGINX | N/A | N/A | N/A | N/A | N/A |
2240 (Not Attempted) | Riposte | N/A | N/A | N/A | N/A | N/A |
2240 †† | NGINX | 15985 | 139 | 138 | 140 | 57% |
† - Riposte maxed out on these tests at about 7500 RPS. The bottleneck was CPU usage. NGINX wasn't tested at this throughput, but would have performed very well given its max.
†† - NGINX maxed out on these tests at about 16000 RPS. The bottleneck was not CPU usage, but something else. Increasing concurrent spammers simply caused larger and larger bursts of multi-second response times - even at 2240 concurrent spammers there was a small number of outliers which caused the average to jump above the 90th percentile. Throughput could not be pushed above 16000 RPS even though there was plenty of CPU headroom.
The JVM options used for the Java-based stacks in these tests:

```
-Xmx2607876k -Xms2607876k -XX:NewSize=869292k -XX:+UseConcMarkSweepGC -XX:SurvivorRatio=6 -server
```
Riposte is released under the Apache License, Version 2.0.