OpenLiberty / open-liberty

Open Liberty is a highly composable, fast to start, dynamic application server runtime environment
https://openliberty.io
Eclipse Public License 2.0

Liberty Reactive Connectivity With Back-Pressure Using MP Reactive Messaging Connector #8962

Closed atosak closed 3 years ago

atosak commented 4 years ago

[The AHA doc for this is now stale, this is the primary source for this feature] The goal of this feature is to enable Liberty to be used in heterogeneous Reactive Systems, containing a combination of Liberty and vert.x microservices supporting back-pressure between them.

MVP Requirement

The MVP requirement is to get a 'reactive' connection between two Liberty servers exposed to the user as a MicroProfile Reactive Messaging Connector.

It would not expose any of the underlying implementation (e.g. Vert.x) APIs to the user.

There is a hard requirement not to open independent ports, threads, file IO and so on that are outside the control of the Liberty server's configuration.

Bundling Vert.x into Liberty as a MicroProfile Reactive Messaging Connector looks like it could be a good solution for this, implementing (Liberty(vert.x)) <---> ((vert.x)Liberty). (Theoretically, the remote server could be anything that uses the Vert.x EventBus or any of the Vert.x EventBus 'bridge' protocols.)
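To make the connector idea concrete, a minimal sketch of the application-facing view follows. This is illustrative only: the connector name `liberty-vertx` and the `address` attribute are hypothetical, not a shipped API; only the standard MicroProfile Reactive Messaging annotations are real.

```java
// Hypothetical application view of the proposed connector.
// Assumed microprofile-config.properties (names are illustrative):
//   mp.messaging.incoming.prices.connector=liberty-vertx
//   mp.messaging.incoming.prices.address=prices
import java.util.concurrent.CompletionStage;
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Message;

@ApplicationScoped
public class PriceConsumer {

    // Standard MP Reactive Messaging annotation; no Vert.x API is visible
    // to the application, matching the MVP requirement above.
    @Incoming("prices")
    public CompletionStage<Void> consume(Message<String> price) {
        System.out.println("Received " + price.getPayload());
        return price.ack(); // acknowledging drives demand (back-pressure)
    }
}
```

The point of the sketch is that the user only sees channel names and config keys; the Vert.x transport stays entirely behind the connector.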

Concerns

How stable is the Vert.x API? Done as a MicroProfile Reactive Messaging Connector, none of the Vert.x-specific API is exposed to applications and the programming model is very easy to use. As we would bundle a particular version, we would be insulated from version-to-version changes in the API used by our implementation.

Bundle or User BYO-Jar?

Bundling 'into' Liberty enables us to police inbound and outbound security, threading and other resources an app server is supposed to add value on. A user application just packaging and using Vert.x jars would have incoming work unsecured and would be utilising dependencies, threads, ports and so on without any server oversight.

Why Use Vert.x To Satisfy this Requirement?

Additional side effects of this implementation are that it would contribute towards...

Further Architectural Alignment with Quarkus

"Basically, Quarkus uses Vert.x as its reactive engine. While lots of reactive features from Quarkus don’t show Vert.x, it’s used underneath."

SmallRye Reactive Messaging Connector Dependencies

Vert.x is an enabling dependency for most of the SmallRye Reactive Messaging Connectors, which have a loadable-by-CDI architecture. Testing and enabling these would be done under a separate epic. (The available connectors have end-user documentation here: https://smallrye.io/smallrye-reactive-messaging/ ) Part of the investigation work here is to make a bundled Vert.x be on the classpath of (shipped) SmallRye Reactive Messaging connectors but not expose its APIs externally to customers (normal bundling export/import). Connectors that users write themselves (I am not sure these will exist, let alone ones that use Vert.x as a dependency) will not be able to see the 'internal'/shipped Vert.x classes on their classpath.
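For reference, the SmallRye connectors all share the MicroProfile Config attribute pattern below; the channel name and the `topic` attribute here are illustrative, and each connector documents its own attribute set.

```properties
# Illustrative MicroProfile Config entries for a SmallRye connector.
# The 'smallrye-kafka' connector name is real; the channel name and
# attribute values are examples only.
mp.messaging.incoming.my-channel.connector=smallrye-kafka
mp.messaging.incoming.my-channel.topic=prices
```

Whatever connector we shipped would plug into this same `mp.messaging.<direction>.<channel>.*` naming scheme, which is defined by the MP Reactive Messaging specification rather than by SmallRye.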

Enabler for Reactive Liberty-Quarkus and Liberty-Vert.x Connectivity

Homogeneous Liberty-Liberty is (this feature) and heterogeneous Liberty-RedHat/Quarkus is https://quarkus.io/guides/using-vertx - this may 'just work' due to this feature, but the work in testing, blogging, etc. would be done under a separate deliverable.

In future one could have an application pattern made up of microservices where some are lightweight Quarkus services and some are more fully featured, API-exposing/state-holding services hosted by Liberty. Connecting Liberty and Quarkus microservices easily, in a manner that is internally resilient to stressful traffic loads, enables users to use Quarkus where it suits and Liberty where it suits, and have less reason to opt for a homogeneous solution.

Liberty-Lightbend

Vert.x may play a role in Liberty/Quarkus-Lightbend connectivity, thus cost-sharing with RedHat, which uses the Vert.x EventBus for Quarkus-Quarkus reactive connectivity - but it is not obvious that this is available 'off-the-shelf' currently. If we were to build this using a technology that is not shared with Quarkus, we would have to pay to do it twice (if Quarkus wanted to do it at all). See: "example of taking a publisher from some other reactive streams implementation (e.g. Akka) and pumping that stream to the body of a server side HTTP response". This would be done under a separate epic.

Increases Connectivity Options

  1. A cheaper cost-base foundation for other stack technologies, a near example being RSocket support via https://github.com/vert-x3/issues/issues/481 rather than paying to build RSocket support (for example) on our own. This would be done under a separate epic.

The deliverable of this feature/epic will be the minimum amount required to enable a Liberty-based microservice on one server to talk to either a Liberty-based or a Quarkus-based remote microservice (i.e. both work) in a way that does reactive 'back-pressure' between them and so fulfil the "reactive systems" end-to-end narrative.

Use/Business Case

This supports having an application that is a mixture of Quarkus-based and Liberty-based services, with all the links between the microservices having reactive (async, non-blocking, back-pressured) connections, i.e. there is no need to replace Liberty services to get end-to-end back-pressure.

The other topics are 'bear this in mind while making decisions' requirements. They can be covered with other delivery epics.

  1. A subsequent epic could expose an API (as opposed to the MP Reactive Messaging Connector annotation). The 'programming model' would be based on the API in Quarkus documented at https://quarkus.io/guides/using-vertx. We believe the MVP would be the 'Axle' API based on CompletionStage and Reactive Streams. This would be done under a separate epic.

The MVP for this epic will be based on input from OM and the architectural approach set by Alasdair et al.

The best (or no) packaging will be determined during the WAD design work. Vert.x is highly modular, so we will need to determine Vert.x core + {what} (what == minimum for the MVP requirement) and also weigh this against the problems of a user 'BYO-Jar' option.

Alternatives Considered

JAX-RS 2.1 rx() with SSE (à la RESTEasy)

An alternative approach for the requirement would be to build on top of the JAX-RS 2.1 rx() reactive extensions, where we could provide an MP Reactive Streams Operators PublisherBuilder extension using Server-Sent Events (SSE) for the back-pressure channel. (Compare with @Stream at https://docs.jboss.org/resteasy/docs/3.5.1.Final/userguide/html/Reactive.html )

As that would be 100% in user space using the external JAX-RS 2.1 API, we should do that too, but as a blog+repo - as it would demonstrate the power of JAX-RS rx() and MP Reactive Streams Operators to customers better. I will chat to Andy McCright about this.

(See also PromiseRxInvoker in 151.9.2 of https://osgi.org/download/r7/osgi.enterprise-7.0.0.pdf ) Another aspect is that if this drags in some derivative of Netty then we need to 'air' it with other components that might have a roadmap that could involve Netty in the future.
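For readers unfamiliar with the JAX-RS 2.1 reactive extension mentioned above, the client side looks like this. The URL is illustrative; the `rx()` invoker itself is standard JAX-RS 2.1 API.

```java
// Sketch of the JAX-RS 2.1 reactive client extension. rx() returns a
// CompletionStageRxInvoker, so get() yields a CompletionStage instead
// of blocking the calling thread.
import java.util.concurrent.CompletionStage;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

public class RxClientExample {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        CompletionStage<String> response =
            client.target("http://localhost:9080/data") // illustrative URL
                  .request()
                  .rx()                 // switch to the reactive invoker
                  .get(String.class);   // non-blocking GET
        response.thenAccept(System.out::println);
    }
}
```

The alternative described above would layer an MP Reactive Streams Operators PublisherBuilder on top of this, with SSE carrying the stream and acting as the back-pressure channel.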

RSocket Feature

Another alternative approach considered (but NOT in the scope of this feature) would be for us to develop our own RSocket-based feature reusing the Maven RSocket Java jars.

hutchig commented 4 years ago

After early investigation of the Vert.x internals, we (Ian, Alasdair, Jeremy, squad) discussed whether fulfilling this requirement as a Liberty bundled feature that replaced the use of Netty with Liberty's Channel Framework would be cost-effective relative to the lack of immediate and specific customer demand. (Vert.x uses Netty extensively, so removing it creates a good chunk of work to replace it using Liberty primitives.)

An additional outcome of the investigation was that the Vert.x project is composed of a large number of JARs. Having worked out which subset of these jars this feature would use/pull in, and which Vert.x jars are dependencies of each of the SmallRye Reactive Messaging Connectors, this work would not enable any additional SmallRye connector's dependency needs being met (for example the Kafka one) - so that motivation for making this a Liberty feature is not as strong as hoped.

Due to the above, we have decided not to deliver this as a Liberty feature at this time. Instead, we will explore fulfilling the requirement in 'user space'. So...

Implement a Connector as an end user would (the spec is written to support this but we have not tested/documented it) - this connector can then make use of 'vanilla' Vert.x to send Vert.x events. 1) The (documented) ability to write your own Connector as part of your app's utility code/shared library mitigates the fact that IBM only ships a small number of them as features. 2) It will also augment our (currently slightly thin) doc on using Reactive Messaging. 3) It can contribute to the Liberty-Quarkus-Vert.x connectivity story at less cost and with Vert.x tech that is easier to migrate upwards as new versions arrive.
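A minimal sketch of what a user-written connector looks like against the MP Reactive Messaging 1.0 SPI follows. The connector name, config key and stubbed payload are hypothetical; a real implementation would bridge a Vert.x EventBus consumer into the returned publisher.

```java
// Sketch of a user-written incoming connector (MP Reactive Messaging 1.0
// SPI). The connector name and the 'address' config attribute are
// illustrative, not a shipped Liberty API.
import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.reactive.messaging.Message;
import org.eclipse.microprofile.reactive.messaging.spi.Connector;
import org.eclipse.microprofile.reactive.messaging.spi.IncomingConnectorFactory;
import org.eclipse.microprofile.reactive.streams.operators.PublisherBuilder;
import org.eclipse.microprofile.reactive.streams.operators.ReactiveStreams;

@ApplicationScoped
@Connector("my-vertx-connector") // hypothetical connector name
public class VertxEventBusConnector implements IncomingConnectorFactory {

    @Override
    public PublisherBuilder<? extends Message<?>> getPublisherBuilder(Config config) {
        // Per-channel attributes arrive via MicroProfile Config, e.g.
        // mp.messaging.incoming.<channel>.address=...
        String address = config.getValue("address", String.class);

        // A real connector would register a Vert.x EventBus consumer for
        // 'address' and adapt it to a Reactive Streams publisher; this
        // stub just emits a single message to show the SPI shape.
        return ReactiveStreams.of(Message.of("hello from " + address));
    }
}
```

Being a plain CDI bean, such a connector can ship inside the application or a shared library, which is exactly the 'user space' route described above.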

hutchig commented 4 years ago

The deliverable for this issue will (if @NottyCode is happy with this) now be a (blog) article and an associated repo with code that documents using Reactive Messaging via implementing a Connector as a Liberty customer can, connecting that to a remote Vert.x (or Quarkus) system with Vert.x events across a clustered event bus with back-pressure.

A non-goal (without further approval), but something to keep in mind, is to develop experience with the clustered Vert.x EventBus in/out of a Liberty application. This needs a Cluster Manager (probably using Infinispan in our example here) to work across processes/services. It would be good to understand what would be needed to replace the cluster manager SPI with an OpenShift operator.

(Not having a cluster manager that works in GraalVM has held back Quarkus using the Vert.x clustered event bus, and has thus prevented us (IBM) using Vert.x events to connect Liberty/Quarkus/Vert.x. If we ever wanted to replace GraalVM, having a Vert.x cluster manager (and client API jar), for example one written using an OpenShift Operator implementation, that we (IBM) can make work on GraalVM.next (whatever that is... 'J9.native' etc.) will be an advantageous position.)

The cluster manager SPI for Vert.x is simpler than one might think: https://vertx.io/docs/apidocs/io/vertx/core/spi/cluster/ClusterManager.html - of course it is the robustness/restart/state-recovery/hardening etc. that would be hard if not implemented on top of something that already does that.
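For orientation, starting a clustered Vert.x instance and attaching an EventBus consumer looks like the sketch below. It assumes a cluster manager (e.g. the Infinispan one mentioned above) is on the classpath and discovered via its service loader; the address name is illustrative.

```java
// Sketch: clustered Vert.x EventBus consumer (Vert.x 3.x style API).
// Assumes a cluster manager jar (e.g. vertx-infinispan) is on the
// classpath so Vertx.clusteredVertx can form/join a cluster.
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;
import io.vertx.core.eventbus.EventBus;

public class ClusteredConsumer {
    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                EventBus eb = res.result().eventBus();
                // "prices" is an illustrative address; messages published
                // anywhere in the cluster to this address arrive here.
                eb.consumer("prices",
                    msg -> System.out.println("Got: " + msg.body()));
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
```

It is the cluster manager behind `clusteredVertx` that the ClusterManager SPI linked above would let us swap, e.g. for an OpenShift-operator-backed implementation.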

hutchig commented 4 years ago

I will chat to @lauracowen about the packaging/format of the text/code deliverable.

hutchig commented 4 years ago

I have today chatted to Laura about the initial format/delivery. She is fine for this to be initially via github text/code (at least until we understand how well it can be made to work).

gcharters commented 4 years ago

@hutchig does the fact we now have to integrate Vert.x due to it becoming a hard dependency from SmallRye, change the direction for this work?

hutchig commented 4 years ago

@gcharters I was having a chat with Grace and Jason about this this morning. One might think that if Vert.x core is in a Liberty feature it makes it more accessible to write/use a Vert.x EventBus connector, but that needs clustered Vert.x and opening a Vert.x port in Liberty, which is not currently planned. So no.

As I said to Grace this morning: As I understand it... this had two things that made it less attractive. First, the Vert.x event bus back-pressure is TCP/IP back-pressure, as there is no 'pull/request(n)' protocol across the wire - it relies on TCP/IP congestion flow control, and that could leave some messages stranded on a receiver that stops asking for more, in between the receiving Netty Channel and the Vert.x layer's 'pump' that acts as a Publisher and is request(n) controlled. Also, at the time, Quarkus were not going to support Vert.x remote events (via Vert.x clustering, as there was no cluster manager that worked well with GraalVM - this may be different now), so it stopped being a way to talk Liberty-Vert.x-Quarkus. So it became less attractive.