ydewit opened this issue 6 years ago
...but are there any plans, thoughts or discussions on potentially allowing for the deployment of multiple, independent services/components on a single Micronaut application?
I don't think I understand.
You can currently include in your deployable jar as many things as you like. You can also configure beans to only load in certain environments (for example, only load this bean if running on AWS, etc...).
Is that relevant to what you are asking?
Hi @jeffbrown, that is correct, but that assumes these 'components' are built and developed together, with a single dependency tree. In the case above, I have completely independent services with potentially conflicting dependencies, but I want to package/bundle them together in a single executable.
We can do this today with a Servlet container (e.g. Tomcat) and one or more WARs. These WARs can be developed by completely independent teams and have completely different and potentially conflicting dependencies. All this is handled by the Servlet container by isolating each WAR within its own ClassLoader.
So in a nutshell, I would like Micronaut to replace Tomcat/Servlet Container in my case but still be able to package multiple independent components together as one executable.
Does this clarify it a bit better?
I think what you want is an executable jar that contains multiple entry points (main methods in multiple classes) and a different class path configured for each of those entry points. Is that correct?
If I understand correctly what you are describing, each of these components would have its own Netty endpoint, right? That would be one way of solving it, although less interesting since it would duplicate resources that could be shared.
I was thinking more along the lines of what a Servlet container provides today: i.e. a single HTTP thread pool shared among multiple deployed WARs. Since we are not talking about the Servlet API here, there would still need to be the notion of a 'controller bean' (e.g. with an annotation) that is provided by each component jar but hooked up to the shared Netty instance.
I guess what I am looking for requires dynamic binding at runtime, even though the executable and the components could be independently and statically compiled by Micronaut.
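For illustration, a component jar in this setup might contribute nothing more exotic than an ordinary controller bean; the sketch below uses standard Micronaut annotations, and the part that does not exist today is how a shared Netty instance in a separate container executable would discover it:

```java
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

// An ordinary Micronaut controller inside component A's jar. The open question
// is how the shared Netty instance in the container executable would discover
// and route to it across jar/class loader boundaries.
@Controller("/a")
public class AStatusController {

    @Get("/status")
    public String status() {
        return "component A is up";
    }
}
```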
One difference between what you want and what a servlet container does is a servlet container provides the behavior you are talking about on a per deployable (per .war file, for example) basis. What you are talking about is an environment that creates environments for subsets of a particular executable .jar file.
The solution I had in my mind around this was something along the following lines (note that I know very little about Micronaut).
Say I have two Micronaut microservices, A and B. They build A.jar/B.jar for the non-executable component parts, and A-exec.jar/B-exec.jar for the executable jars (with an embedded Netty). I use A-exec.jar and B-exec.jar to run as separate docker containers in the cloud. This is basically what SpringBoot provides us today.
This works well in the cloud except for a bit of duplication: each microservice has its own settings/classes for the executable part (granted, the reuse here can be improved by extracting shared jars and Maven BOMs). Ideally, I would like to have a standard executable component that can run any number of these components:
```
java -Dcomponents.dir=/components -jar /container-exec.jar

/container-exec.jar
/components/A.jar
/components/B.jar
```
In each of these jars, i.e. container-exec, A, and B, there is little to no reflection, and they use Micronaut's static injection, etc. A and B use Java modules to hide internal dependencies (btw: I realized recently that the Java module system does not support multiple versions of the same dependency across modules), although that is just a nice-to-have. When container-exec runs, it loads A.jar and B.jar and dynamically hooks up their Application/Context class (this is the only part that is really dynamic).
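As a minimal sketch of what container-exec's entry point could look like, assuming a hypothetical per-component bootstrap convention (the class name example.ComponentBootstrap and its start() method are invented for illustration and are not an existing Micronaut API):

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ContainerMain {

    public static void main(String[] args) throws Exception {
        Path componentsDir = Path.of(System.getProperty("components.dir", "/components"));
        List<AutoCloseable> running = new ArrayList<>();

        for (File jar : componentsDir.toFile().listFiles((dir, name) -> name.endsWith(".jar"))) {
            // Give each component its own class loader so A.jar and B.jar can carry
            // conflicting dependency versions, much like a servlet container does.
            URLClassLoader loader = new URLClassLoader(
                    new URL[] { jar.toURI().toURL() },
                    ContainerMain.class.getClassLoader());

            // Assumed convention: each component jar ships a bootstrap class that
            // starts its context and hooks its handlers into the shared server.
            Class<?> bootstrap = loader.loadClass("example.ComponentBootstrap");
            Object instance = bootstrap.getDeclaredConstructor().newInstance();
            running.add((AutoCloseable) bootstrap.getMethod("start").invoke(instance));
        }

        // Shut the components down when the process exits.
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                running.forEach(component -> {
                    try {
                        component.close();
                    } catch (Exception ignored) {
                    }
                })));
    }
}
```

The class-loader-per-component part is what a Servlet container already gives you; the missing piece is a supported way for each component's context to attach its handlers to one shared Netty server instead of starting its own.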
In the cloud, I can run a docker container with container-exec.jar and A.jar, then another one with container-exec.jar and B.jar. For on-prem, I can bundle A.jar and B.jar with container-exec.jar and have them run as a single OS process.
I am aware that this is an edge case but wanted to throw it out here anyway. I may be better served by making sure I can compile A and B within a single executable jar, but sometimes components evolve at different paces and may not always be compatible with one another.
thanks for listening
What the original poster wants is an architectural solution that amounts to a smart monolith: a monolith with a Spring Boot profiles-like switch to enable different sets of APIs from the same deployment.
As far as his exact requirements are concerned, I believe one should not touch the Servlet API just for the sake of deploying WARs.
I believe a probable solution for him is Armeria, the microservice framework from LINE Corp. Armeria allows adding WARs: https://line.github.io/armeria/
Perhaps covered on another issue, but seemed a good discussion. Along these lines, I'd like to break up my environment YAML file, slicing by functionality. For example, say I have this source layout:
(To be concrete, "lib-P" would be database functionality, needed by some microservices, but not by others.)
Coming from Spring Boot land (and happy to see Micronaut), I'd use the Spring profiles include mechanism. So "bin-B" might have this in application.yml:

```yaml
spring:
  profiles:
    include:
      - p
      - q
```
Where as "bin-A" use just "P", and "bin-C" just "Q". The build.gradle
dependencies would reflect these relations.
This lets me slice configuration by what it is for, so identical configuration is kept in just one application-p.yml
or application-q.yml
file.
I know I may be able to do this with command-line flags for the applications, but would rather be explicit in each program's application.yml as to what slice of configuration it expects.
I tried several approaches; none seemed to do it. For example, in application.yml for "bin-B":

```yaml
micronaut:
  environments:
    - p
    - q
```
When I ran an application test for "bin-B" in this example and debugged into new DefaultEnvironment(...), the environments variable contained only "test", and neither "p" nor "q".
I can explicitly say @MicronautTest(environments = ["p", "q"]) in my application test, and that works, but it's error-prone to add this to every Micronaut test for "bin-B".
Another working approach is to add the environments with JVM args in build.gradle. This is also error-prone. For example, when adding a new environment, local tests would pass, but a developer may forget, say, to update Dockerfile, and the problem isn't found until deployment (possibly through a slow CI to AWS or Google Cloud, etc.), and Kubernetes brings more points of failure to the mix.
For me, the kind of application slicing @ydewit discusses needs configuration as one of the considerations.
@binkley what you want to do is already possible by customizing how you start your app.
```java
Micronaut.build()
         .environments("p", "q")
         .start();
```
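For instance, assuming the standard io.micronaut.runtime.Micronaut entry point, a full main class might look like this (a sketch; the "p" and "q" environment names are from the example above):

```java
import io.micronaut.runtime.Micronaut;

public class Application {

    public static void main(String[] args) {
        // Activate the "p" and "q" environments in code instead of relying on
        // -Dmicronaut.environments or the MICRONAUT_ENVIRONMENTS variable.
        Micronaut.build(args)
                .mainClass(Application.class)
                .environments("p", "q")
                .start();
    }
}
```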
As far as I know there is no feature within Spring Boot that Micronaut doesn't support that would prevent this kind of architecture.
@graemerocher Thank you, I'll give that a go. I'm used to using the spring.profiles.include mechanism to slice my YAML files by vertical feature (ex: JSON logging), and including features by environment.
Some notes:

- My tests don't run a main() from which to build a Micronaut instance.
- Setting -Dmicronaut.environments=... did not seem to take effect after setting them in the code (perhaps I did something wrong?).

For my team, we use a certain slice of profiles for local development (for example, "text" request/response logging), another in deployed envs (for example, "json" request/response logging for ElasticSearch), and some always (in our project, the "common" environment). In deployed environments, we use the env name (for example, "qa") as the (Spring) profile name, and our profile is defined as:
```yaml
spring:
  profiles:
    include:
      - json
      - <other deployment-related profiles>
```
Using the YAML files to organize this keeps our local/deployment configurations simpler.
If it helps, my actual example is captured here: https://github.com/binkley/basilisk-kt/commit/2437404004d93a7ad404aeae0bb0b0f83f0f38f8
In summary:
In both before and after, an application-test.yml overrides datasources.default.
Before:
Keep datasources.default in a shared application-db.yml for "bins" (deployable programs) to use via Micronaut environments, and include in Gradle:

```groovy
test {
    systemProperty "micronaut.environments", "app,db"
}
```
However, @MicronautTest is not picking that up, and the actual profiles list is only "test".
After:
Return to each "bin" duplicating the datasources.default
block in their own application.yml
.
I could not find any other forum to discuss this first, so decided to create an issue instead. I hope this is not a problem.
The reason I am here is that I like the design space Micronaut is trying to address, namely fast and low memory footprint microservices (no or as little runtime reflection as possible).
We use SpringBoot for this at the moment, and although it has served us well from a framework point of view, it is a bit on the heavy side when dealing with memory, especially as the number of microservices increases.
However, we also need to deliver the same set of services on premise as part of a Tomcat-based bundle where the microservices are deployed as independent WARs (we build executable WARs with SpringBoot, so we have these two deployment options available). This is suboptimal, since we would like to have a single executable that runs in the cloud and on premise. With SpringBoot, we could, for instance, manually load additional WARs (it uses an embedded Tomcat) and be done with it.
I know that Micronaut's focus (as is SpringBoot's) is to allow independent microservices deployments (one executable per microservice), but are there any plans, thoughts or discussions on potentially allowing for the deployment of multiple, independent services/components on a single Micronaut application?
Ideally, it would be nice to leverage the new Java module system to build 'server-less' (sorry for the overloaded term: executable-less) components/services that can be packaged into a Micronaut executable (the one that has the embedded web server, metrics, etc). This way we can standardize logging, monitoring, metrics, and HTTP server configuration in a single project and reuse it to package one executable per service, or an on-prem executable with N services. I understand that something like this will require a bit of an interface/protocol to hook up HTTP handlers, metrics, etc. to the main executable.
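To make the hook-up I have in mind a bit more concrete, here is a rough sketch of the sort of interface a component jar could implement; every name in it is invented for illustration, and nothing like it exists in Micronaut today:

```java
import java.util.function.Supplier;

// Hypothetical contract between a component jar and the shared executable that
// owns the embedded web server, metrics, logging, and so on.
public interface ServiceComponent {

    /** Unique name, usable as a routing prefix, metrics tag, and log context. */
    String name();

    /** Called once by the container executable after the shared server is up. */
    void register(Registry registry);

    /** Hypothetical registry the container executable would expose to components. */
    interface Registry {
        void addHttpHandler(String pathPrefix, Object controller);
        void addGauge(String metricName, Supplier<Number> gauge);
    }
}
```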
Curious to hear your opinions on this with respect to Micronaut.