Closed gadieichhorn closed 7 years ago
The idea was actually not to run into the Docker trap, but to keep Karaf as the container. It should be straightforward with Karaf and Hazelcast, though.
ok, thanks :)
Actually I think creating a sample with two dockerized Karafs for Hazelcast cluster demonstration might be useful ... :)
Great!
I was thinking we need to define the cluster programmatically and then use it to register a clustered Vert.x. I assume some ports have to be opened on Docker, and we might need a separate cluster-manager Docker instance, but I am not sure.
TcclSwitch.executeWithTCCLSwitch(() -> {
    Config hazelcastConfig = new Config();
    ClusterManager mgr = new HazelcastClusterManager(hazelcastConfig);
    VertxOptions options = new VertxOptions().setClusterManager(mgr);
    Vertx.clusteredVertx(options, cluster -> {
        if (cluster.succeeded()) {
            vertx = cluster.result();
            log.info("Clustered Vert.x: {}", vertx);
        } else {
            log.warn("Exception", cluster.cause());
            vertx = Vertx.vertx();
            log.info("Non-clustered Vert.x: {}", vertx);
        }
    });
    ...
Adding the dependencies on Hazelcast in the feature.xml didn't work for me, and Karaf does not start:
<bundle start-level="80">wrap:mvn:io.vertx/vertx-hazelcast/${vertx.version}</bundle>
<bundle start-level="80">wrap:mvn:io.vertx/vertx-service-discovery-bridge-kubernetes/${vertx.version}</bundle>
and even tried adding
<bundle start-level="80">wrap:mvn:com.hazelcast/hazelcast-all/${hazelcast.version}</bundle>
It complains about a missing dependency on javax.transaction.xa:
org.osgi.service.resolver.ResolutionException: Unable to resolve root: missing requirement [root] osgi.identity; osgi.identity=vertx.core; type=karaf.feature; version="[1.0.0,1.0.0]";
filter:="(&(osgi.identity=vertx.core)(type=karaf.feature)(version>=1.0.0)(version<=1.0.0))" [caused by: Unable to resolve vertx.core/1.0.0: missing requirement [vertx.core/1.0.0] osgi.identity; osgi.identity=com.hazelcast; type=osgi.bundle; version="[3.6.3,3.6.3]"; resolution:=mandatory [caused by: Unable to resolve com.hazelcast/3.6.3: missing requirement [com.hazelcast/3.6.3] osgi.wiring.package; filter:="(osgi.wiring.package=javax.transaction.xa)"]]
at org.apache.felix.resolver.ResolutionError.toException(ResolutionError.java:42)[6:org.apache.karaf.features.core:4.0.8]
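For reference, the missing javax.transaction.xa package can typically be satisfied by installing a JTA spec bundle ahead of the Hazelcast bundle. A sketch of such a feature (bundle choice and versions are an assumption and may differ in your setup):

```xml
<feature name="hazelcast-deps" version="1.0.0">
    <!-- provides javax.transaction.xa, which the Hazelcast bundle imports -->
    <bundle>mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1</bundle>
    <bundle start-level="80">mvn:com.hazelcast/hazelcast/${hazelcast.version}</bundle>
</feature>
```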
First of all I'd try to enhance the vert.x hazelcast stuff to be OSGi compliant, next we could use the Hazelcast Bundles, which Karaf Cellar is using for the cluster management.
https://github.com/vert-x3/vertx-hazelcast/pull/56 takes care of the vert.x hazelcast bundle
Regarding the Hazelcast bundles: Karaf Cellar defines a feature for that: https://github.com/apache/karaf-cellar/blob/master/assembly/src/main/resources/features.xml#L29-L34
Will each microservice instance need a separate Hazelcast instance, or is this just the client? Can we use a Docker container like this one? -> https://hub.docker.com/r/hazelcast/hazelcast/
I am trying to make Karaf run inside a Docker container, but it is looking for a Maven repo to run; I couldn't find an easy way to define an offline Karaf assembly so far. I think it is best to have everything prepackaged. http://karaf.922171.n3.nabble.com/karaf-custom-assembly-for-offline-docker-td4049524.html
I doubt that this Docker image will be much help here; we'll need Hazelcast from inside Karaf, so no, I don't think that'll work.
Having a Karaf container run inside a Docker container without internet access isn't that hard. Follow the instructions here: http://karaf.apache.org/manual/latest/#_maven_assembly or take a look at the custom Karaf distribution here: https://github.com/ANierbeck/Karaf-Vertx/blob/master/Vertx-Karaf/pom.xml#L168-L181
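For context, such a custom offline distribution is built with the karaf-maven-plugin's assembly packaging, which resolves all features at build time so the runtime needs no repository access. A rough sketch with placeholder feature names (see the linked pom for the real configuration):

```xml
<plugin>
    <groupId>org.apache.karaf.tooling</groupId>
    <artifactId>karaf-maven-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <!-- features listed here are baked into the assembly at build time -->
        <bootFeatures>
            <feature>standard</feature>
            <feature>scr</feature>
        </bootFeatures>
    </configuration>
</plugin>
```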
For Docker, you might also be interested in this: https://github.com/ANierbeck/Karaf-Microservices-Tooling/blob/master/Karaf-Service-Docker/pom.xml Though that sample uses the local Maven repository, I don't think we need that with a well-customized Karaf. In that sample it is needed to access the local repository for dynamic installation of further bundles.
I see, each node needs its own Hazelcast member/client, and together they somehow connect and form a cluster. From the Vert.x docs, the cluster is only used for node discovery, and you should not use a Hazelcast client.
Waiting to see your cluster code :) You can close this issue now if you like.
Thanks, I followed the instructions; I had another issue with the assembly.xml, and now it is all working on Docker as expected.
Hazelcast usually finds the other instances by multicast, but this won't work with Docker right away ... in that case one needs to make sure all Docker containers know of each other and "share" the network in a certain way. With AWS it's even worse: multicast doesn't work there at all, but Hazelcast can be configured to use another discovery mechanism.
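As an illustration, switching Hazelcast from multicast to an explicit TCP/IP member list looks roughly like this in hazelcast.xml (the member hostnames are hypothetical and would have to match your Docker or compose service names):

```xml
<hazelcast>
    <network>
        <join>
            <!-- multicast is the default but unreliable inside Docker and on AWS -->
            <multicast enabled="false"/>
            <tcp-ip enabled="true">
                <member>karaf-node-1</member>
                <member>karaf-node-2</member>
            </tcp-ip>
        </join>
    </network>
</hazelcast>
```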
What about Kubernetes? I see fabric8 claims they have a Vert.x cluster working on multiple nodes. https://github.com/vert-x3/vertx-examples/tree/master/docker-examples/vertx-docker-example-fabric8
I think the first step is to build a single-host cluster and see :)
I guess they are focusing on using standalone Vert.x applications, without OSGi and without Karaf.
I made some progress over the weekend following your cellar link.
I got two Karaf Docker instances to connect to a cluster. I used the Hazelcast manager as a cluster manager instead of messing with the ports. I think this is the right approach for clustering. Do you agree?
my docker-compose looks like this ->
## Hazelcast cluster tests
---
version: '2'
services:
  node:
    image: 'hazelcast/hazelcast:latest'
    ports:
      - '5701'
    restart: always
    links:
      - management
  management:
    image: 'hazelcast/management-center:latest'
    ports:
      - '8080:8080'
I replaced the node image with my docker image (karaf) and they connect.
I also used a slightly different feature; I want to use the Vert.x Hazelcast config and not the Cellar one, so I have this feature:
<feature name="io.effectus.cluster.hazelcast" description="In memory data grid" version="${project.version}">
    <bundle>mvn:org.apache.geronimo.specs/geronimo-jta_1.1_spec/1.1.1</bundle>
    <bundle>mvn:com.eclipsesource.minimal-json/minimal-json/0.9.2</bundle>
    <bundle start-level="80">mvn:com.hazelcast/hazelcast/${hazelcast.version}</bundle>
    <bundle start-level="80">wrap:mvn:io.vertx/vertx-hazelcast/${vertx.version}</bundle>
</feature>
But now that I've moved to the pax-jdbc datasource I am getting all sorts of errors ... to be continued :)
I am getting serialization issues on the Hazelcast server ID:
com.hazelcast.nio.serialization.HazelcastSerializationException: Problem while reading DataSerializable, namespace: 0, id: 0, class: 'io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID', exception: io.vertx.spi.cluster.hazelcast.impl.HazelcastServerID
looking forward to your findings :)
puh ... as I have a lot on my todo list right now, this will need to wait a bit ... sorry :)
Don't worry, I solved it with a custom Hazelcast instance, updating the classloader so Hazelcast can see Vert.x:
Config hazelcastConfig = new Config("effectus-instance")
    .setClassLoader(HazelcastClusterManager.class.getClassLoader());
I now have it all working on Docker :) Kubernetes will have to wait... it doesn't play nice on Windows so far.
cool :)
The samples project now uses cellar and hazelcast as cluster manager with Karaf, all is wrapped up in a docker image: https://github.com/ANierbeck/Karaf-Vertx/tree/master/Vertx-Microservices/Vertx-Microservices-Cluster-Docker
Thanks for the great code example, it is very helpful.
Have you thought about the next step in the microservices direction? What I mean is packaging each verticle in a separate Docker container and connecting them together as a cluster with the Vert.x event bus. In your example it could be the JDBC service and the book service as separate Docker instances connecting via the bus.
I was thinking it should be easy to add the Hazelcast cluster manager to the assembly, but I am guessing it needs a separate Docker container for the Hazelcast service to master the cluster, and then each verticle needs to see the master.
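A compose sketch of that topology might look like this (the service names and Karaf image names are hypothetical; each Karaf image would ship one verticle plus the clustered Vert.x setup from earlier in the thread):

```yaml
---
version: '2'
services:
  hazelcast:
    image: 'hazelcast/hazelcast:latest'
    ports:
      - '5701'
  jdbc-service:
    image: 'effectus/karaf-jdbc-verticle:latest'
    links:
      - hazelcast
  book-service:
    image: 'effectus/karaf-book-verticle:latest'
    links:
      - hazelcast
```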
Thanks.