vert-x3 / vertx-hazelcast

Hazelcast Cluster Manager for Vert.x
Apache License 2.0

Vert.x on Kubernetes with Hazelcast clustering not working #113

Closed Swatikp closed 5 years ago

Swatikp commented 5 years ago

Hi, I am trying to deploy Vert.x on Kubernetes with Hazelcast clustering. I followed the steps described here:

https://vertx.io/docs/vertx-hazelcast/java/#_configuring_for_kubernetes

I have 2 verticles and they are both able to discover each other on Kubernetes, as shown below (screenshot: members-discovery-successful).

But when one verticle tries to send a message to the other over the event bus, it uses localhost and the connection cannot be established (screenshot: connection-refused-eventbus).

I did the setup with the following steps:

1. These are the dependencies I added to the verticles:

    
    <dependencies>
        <dependency>
            <groupId>io.vertx</groupId>
            <artifactId>vertx-hazelcast</artifactId>
            <version>3.6.3</version>
        </dependency>
        <dependency>
            <groupId>com.hazelcast</groupId>
            <artifactId>hazelcast-kubernetes</artifactId>
            <version>1.0.0</version>
        </dependency>
    </dependencies>

2. My cluster.xml for both Vert.x verticles looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.10.xsd"
               xmlns="http://www.hazelcast.com/schema/config"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <!-- values from the original config: true / k8s / service-hazelcast-server.default.svc.cluster.local -->
    </hazelcast>
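
For reference, the discovery section from the linked Kubernetes configuration docs, filled in with the values that survived above (discovery enabled, the service DNS name), would look roughly like the sketch below. This is a reconstruction, not necessarily the exact cluster.xml used here:

    <hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.10.xsd"
               xmlns="http://www.hazelcast.com/schema/config"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <properties>
            <!-- enable the discovery SPI -->
            <property name="hazelcast.discovery.enabled">true</property>
        </properties>
        <network>
            <join>
                <!-- multicast does not work on most Kubernetes networks -->
                <multicast enabled="false"/>
                <tcp-ip enabled="false"/>
                <discovery-strategies>
                    <discovery-strategy enabled="true"
                                        class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
                        <properties>
                            <!-- headless service created in step 3 -->
                            <property name="service-dns">service-hazelcast-server.default.svc.cluster.local</property>
                        </properties>
                    </discovery-strategy>
                </discovery-strategies>
            </join>
        </network>
    </hazelcast>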


3. After that I deployed a headless Kubernetes Service with the following configuration:

    apiVersion: v1
    kind: Service
    metadata:
      namespace: default
      name: service-hazelcast-server
    spec:
      selector:
        component: service-hazelcast-server
      clusterIP: None
      ports:


4. Once the service is deployed, I set the component label to the service name in the verticles' Kubernetes Deployments, as follows:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      namespace: default
      name: pod-hazelcast-client1
    spec:
      # 1 or 2 client pods
      replicas: 1
      selector:
        matchLabels:
          app: pod-hazelcast-client1
      template:
        metadata:
          labels:
            app: pod-hazelcast-client1
            component: service-hazelcast-server
        spec:
          containers:

as-ajitsingh commented 5 years ago

I am facing the same issue. I also tried setting the cluster host in Java (using new VertxOptions().setClusterHost()), but still had no luck. When I set this value to the host address of my Hazelcast server, it keeps saying it is not able to bind because the address is already in use.

tsegismont commented 5 years ago

@Swatikp how is the Vert.x app started? It sounds like you created your own main instead of reusing the Launcher class. When you create your own main, you must set the cluster host option yourself (the Launcher autodetects the node IP).
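
A minimal sketch of what such a custom main would have to do (assuming Vert.x 3.6.x with the Hazelcast cluster manager; com.example.MyVerticle is just a placeholder):

    // Sketch only: a custom main must set the cluster host itself,
    // whereas the Launcher detects the node IP automatically.
    import java.net.InetAddress;

    import io.vertx.core.Vertx;
    import io.vertx.core.VertxOptions;
    import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

    public class CustomMain {
        public static void main(String[] args) throws Exception {
            // Advertise the pod's routable IP instead of the default (localhost)
            String clusterHost = InetAddress.getLocalHost().getHostAddress();

            VertxOptions options = new VertxOptions()
                .setClusterManager(new HazelcastClusterManager())
                .setClusterHost(clusterHost);

            Vertx.clusteredVertx(options, ar -> {
                if (ar.succeeded()) {
                    ar.result().deployVerticle("com.example.MyVerticle"); // placeholder verticle
                } else {
                    ar.cause().printStackTrace();
                }
            });
        }
    }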

mpreddy77 commented 5 years ago

@Swatikp Did you resolve the issue? I'm using the Launcher class and still ran into this issue. However, in my case I can't seem to get the Hazelcast clustering working either, and of course the event bus clustering is the next stop. Like you, I followed the Vert.x docs verbatim, and I see the following log from the ping verticle and pong verticle. I'm using Hazelcast port 5701 and an explicit event-bus port 5711:

    INFO: [10.230.249.225]:5701 [dev] [3.9.4] Kubernetes Discovery properties: { service-dns: hazelcast-headless-service.ngi-dev.svc.cluster.local, service-dns-timeout: 5, service-name: hazelcast-headless-service, service-port: 5701, service-label: component, service-label-value: hazelcast, namespace: ngi-dev, resolve-not-ready-addresses: false, kubernetes-master: https://kubernetes.default.svc}
    Apr 03, 2019 12:25:07 AM com.hazelcast.spi.discovery.integration.DiscoveryService
    INFO: [10.230.249.225]:5701 [dev] [3.9.4] Kubernetes Discovery activated resolver: DnsEndpointResolver
    Apr 03, 2019 12:25:07 AM com.hazelcast.instance.Node
    INFO: [10.230.249.225]:5701 [dev] [3.9.4] Activating Discovery SPI Joiner
    Apr 03, 2019 12:25:07 AM com.hazelcast.spi.impl.operationexecutor.impl.OperationExecutorImpl
    INFO: [10.230.249.225]:5701 [dev] [3.9.4] Starting 2 partition threads and 3 generic threads (1 dedicated for priority tasks)
    Apr 03, 2019 12:25:07 AM com.hazelcast.internal.diagnostics.Diagnostics
    INFO: [10.230.249.225]:5701 [dev] [3.9.4] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
    Apr 03, 2019 12:25:07 AM com.hazelcast.core.LifecycleService
    INFO: [10.230.249.225]:5701 [dev] [3.9.4] [10.230.249.225]:5701 is STARTING
    Apr 03, 2019 12:25:07 AM com.hazelcast.system
    INFO: [10.230.249.225]:5701 [dev] [3.9.4] Cluster version set to 3.9
    Apr 03, 2019 12:25:07 AM com.hazelcast.internal.cluster.ClusterService
    INFO: [10.230.249.225]:5701 [dev] [3.9.4]
    Members {size:1, ver:1} [
        Member [10.230.249.225]:5701 - 1def86ee-13c4-4dc9-958a-10d7ac76abed this
    ]
    Apr 03, 2019 12:25:07 AM com.hazelcast.core.LifecycleService
    INFO: [10.230.249.225]:5701 [dev] [3.9.4] [10.230.249.225]:5701 is STARTED
    Apr 03, 2019 12:25:08 AM com.hazelcast.internal.partition.impl.PartitionStateManager
    INFO: [10.230.249.225]:5701 [dev] [3.9.4] Initializing cluster partition table arrangement...
    2019-04-03 00:25:08,324 1862 [main] DEBUG c.b.banking.ngi.common.Launcher - @afterStartingVertx, isClustered:true, isMetricsEnabled:false, eventBus: io.vertx.core.eventbus.impl.clustered.ClusteredEventBus@14a2f921
    Apr 03, 2019 12:25:08 AM io.vertx.core.impl.launcher.commands.VertxIsolatedDeployer
    INFO: Succeeded in deploying verticle
    2019-04-03 00:25:45,890 39428 [vert.x-eventloop-thread-0] DEBUG c.b.b.ngi.request.PingVerticle - Error: No reply(NO_HANDLERS,-1) No handlers for address pong

mpreddy77 commented 5 years ago

Never mind! My issue was with the network policy on kubernetes preventing pod discovery. It's all good!
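
(For anyone hitting the same wall: a NetworkPolicy along these lines lets the app pods reach each other on the Hazelcast and event-bus ports. The labels and ports below are illustrative, taken from the earlier snippets in this thread, not the actual policy used here.)

    # Illustrative only: allow pods labelled like the deployment above to talk
    # to each other on the Hazelcast (5701) and event-bus (5711) ports.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      namespace: default
      name: allow-hazelcast-and-eventbus
    spec:
      podSelector:
        matchLabels:
          component: service-hazelcast-server
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  component: service-hazelcast-server
          ports:
            - protocol: TCP
              port: 5701
            - protocol: TCP
              port: 5711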

tsegismont commented 5 years ago

@mpreddy77 thanks for letting us know!

Swatikp commented 5 years ago

Hi @tsegismont

The problem for me was that the verticle was trying to discover the other verticle on localhost, and since they all run on different ports, localhost does not make sense in that case. So the approach below worked for me.

Finding the address where the verticle is running:

    String address = InetAddress.getLocalHost().getHostAddress();

Configuring the event bus to use the address found in the previous step as its host:

    VertxOptions options = new VertxOptions()
        .setClustered(true)
        .setClusterManager(clusterManager)
        .setEventBusOptions(new EventBusOptions().setPort(5711).setHost(address));
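
Putting both pieces together, the clustered startup looks roughly like this (a sketch assuming Vert.x 3.6.x; com.example.MyVerticle stands in for the real verticle):

    // Sketch: advertise the pod IP on the clustered event bus instead of localhost.
    import java.net.InetAddress;

    import io.vertx.core.Vertx;
    import io.vertx.core.VertxOptions;
    import io.vertx.core.eventbus.EventBusOptions;
    import io.vertx.spi.cluster.hazelcast.HazelcastClusterManager;

    public class ClusteredMain {
        public static void main(String[] args) throws Exception {
            // Address the verticle is running on (the pod IP)
            String address = InetAddress.getLocalHost().getHostAddress();

            HazelcastClusterManager clusterManager = new HazelcastClusterManager();

            VertxOptions options = new VertxOptions()
                .setClustered(true)
                .setClusterManager(clusterManager)
                .setEventBusOptions(new EventBusOptions()
                    .setPort(5711)      // explicit event-bus port, as above
                    .setHost(address)); // advertise the pod IP, not localhost

            Vertx.clusteredVertx(options, ar -> {
                if (ar.succeeded()) {
                    ar.result().deployVerticle("com.example.MyVerticle"); // placeholder
                } else {
                    ar.cause().printStackTrace();
                }
            });
        }
    }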

I hope this can help anyone else facing the same issue.

tsegismont commented 5 years ago

Thank you @mpreddy77