Ranger is a high-level service discovery framework built on ZooKeeper.
As request rates increase, load balancers, even the very expensive ones, become bottlenecks. We needed to move beyond them and talk to services directly, without channeling all traffic through a load balancer. There is obviously Curator's service discovery; but as much as we love Curator, we needed more features on top of it. So we built this library to handle application-level sharding and healthchecks. It still uses Curator for low-level ZooKeeper interactions.
Ranger provides two types of discovery out of the box:
- Simple unsharded service discovery, with custom metadata on each provider node
- Sharded service discovery, where nodes are looked up by the shard they belong to
Clone the source:
git clone https://github.com/appform-io/ranger.git
Build:
mvn install
Use the following Maven dependency:
<dependency>
    <groupId>io.appform.ranger</groupId>
    <artifactId>ranger</artifactId>
    <version>1.0-RC12</version>
</dependency>
There are service providers and service clients. We will look at the interactions from both sides.
Service providers register to the Ranger system by building and starting a ServiceProvider instance. While registering, the provider must supply the ZooKeeper connection string, a namespace and service name, a serializer for node data, the host and port being registered, and a healthcheck (optionally along with isolated health monitors).
A node will be marked unhealthy if and only if any of its registered healthchecks or health monitors reports an unhealthy status.
Registering an unsharded service is simple; use the following boilerplate code.
ServiceProvider<UnshardedClusterInfo> serviceProvider
    = ServiceProviderBuilders.unshardedServiceProviderBuilder()
        .withConnectionString("localhost:2181")   //Zookeeper host string
        .withNamespace("test")                    //Service namespace
        .withServiceName("test-service")          //Service name
        .withSerializer(new Serializer<UnshardedClusterInfo>() {   //Serializer for node info
            @Override
            public byte[] serialize(ServiceNode<UnshardedClusterInfo> data) {
                try {
                    return objectMapper.writeValueAsBytes(data);
                } catch (JsonProcessingException e) {
                    e.printStackTrace();
                }
                return null;
            }
        })
        .withHostname(host)                       //Service hostname
        .withPort(port)                           //Service port
        .withHealthcheck(new Healthcheck() {      //Healthcheck implementation
            @Override
            public HealthcheckStatus check() {
                return HealthcheckStatus.healthy; //OOR checks should be put here
            }
        })
        .withIsolatedHealthMonitor(new RotationStatusMonitor(TimeEntity.everySecond(), "/var/rotation.html"))
        .buildServiceDiscovery();
serviceProvider.start(); //Start the instance
Stop the provider once you are done. (Generally this is when the process ends.)
serviceProvider.stop();
Let's assume that the following is your shard info class:
private static final class TestShardInfo {
    private int shardId;

    public TestShardInfo(int shardId) {
        this.shardId = shardId;
    }

    public TestShardInfo() {
    }

    public int getShardId() {
        return shardId;
    }

    public void setShardId(int shardId) {
        this.shardId = shardId;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        TestShardInfo that = (TestShardInfo) o;
        return shardId == that.shardId;
    }

    @Override
    public int hashCode() {
        return shardId;
    }
}
To register a service provider node with this shard info, we can use the following code:
final ServiceProvider<TestShardInfo> serviceProvider
    = ServiceProviderBuilders.<TestShardInfo>shardedServiceProviderBuilder()
        .withConnectionString("localhost:2181")
        .withNamespace("test")
        .withServiceName("test-service")
        .withSerializer(new Serializer<TestShardInfo>() {
            @Override
            public byte[] serialize(ServiceNode<TestShardInfo> data) {
                try {
                    return objectMapper.writeValueAsBytes(data);
                } catch (JsonProcessingException e) {
                    e.printStackTrace();
                }
                return null;
            }
        })
        .withHostname(host)
        .withPort(port)
        .withNodeData(new TestShardInfo(shardId)) //Set the shard info for this node
        .withHealthcheck(new Healthcheck() {
            @Override
            public HealthcheckStatus check() {
                return HealthcheckStatus.healthy;
            }
        })
        .buildServiceDiscovery();
serviceProvider.start();
Stop the provider once you are done. (Generally this is when the process ends.)
serviceProvider.stop();
In a distributed architecture, keeping track of thousands of servers is a difficult task. Failures are bound to happen, and individual services will face issues. It becomes very important to automate the handling of such failures, and Ranger allows you to do that for your ServiceProviders.
As mentioned earlier, the health state of any ServiceProvider is determined by a set of health monitors that run continuously inside it. At least one monitor needs to be registered while building the ServiceProvider.
You may register any kind of monitor for any service- or system-level check. For example, you could have a monitor that pings an HTTP endpoint, or one that checks a rotation-status file used to take the node out of rotation (OOR); a sketch of a custom monitor follows the registration examples below.
If any of these monitors reports a failure, the service will automatically be marked as unhealthy.
.withIsolatedHealthMonitor(new PingCheckMonitor(new TimeEntity(2, TimeUnit.SECONDS), httpRequest, 5000, 5, 3, "google.com", 80)); //Put in the host and port to be pinged
.withIsolatedHealthMonitor(new RotationStatusMonitor(TimeEntity.everySecond(), "/var/rotation.html")); //Path of the rotation file to be checked
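Beyond the built-in monitors, you can write your own by extending IsolatedHealthMonitor. The following is a minimal, illustrative sketch (not part of Ranger): it assumes the base class exposes a (name, TimeEntity) constructor and an abstract monitor() method returning HealthcheckStatus; depending on the Ranger version the class may also be parameterized over the node data type, so verify the signatures in the version you use. The monitor name and the 1 GiB threshold are arbitrary.

//Illustrative custom monitor: marks the node unhealthy when free disk space runs low
public class DiskSpaceMonitor extends IsolatedHealthMonitor {
    private static final long MIN_FREE_BYTES = 1024L * 1024 * 1024; //1 GiB, arbitrary threshold

    public DiskSpaceMonitor() {
        super("disk-space-monitor", TimeEntity.everySecond());
    }

    @Override
    public HealthcheckStatus monitor() {
        return new java.io.File("/").getUsableSpace() > MIN_FREE_BYTES
                ? HealthcheckStatus.healthy
                : HealthcheckStatus.unhealthy;
    }
}

It would then be registered like any other monitor:

.withIsolatedHealthMonitor(new DiskSpaceMonitor())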
At regular intervals, all of the above monitors are aggregated into a single health state for the service, which determines whether the node is advertised as healthy to service finders.
For service discovery, a ServiceFinder object needs to be built and used.
Depending on whether you are looking to access a sharded service or an unsharded service, the code will differ a little.
First build and start the finder.
UnshardedClusterFinder serviceFinder
    = ServiceFinderBuilders.unshardedFinderBuilder()
        .withConnectionString("localhost:2181")
        .withNamespace("test")
        .withServiceName("test-service")
        .withDeserializer(new Deserializer<UnshardedClusterInfo>() {
            @Override
            public ServiceNode<UnshardedClusterInfo> deserialize(byte[] data) {
                try {
                    return objectMapper.readValue(data,
                            new TypeReference<ServiceNode<UnshardedClusterInfo>>() {
                            });
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return null;
            }
        })
        .build();
serviceFinder.start();
To find an instance:
ServiceNode node = serviceFinder.get(null); //null because you don't need to pass any shard info
//Use node.getHost() and node.getPort()
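As a hypothetical follow-up (the endpoint path and HTTP client are assumptions, not part of Ranger), the returned node can be used to build the request target:

//Hypothetical usage of the node returned above; the /ping path is illustrative
if (node != null) {
    String url = String.format("http://%s:%d/ping", node.getHost(), node.getPort());
    //Issue the request with your preferred HTTP client
}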
Stop the finder once you are done. (Generally this is when the process ends.)
serviceFinder.stop();
This is similar to the above, except for the type parameter used throughout.
SimpleShardedServiceFinder<TestShardInfo> serviceFinder
    = ServiceFinderBuilders.<TestShardInfo>shardedFinderBuilder()
        .withConnectionString(testingCluster.getConnectString()) //Zookeeper connect string
        .withNamespace("test")
        .withServiceName("test-service")
        .withDeserializer(new Deserializer<TestShardInfo>() {
            @Override
            public ServiceNode<TestShardInfo> deserialize(byte[] data) {
                try {
                    return objectMapper.readValue(data,
                            new TypeReference<ServiceNode<TestShardInfo>>() {
                            });
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return null;
            }
        })
        .build();
serviceFinder.start();
Now you can find the service:
ServiceNode<TestShardInfo> node = serviceFinder.get(new TestShardInfo(1));
//Use host, port etc from the node
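The shard you look up typically comes from your own routing scheme. The following is a hypothetical sketch; userId, NUM_SHARDS and the modulo scheme are assumptions for illustration, not part of Ranger.

//Hypothetical app-level routing: derive the shard from a request attribute
final int NUM_SHARDS = 16;                       //Assumed shard count
final long userId = 42L;                         //Would normally come from the incoming request
final int shardId = (int) (userId % NUM_SHARDS);
final ServiceNode<TestShardInfo> targetNode = serviceFinder.get(new TestShardInfo(shardId));
if (targetNode != null) {
    //Route the request to targetNode.getHost():targetNode.getPort()
}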
Stop the finder once you are done. (Generally this is when the process ends.)
serviceFinder.stop();
A service finder hub contains a collection of the service finders described above. A hub also makes creating service finders easy: a service that depends on multiple other services no longer has to create multiple finders by hand; instead, it creates a single hub with the set of services, and the finders are created automatically.
A hub can be backed by either HTTP or ZooKeeper (and other backends can be added in the future).
Hub clients for both ZK and HTTP are provided to initialize the hub. A sample ZooKeeper hub client looks like the following.
RangerHubClient<TestShardInfo> hubClient = UnshardedRangerZKHubClient.<TestShardInfo>builder()
        .namespace(rangerConfiguration.getNamespace())
        .connectionString(rangerConfiguration.getZookeeper())
        .curatorFramework(curatorFramework)
        .disablePushUpdaters(rangerConfiguration.isDisablePushUpdaters())
        .mapper(getMapper())
        .refreshTimeMs(rangerConfiguration.getNodeRefreshTimeMs())
        .deserializer(data -> {
            try {
                return getMapper().readValue(data, new TypeReference<ServiceNode<TestShardInfo>>() {
                });
            } catch (IOException e) {
                log.warn("Error parsing node data with value {}", new String(data));
            }
            return null;
        })
        .services(Sets.newHashSet("service1", "service2")) //Set of services to track
        .build();
hubClient.start();
Now you can find the service:
ServiceNode<TestShardInfo> node = hubClient.get(new TestShardInfo(1));
//Use host, port etc from the node
Stop the hub client once you are done. (Generally this is when the process ends.)
hubClient.stop();
If you are using a Dropwizard project, you can use the service discovery bundle directly instead of creating and wiring your own service providers and clients.
<dependency>
    <groupId>io.appform.ranger</groupId>
    <artifactId>ranger-discovery-bundle</artifactId>
    <version>${ranger.version}</version>
</dependency>
You need to add an instance of type ServiceDiscoveryConfiguration to your Dropwizard configuration file as follows:
public class AppConfiguration extends Configuration {
    //Your normal config

    @NotNull
    @Valid
    private ServiceDiscoveryConfiguration discovery = new ServiceDiscoveryConfiguration();

    //Whatever...

    public ServiceDiscoveryConfiguration getDiscovery() {
        return discovery;
    }
}
Next, you need to use this configuration in the Application while registering the bundle.
public class App extends Application<AppConfig> {
    private ServiceDiscoveryBundle<AppConfig> bundle;

    @Override
    public void initialize(Bootstrap<AppConfig> bootstrap) {
        bundle = new ServiceDiscoveryBundle<AppConfig>() {
            @Override
            protected ServiceDiscoveryConfiguration getRangerConfiguration(AppConfig appConfig) {
                return appConfig.getDiscovery();
            }

            @Override
            protected String getServiceName(AppConfig appConfig) {
                //Read from some config or hardcode your service name
                //This will be used by clients to lookup instances for the service
                return "some-service";
            }

            @Override
            protected int getPort(AppConfig appConfig) {
                return 8080; //Parse config or hardcode
            }

            @Override
            protected NodeInfoResolver createNodeInfoResolver() {
                return new DefaultNodeInfoResolver();
            }
        };
        bootstrap.addBundle(bundle);
    }

    @Override
    public void run(AppConfig configuration, Environment environment) throws Exception {
        //...

        //Register health checks
        bundle.registerHealthcheck(() -> {
            //Check whatever
            return HealthcheckStatus.healthy;
        });

        //...
    }
}
That's it. Your service will register to ZooKeeper when it starts up. Make sure the discovery section below is present in your configuration YAML:
server:
  ...
discovery:
  namespace: mycompany
  environment: production
  zookeeper: "zk-server1.mycompany.net:2181,zk-server2.mycompany.net:2181"
...
The bundle also adds a Jersey resource that lets you inspect the registered instances: use GET /instances to see all instances that have been registered for your service.
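For example (assuming the application connector listens on port 8080 on the local host; adjust host and port for your setup):

curl http://localhost:8080/instances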
Earlier, Ranger's service finder constructs operated on ZooKeeper as the only data source. The server has been introduced to support HTTP data sources and to provide a ServiceFinder interface on top of multiple data sources. For example, you could run one server atop ZK and one atop HTTP, and deploy another HTTP server fronting them both. This is particularly useful when you have to aggregate across multiple service registries. A server bundle is provided to spin up such a server quickly (atop Dropwizard).
To use the HTTP server bundle, register it during bootstrap:
bootstrap.addBundle(new RangerServerBundle<ShardInfo, AppConfiguration>() {
    @Override
    protected List<RangerHubClient<ShardInfo>> withHubs(AppConfiguration configuration) {
        val rangerConfiguration = configuration.getRangerConfiguration();
        return rangerConfiguration.getHttpClientConfigs().stream()
                .map(clientConfig -> UnshardedRangerHttpHubClient.<ShardInfo>builder()
                        .namespace(rangerConfiguration.getNamespace())
                        .mapper(getMapper())
                        .clientConfig(clientConfig)
                        .nodeRefreshIntervalMs(rangerConfiguration.getNodeRefreshTimeMs())
                        .deserializer(data -> {
                            try {
                                return getMapper().readValue(data, new TypeReference<ServiceNodesResponse<ShardInfo>>() {
                                });
                            } catch (IOException e) {
                                log.warn("Error parsing node data with value {}", new String(data));
                            }
                            return null;
                        })
                        .build())
                .toList();
    }

    @Override
    protected boolean withInitialRotationStatus(AppConfiguration configuration) {
        return configuration.isInitialRotationStatus();
    }

    @Override
    protected List<HealthCheck> withHealthChecks(AppConfiguration configuration) {
        return ImmutableList.of(new RangerHttpHealthCheck());
    }
});
It comes with a RangerResource that provides endpoints for fetching the list of services across hubs and the nodes per service across hubs.
Ranger uses Apache Curator for low-level ZooKeeper interactions.
For bugs, questions and discussions please use GitHub Issues.
If you would like to contribute code you can do so through GitHub by forking the repository and sending a pull request.
This repo is a fork of: Ranger