
no-kafka

no-kafka is an Apache Kafka 0.9 client for Node.js with support for the new unified consumer API.

It supports synchronous and asynchronous Gzip and Snappy compression, producer batching and configurable retries, and offers several predefined group assignment strategies and a producer partitioner option.

All methods return a promise.

Please check the CHANGELOG for backward-incompatible changes in version 3.x.

Using

kafka-topics.sh --zookeeper 127.0.0.1:2181 --create --topic kafka-test-topic --partitions 3 --replication-factor 1
npm install no-kafka

Producer

Example:

var Kafka = require('no-kafka');
var producer = new Kafka.Producer();

return producer.init().then(function(){
  return producer.send({
      topic: 'kafka-test-topic',
      partition: 0,
      message: {
          value: 'Hello!'
      }
  });
})
.then(function (result) {
  /*
  [ { topic: 'kafka-test-topic', partition: 0, offset: 353 } ]
  */
});

Send messages and, if the request fails, retry up to 2 times with a delay of 100-300ms between attempts:

return producer.send(messages, {
  retries: {
    attempts: 2,
    delay: {
      min: 100,
      max: 300
    }
  }
});

Batching (grouping) produce requests

Accumulate messages into a single batch until their total size is >= 1024 bytes or the 100ms timeout expires (these options overwrite the corresponding Producer constructor options):

producer.send(messages, {
  batch: {
    size: 1024,
    maxWait: 100
  }
});
producer.send(messages, {
  batch: {
    size: 1024,
    maxWait: 100
  }
});

Please note that if you pass different options to the send() method, the messages will be grouped into separate batches:

// will be sent in batch 1
producer.send(messages, {
  batch: {
    size: 1024,
    maxWait: 100
  },
  codec: Kafka.COMPRESSION_GZIP
});
// will be sent in batch 2
producer.send(messages, {
  batch: {
    size: 1024,
    maxWait: 100
  },
  codec: Kafka.COMPRESSION_SNAPPY
});

Keyed Messages

Send a message with the key:

producer.send({
    topic: 'kafka-test-topic',
    partition: 0,
    message: {
        key: 'some-key',
        value: 'Hello!'
    }
});

Custom Partitioner

Example: override the default partitioner with a custom partitioner that only uses a portion of the key.

var util  = require('util');
var Kafka = require('no-kafka');

var Producer           = Kafka.Producer;
var DefaultPartitioner = Kafka.DefaultPartitioner;

function MyPartitioner() {
    DefaultPartitioner.apply(this, arguments);
}

util.inherits(MyPartitioner, DefaultPartitioner);

MyPartitioner.prototype.getKey = function getKey(message) {
    return message.key.split('-')[0];
};

var producer = new Producer({
    partitioner : new MyPartitioner()
});

return producer.init().then(function(){
  return producer.send({
      topic: 'kafka-test-topic',
      message: {
          key   : 'namespace-key',
          value : 'Hello!'
      }
  });
});

Producer options:
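
As a sketch, here is a Producer configured with constructor options that are used elsewhere in this README (all values are illustrative):

var Kafka = require('no-kafka');

var producer = new Kafka.Producer({
    connectionString: 'kafka://127.0.0.1:9092', // initial brokers (see Connection)
    clientId: 'producer',                       // id used to mark log messages
    codec: Kafka.COMPRESSION_GZIP,              // default compression codec (see Compression)
    batch: {
        size: 1024,  // batch until total size >= 1024 bytes...
        maxWait: 100 // ...or until 100ms have passed
    }
});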

SimpleConsumer

Manually specify topic, partition and offset when subscribing. Suitable for simple use cases.

Example:

var consumer = new Kafka.SimpleConsumer();

// data handler function can return a Promise
var dataHandler = function (messageSet, topic, partition) {
    messageSet.forEach(function (m) {
        console.log(topic, partition, m.offset, m.message.value.toString('utf8'));
    });
};

return consumer.init().then(function () {
    // Subscribe to partitions 0 and 1 of the topic:
    return consumer.subscribe('kafka-test-topic', [0, 1], dataHandler);
});

Subscribe (or change an existing subscription) at a specific offset and limit the maximum received MessageSet size:

consumer.subscribe('kafka-test-topic', 0, {offset: 20, maxBytes: 30}, dataHandler)

Subscribe to the latest or earliest offsets in the topic/partition:

consumer.subscribe('kafka-test-topic', 0, {time: Kafka.LATEST_OFFSET}, dataHandler)
consumer.subscribe('kafka-test-topic', 0, {time: Kafka.EARLIEST_OFFSET}, dataHandler)

Subscribe to all partitions in a topic:

consumer.subscribe('kafka-test-topic', dataHandler)

Commit offset(s) (V0, Kafka saves these commits to Zookeeper)

consumer.commitOffset([
  {
      topic: 'kafka-test-topic',
      partition: 0,
      offset: 1
  },
  {
      topic: 'kafka-test-topic',
      partition: 1,
      offset: 2
  }
])

Fetch committed offset(s)

consumer.fetchOffset([
  {
      topic: 'kafka-test-topic',
      partition: 0
  },
  {
      topic: 'kafka-test-topic',
      partition: 1
  }
]).then(function (result) {
/*
[ { topic: 'kafka-test-topic',
    partition: 1,
    offset: 2,
    metadata: null,
    error: null },
  { topic: 'kafka-test-topic',
    partition: 0,
    offset: 1,
    metadata: null,
    error: null } ]
*/
});

SimpleConsumer options

GroupConsumer (new unified consumer API)

Specify an assignment strategy (or use a built-in no-kafka consistent or round-robin assignment strategy) and subscribe by specifying only topics. The elected group leader will automatically assign partitions across all group members.

Example:

var Promise = require('bluebird');
var consumer = new Kafka.GroupConsumer();

var dataHandler = function (messageSet, topic, partition) {
    return Promise.each(messageSet, function (m){
        console.log(topic, partition, m.offset, m.message.value.toString('utf8'));
        // commit offset
        return consumer.commitOffset({topic: topic, partition: partition, offset: m.offset, metadata: 'optional'});
    });
};

var strategies = [{
    subscriptions: ['kafka-test-topic'],
    handler: dataHandler
}];

consumer.init(strategies); // all done, now wait for messages in dataHandler

Assignment strategies

no-kafka provides three built-in strategies: Kafka.DefaultAssignmentStrategy (round-robin assignment, used when no strategy is specified), Kafka.WeightedRoundRobinAssignmentStrategy and Kafka.ConsistentAssignmentStrategy.

Using Kafka.WeightedRoundRobinAssignmentStrategy:

var strategies = {
    subscriptions: ['kafka-test-topic'],
    metadata: {
        weight: 4
    },
    strategy: new Kafka.WeightedRoundRobinAssignmentStrategy(),
    handler: dataHandler
};
// consumer.init(strategies)....

Using Kafka.ConsistentAssignmentStrategy:

var strategies = {
    subscriptions: ['kafka-test-topic'],
    metadata: {
        id: process.argv[2] || 'consumer_1',
        weight: 50
    },
    strategy: new Kafka.ConsistentAssignmentStrategy(),
    handler: dataHandler
};
// consumer.init(strategies)....

Note that each consumer in a group should have its own unique and consistent metadata.id.

You can also write your own assignment strategy by inheriting from Kafka.DefaultAssignmentStrategy and overriding the assignment method, as sketched below.
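
A minimal sketch of that inheritance pattern, mirroring the custom partitioner example above. The exact arguments and return shape of assignment are not documented in this README, so the delegation below is an assumption to be checked against the library source:

var util  = require('util');
var Kafka = require('no-kafka');

function MyAssignmentStrategy() {
    Kafka.DefaultAssignmentStrategy.apply(this, arguments);
}

util.inherits(MyAssignmentStrategy, Kafka.DefaultAssignmentStrategy);

// Signature assumed for illustration; check the library source for the
// actual arguments and expected return value.
MyAssignmentStrategy.prototype.assignment = function () {
    // customize partition assignment here, or delegate to the default
    return Kafka.DefaultAssignmentStrategy.prototype.assignment.apply(this, arguments);
};

var consumer = new Kafka.GroupConsumer();

consumer.init({
    subscriptions: ['kafka-test-topic'],
    strategy: new MyAssignmentStrategy(),
    handler: dataHandler // as defined in the GroupConsumer example above
});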

GroupConsumer options

GroupAdmin (consumer groups API)

Offers the methods listGroups, describeGroup and fetchConsumerLag.

listGroups, describeGroup:

var admin = new Kafka.GroupAdmin();

return admin.init().then(function(){
    return admin.listGroups().then(function(groups){
        // [ { groupId: 'no-kafka-admin-test-group', protocolType: 'consumer' } ]
        return admin.describeGroup('no-kafka-admin-test-group').then(function(group){
            /*
            { error: null,
              groupId: 'no-kafka-admin-test-group',
              state: 'Stable',
              protocolType: 'consumer',
              protocol: 'DefaultAssignmentStrategy',
              members:
               [ { memberId: 'group-consumer-82646843-b4b8-4e91-94c9-b4708c8b05e8',
                   clientId: 'group-consumer',
                   clientHost: '/192.168.1.4',
                   version: 0,
                   subscriptions: [ 'kafka-test-topic'],
                   metadata: <Buffer 63 6f 6e 73 75 6d 65 72 2d 6d 65 74 61 64 61 74 61>,
                   memberAssignment:
                    { _blength: 44,
                      version: 0,
                      partitionAssignment:
                       [ { topic: 'kafka-test-topic',
                           partitions: [ 0, 1, 2 ] },
                          ],
                      metadata: null } },
                  ] }
             */
        })
    });
});

fetchConsumerLag:

var admin = new Kafka.GroupAdmin();

return admin.init().then(function(){
    return admin.fetchConsumerLag('no-kafka-admin-test-group', [{
        topicName: 'kafka-test-topic',
        partitions: [0, 1, 2]
    }]).then(function (consumerLag) {
        /*
        [ { topic: 'kafka-test-topic',
            partition: 0,
            offset: 11300,
            highwaterMark: 11318,
            consumerLag: 18 },
          { topic: 'kafka-test-topic',
            partition: 1,
            offset: 10380,
            highwaterMark: 10380,
            consumerLag: 0 },
          { topic: 'kafka-test-topic',
            partition: 2,
            offset: -1,
            highwaterMark: 10435,
            consumerLag: null } ]
         */
    });
});

Note that the group consumer has to commit offsets first in order for consumerLag to be available; otherwise the offset will be reported as -1 and consumerLag as null (as for partition 2 above).

Compression

no-kafka supports both SNAPPY and Gzip compression. To use SNAPPY you must install the snappy NPM module in your project.
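
For example, adding the optional Snappy dependency next to no-kafka:

npm install snappy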

Enable compression in Producer:

var Kafka = require('no-kafka');

var producer = new Kafka.Producer({
    clientId: 'producer',
    codec: Kafka.COMPRESSION_SNAPPY // Kafka.COMPRESSION_NONE, Kafka.COMPRESSION_SNAPPY, Kafka.COMPRESSION_GZIP
});

Alternatively, just send some messages with the specified compression codec (overwrites the codec set in the constructor):

return producer.send({
    topic: 'kafka-test-topic',
    partition: 0,
    message: { value: 'p00' }
}, { codec: Kafka.COMPRESSION_SNAPPY })

By default no-kafka will use asynchronous compression and decompression. Disable async compression/decompression (and use sync) with the asyncCompression option (synchronous Gzip is not available in node < 0.11):

Producer:

var producer = new Kafka.Producer({
    clientId: 'producer',
    asyncCompression: false, // use sync compression/decompression
    codec: Kafka.COMPRESSION_SNAPPY
});

Consumer:

var consumer = new Kafka.SimpleConsumer({
    idleTimeout: 100,
    clientId: 'simple-consumer',
    asyncCompression: true
});

Connection

Initial Brokers

no-kafka will connect to the hosts specified in the connectionString constructor option. If this option is omitted, it will use the KAFKA_URL environment variable or fall back to the default kafka://127.0.0.1:9092. For better availability always specify several initial brokers: 10.0.1.1:9092,10.0.1.2:9092,10.0.1.3:9092. The kafka:// prefix is optional.
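
For example, a producer pointed at several initial brokers (addresses are illustrative):

var Kafka = require('no-kafka');

var producer = new Kafka.Producer({
    connectionString: '10.0.1.1:9092,10.0.1.2:9092,10.0.1.3:9092'
});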

Disconnect / Timeout Handling

All network errors are handled by the library: the producer will retry sending failed messages the configured number of times, the simple consumer and group consumer will try to reconnect to the failed host, update metadata as needed, and so on.

SSL

To connect to a Kafka broker with an SSL endpoint enabled, specify the SSL certificate and key options, either loading the cert/key from files or providing the certificate/key directly as strings:

Loading certificate and key from file:

var producer = new Kafka.Producer({
  connectionString: 'kafka://127.0.0.1:9093', // should match `listeners` SSL option in Kafka config
  ssl: {
    cert: '/path/to/client.crt',
    key: '/path/to/client.key'
  }
});

Specifying certificate and key directly as strings:

var producer = new Kafka.Producer({
  connectionString: 'kafka://127.0.0.1:9093', // should match `listeners` SSL option in Kafka config
  ssl: {
    cert: '-----BEGIN CERTIFICATE-----\nMIIChTCCAe4C...............',
    key: '-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBA.......'
  }
});

Other Node.js SSL options are available such as rejectUnauthorized, secureProtocol, ciphers, etc. See Node.js tls.createServer method documentation for more details.

It is also possible to use the KAFKA_CLIENT_CERT and KAFKA_CLIENT_CERT_KEY environment variables to specify the SSL certificate and key:

KAFKA_URL=kafka://127.0.0.1:9093 KAFKA_CLIENT_CERT=./test/ssl/client.crt KAFKA_CLIENT_CERT_KEY=./test/ssl/client.key node producer.js

Or as text strings:

KAFKA_URL=kafka://127.0.0.1:9093 KAFKA_CLIENT_CERT=`cat ./test/ssl/client.crt` KAFKA_CLIENT_CERT_KEY=`cat ./test/ssl/client.key` node producer.js

Using a self-signed certificate:

var producer = new Kafka.Producer({
  connectionString: 'kafka://127.0.0.1:9093', // should match `listeners` SSL option in Kafka config
  ssl: {
    ca: '/path/to/my-cert.crt' // or fs.readFileSync('my-cert.crt')
  }
});

It is also possible to use the KAFKA_CLIENT_CA environment variable to specify a self-signed SSL certificate:

KAFKA_URL=kafka://127.0.0.1:9093 KAFKA_CLIENT_CA=./test/ssl/my-cert.crt node producer.js

Remapping Broker Addresses

Sometimes the advertised listener addresses for a Kafka cluster may be incorrect from the client's perspective, such as when a Kafka farm is behind NAT or other network infrastructure. In this scenario it is possible to pass a brokerRedirection option to the Producer, SimpleConsumer or GroupConsumer.

The value of brokerRedirection can be either a lookup object mapping advertised connection strings to reachable ones, or a function that takes a host and port and returns the redirected address:
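
A sketch of both forms, with hostnames and ports assumed for illustration (the exact key format for the lookup object should be verified against the library documentation):

// Lookup object: map an advertised broker address to a reachable one
var producer = new Kafka.Producer({
    brokerRedirection: {
        'kafka://internal-host:9092': 'localhost:9093'
    }
});

// Function form: rewrite the advertised host/port programmatically
var consumer = new Kafka.SimpleConsumer({
    brokerRedirection: function (host, port) {
        return {
            host: 'external.example.com',
            port: port
        };
    }
});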

A common scenario for this kind of remapping is when a Kafka cluster exists within a Docker application, and the internally advertised names needed for container to container communication do not correspond to the actual external ports or addresses when connecting externally via other tools.

Reconnection delay

In case of a network error that prevents further operations, no-kafka will try to reconnect to Kafka brokers in an endless loop, with an optionally progressive delay that can be configured with the reconnectionDelay option.
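
A sketch, assuming reconnectionDelay takes the same { min, max } shape as the retry delay shown in the Producer section (values are illustrative):

var consumer = new Kafka.SimpleConsumer({
    reconnectionDelay: {
        min: 1000, // initial delay between reconnection attempts, ms
        max: 5000  // progressive delay grows up to this bound, ms
    }
});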

Logging

You can differentiate log messages from several producer/consumer instances by providing a unique clientId in the options:

var consumer1 = new Kafka.GroupConsumer({
    clientId: 'group-consumer-1'
});
var consumer2 = new Kafka.GroupConsumer({
    clientId: 'group-consumer-2'
});

=>

2016-01-12T07:41:57.884Z INFO group-consumer-1 ....
2016-01-12T07:41:57.884Z INFO group-consumer-2 ....

Change the logging level:

var consumer = new Kafka.GroupConsumer({
    clientId: 'group-consumer',
    logger: {
        logLevel: 1 // 0 - nothing, 1 - just errors, 2 - +warnings, 3 - +info, 4 - +debug, 5 - +trace
    }
});

Send log messages to Logstash server(s) via UDP:

var consumer = new Kafka.GroupConsumer({
    clientId: 'group-consumer',
    logger: {
        logstash: {
            enabled: true,
            connectionString: '10.0.1.1:9999,10.0.1.2:9999',
            app: 'myApp-kafka-consumer'
        }
    }
});

You can overwrite the function that outputs messages to stdout/stderr:

var consumer = new Kafka.GroupConsumer({
    clientId: 'group-consumer',
    logger: {
        logFunction: console.log
    }
});

Topic Creation

There is no Kafka API call to create a topic. Kafka supports auto-creating topics when their metadata is first requested (the auto.create.topics.enable broker option), but the topic is then created with all default parameters, which is useless. There is no way to be notified when the topic has been created, so the library would need to ping the server at some interval; there is also no way to be notified of any error for this operation. For this reason, having no guarantees, no-kafka won't provide a topic creation method until there is a specific Kafka API call to create/manage topics.

License: MIT