
Aiven's S3 Sink Connector for Apache Kafka

[!IMPORTANT]
The Aiven S3 Connector for Apache Kafka development has been moved to https://github.com/Aiven-Open/commons-for-apache-kafka-connect/


This is a sink Apache Kafka Connect connector that stores Apache Kafka messages in an AWS S3 bucket.


How it works

The connector subscribes to the specified Kafka topics, collects the incoming messages, and periodically writes the collected data to the specified bucket in AWS S3.

Requirements

The connector requires Java 11 or newer for development and production.

Authorization

The connector needs the following permissions on the specified bucket:

In case of an Access Denied error, see https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/

Authentication

To make the connector work, a user has to specify AWS credentials that allow writing to S3. There are three ways to specify AWS credentials in this connector:

1) Long term credentials.

   It requires both `aws.access.key.id` and `aws.secret.access.key` to be specified.

2) Short term credentials.

   The connector will request a temporary token from the AWS STS service and assume a role from another AWS account. It requires `aws.sts.role.arn` and `aws.sts.role.session.name` to be specified.

3) Default provider chain or custom provider.

   If you prefer to use the AWS default provider chain, you can leave both {`aws.access.key.id`, `aws.secret.access.key`} and {`aws.sts.role.arn`, `aws.sts.role.session.name`} blank. If you prefer to build your own custom provider, pass the custom provider class as a parameter to `aws.credential.provider`.

Do not use options 1 and 2 simultaneously. When using option 2, it is recommended to specify the S3 bucket region in `aws.s3.region` and the corresponding AWS STS endpoint in `aws.sts.config.endpoint`; it is better to specify both or neither. It is also important to specify `aws.sts.role.external.id` for security reasons (see some details here).
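
For illustration, here is a minimal sketch of the first two credential options as connector properties; all values are placeholders, and the STS endpoint shown is just one possible regional endpoint:

# Option 1: long term credentials
aws.access.key.id=YOUR_AWS_KEY_ID
aws.secret.access.key=YOUR_AWS_SECRET_ACCESS_KEY

# Option 2: short term credentials via AWS STS (do not combine with option 1)
#aws.sts.role.arn=arn:aws:iam::123456789012:role/example-connector-role
#aws.sts.role.session.name=example-session
#aws.sts.role.external.id=example-external-id
#aws.s3.region=us-east-1
#aws.sts.config.endpoint=https://sts.us-east-1.amazonaws.com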

File name format

File name format is tightly related to Record Grouping

The connector uses the following format for output files (blobs): <prefix><filename>.

<prefix> is an optional prefix that can be used, for example, for subdirectories in the bucket. <filename> is the file name. The connector has a configurable template for file names, set via the configuration property `file.name.template`. If it is not set, the default template is used: `{{topic}}-{{partition}}-{{start_offset}}`

It supports placeholders with variable names: {{ variable_name }}. The currently supported variables are `topic`, `partition`, `start_offset`, `timestamp`, and `key` (the latter only in the key-based grouping mode described below).

To add zero padding to Kafka offsets, add the additional parameter `padding` to the `start_offset` variable; its value can be `true` or `false` (the default). For example: {{topic}}-{{partition}}-{{start_offset:padding=true}}.gz will produce file names like mytopic-1-00000000000000000001.gz.

To add zero padding to the partition number, add the additional parameter `padding` to the `partition` variable; its value can be `true` or `false` (the default). For example: {{topic}}-{{partition:padding=true}}-{{start_offset}}.gz will produce file names like mytopic-0000000001-1.gz.

To add formatted timestamps, use the `timestamp` variable. For example: {{topic}}-{{partition}}-{{start_offset}}-{{timestamp:unit=yyyy}}{{timestamp:unit=MM}}{{timestamp:unit=dd}}.gz will produce file names like mytopic-2-1-20200301.gz.

To configure the time zone for the `timestamp` variable, use the `file.name.timestamp.timezone` property. Please see the description of the properties in the "Configuration" section.
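
As a sketch, the padding and timestamp placeholders described above can be combined in a single template; the prefix, time zone, and compression choice here are illustrative:

file.name.template=logs/{{topic}}-{{partition:padding=true}}-{{start_offset:padding=true}}-{{timestamp:unit=yyyy}}{{timestamp:unit=MM}}{{timestamp:unit=dd}}.gz
file.name.timestamp.timezone=Europe/Berlin
file.compression.type=gzip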

Only certain combinations of variables and parameters are allowed in the file name template (however, variables in a template can be in any order). Each combination determines the record grouping mode the connector will use. The currently supported combinations of variables and the corresponding record grouping modes are:

See record grouping in the next section for more details.

If the file name template is not specified, the default value is {{topic}}-{{partition}}-{{start_offset}} (+ .gz when compression is enabled).

Record grouping

Incoming records are grouped until they are flushed. The connector flushes grouped records into one file per `offset.flush.interval.ms` for partitions that have received new messages during this period. The setting defaults to 60 seconds.
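
Note that offset.flush.interval.ms is a Kafka Connect worker setting rather than a connector property; a minimal sketch of lowering it in the worker configuration (the 10-second value is purely illustrative):

# connect-worker.properties (worker configuration, not the connector configuration)
offset.flush.interval.ms=10000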

Record grouping, similar to Kafka topics, has two modes: grouping by the topic and partition (Changelog) and grouping by the key (Compact). Both are described below.

Modes are selected implicitly by the variables used in the file name template.

Grouping by the topic and partition

Mode: Changelog

In this mode, the connector groups records by the topic and partition. When a file is written, an offset of the first record in it is added to its name.

For example, let's say the template is {{topic}}-part{{partition}}-off{{start_offset}}. If the connector receives records like

topic:topicB partition:0 offset:0
topic:topicA partition:0 offset:0
topic:topicA partition:0 offset:1
topic:topicB partition:0 offset:1
flush

there will be two files topicA-part0-off0 and topicB-part0-off0 with two records in each.

Each flush produces a new set of files. For example:

topic:topicA partition:0 offset:0
topic:topicA partition:0 offset:1
flush
topic:topicA partition:0 offset:2
topic:topicA partition:0 offset:3
flush

In this case, there will be two files topicA-part0-off0 and topicA-part0-off2 with two records in each.

Grouping by the key

Mode: Compact

In this mode, the connector groups records by the Kafka key. For each key, it puts exactly one record in a file: the latest record that arrived before the flush. If new records with the same keys arrive later, the existing files are overwritten.

This mode is good for maintaining the latest value per key as a file on S3.

Let's say the template is k{{key}}. For example, when the following records arrive

key:0 value:0
key:1 value:1
key:0 value:2
key:1 value:3
flush

there will be two files k0 (containing value 2) and k1 (containing value 3).

After a flush, previously written files might be overwritten:

key:0 value:0
key:1 value:1
key:0 value:2
key:1 value:3
flush
key:0 value:4
flush

In this case, there will be two files k0 (containing value 4) and k1 (containing value 3).

The string representation of a key

The connector in this mode uses the following algorithm to create the string representation of a key:

  1. If key is null, the string value is "null" (i.e., string literal null).
  2. If key schema type is STRING, it's used directly.
  3. Otherwise, Java .toString() is applied.

If the keys of your records are strings, you may want to use org.apache.kafka.connect.storage.StringConverter as key.converter.
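
A minimal sketch of a key-grouped (Compact mode) configuration, combining the template from this section with the converter recommended above:

# group records by key; the "k" prefix is just the example used in this section
file.name.template=k{{key}}
key.converter=org.apache.kafka.connect.storage.StringConverter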

Warning: Single key in different partitions

The group by key mode primarily targets scenarios where each key appears in one partition only. If the same key appears in multiple partitions, the result may be unexpected.

For example:

topic:topicA partition:0 key:x value:aaa
topic:topicA partition:1 key:x value:bbb
flush

file kx may contain aaa or bbb, i.e. the behavior is non-deterministic.

Data Format

Connector class name, in this case: io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector.

S3 Object Names

The S3 connector stores a series of files in the specified bucket. Each object is named using the pattern [<aws.s3.prefix>]<topic>-<partition>-<startoffset>[.gz] (see the [File name format](#file-name-format) section for more patterns). The .gz extension is used if gzip compression is enabled; see file.compression.type below.

Data File Format

Output files are text files that contain one record per line (i.e., records are separated by \n), except for the Parquet format.

There are four supported data formats: CSV, JSON lines (JSONL), JSON, and Parquet. The format is selected with format.output.type (see the "Configuration" section).

The connector can output the following fields from records: the key, the value, the timestamp, the offset, and the headers. The set and the order of these output fields are configurable. The field values are separated by a comma.

CSV Format example

The key and the value—if they're output—are stored as binaries encoded in Base64.

For example, if we output key,value,offset,timestamp, a record line might look like:

a2V5,TG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQ=,1232155,1554210895

It is possible to control the encoding of the value field by setting format.output.fields.value.encoding to base64 or none.

If the key, the value or the timestamp is null, an empty string will be output instead:

,,,1554210895

format.output.fields is a comma-separated list of fields to include in the output. The supported values are: key, offset, timestamp, headers, and value. It defaults to value.
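
A minimal sketch of a CSV output configuration using the properties described above; the field list and the value encoding are illustrative choices:

format.output.type=csv
format.output.fields=key,value,offset,timestamp
# store the value as-is instead of Base64-encoded ("base64" is the other accepted value)
format.output.fields.value.encoding=none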


JSONL Format example

For example, if we output key,value,offset,timestamp, a record line might look like:

 { "key": "k1", "value": "v0", "offset": 1232155, "timestamp":"2020-01-01T00:00:01Z" }

OR

  { "key": "user1", "value": {"name": "John", "address": {"city": "London"}}, "offset": 1232155, "timestamp":"2020-01-01T00:00:01Z" }

It is recommended to use org.apache.kafka.connect.storage.StringConverter or org.apache.kafka.connect.json.JsonConverter as key.converter and/or value.converter to make output files human-readable.
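
For instance, a JSONL output could be configured as follows, mirroring the converter settings shown in the "Configuration" section below:

format.output.type=jsonl
format.output.fields=key,value,offset,timestamp
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false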


JSON Format example

For example, if we output key,value,offset,timestamp, an output file might look like:

[
  { "key": "k1", "value": "v0", "offset": 1232155, "timestamp":"2020-01-01T00:00:01Z" }, 
  { "key": "k2", "value": "v1", "offset": 1232156, "timestamp":"2020-01-01T00:00:05Z" }
]

OR

[
  { "key": "user1", "value": {"name": "John", "address": {"city": "London"}}, "offset": 1232155, "timestamp":"2020-01-01T00:00:01Z" }
]

It is recommended to use org.apache.kafka.connect.storage.StringConverter or org.apache.kafka.connect.json.JsonConverter as key.converter and/or value.converter to make output files human-readable.
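
A corresponding configuration could simply switch the format type relative to the JSONL sketch above, for example:

format.output.type=json
format.output.fields=key,value,offset,timestamp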


For both JSON and JSONL, another example is a single-field output, e.g. value; a record line might then look like:

{ "value": "v0" }

OR

{ "value": {"name": "John", "address": {"city": "London"}} }

In this case, it sometimes makes sense to get rid of the additional JSON object wrapping the actual value by using format.output.envelope. Setting format.output.envelope=false can produce the following output:

"v0"

OR

{"name": "John", "address": {"city": "London"}}

Parquet format example

For example, if we output key,offset,timestamp,headers,value, an output Parquet schema might look like this:

{
    "type": "record", "fields": [
      {"name": "key", "type": "RecordKeySchema"},
      {"name": "offset", "type": "long"},
      {"name": "timestamp", "type": "long"},
      {"name": "headers", "type": "map"},
      {"name": "value", "type": "RecordValueSchema"}
  ]
}

where RecordKeySchema is the key schema and RecordValueSchema is the record value schema. This means that if you have a key schema and a record schema like the following:

Key schema:

{
  "type": "string"
}

Record schema:

{
    "type": "record", "fields": [
      {"name": "foo", "type": "string"},
      {"name": "bar", "type": "long"}
  ]
}

the final Avro schema for Parquet is:

{
    "type": "record", "fields": [
      {"name": "key", "type": "string"},
      {"name": "offset", "type": "long"},
      {"name": "timestamp", "type": "long"},
      {"name": "headers", "type": "map", "values": "long"},
      { "name": "value", 
        "type": "record", 
        "fields": [
          {"name": "foo", "type": "string"},
          {"name": "bar", "type": "long"}
        ]
      }
  ]
}

For a single-field output, e.g. value, a record might look like:

{ "value": {"name": "John", "address": {"city": "London"}} }

In this case, it sometimes makes sense to get rid of the additional JSON object wrapping the actual value by using format.output.envelope. Setting format.output.envelope=false can produce the following output:

{"name": "John", "address": {"city": "London"}}


Usage

Connector Configuration

Important note: since version 2.6, all existing configuration parameters are deprecated and will be replaced with new ones over a transition period (within 2-3 releases).

List of deprecated configuration parameters:

List of new configuration parameters:

Configuration

Here you can read about the Connect workers configuration and here, about the connector Configuration.

Here is an example connector configuration with descriptions:

### Standard connector configuration

## Fill in your values in these:

## These must have exactly these values:

# The Java class for the connector
connector.class=io.aiven.kafka.connect.s3.AivenKafkaConnectS3SinkConnector

# The key converter for this connector
key.converter=org.apache.kafka.connect.storage.StringConverter

# The value converter for this connector
value.converter=org.apache.kafka.connect.json.JsonConverter

# Identify whether the value contains a schema.
# This setting requires the value converter to be `org.apache.kafka.connect.json.JsonConverter`.
value.converter.schemas.enable=false

# The type of data format used to write data to the S3 output files.
# The supported values are: `csv`, `json`, `jsonl` and `parquet`.
# Optional, the default is `csv`.
format.output.type=jsonl

# A comma-separated list of topics to use as input for this connector
# Also a regular expression version `topics.regex` is supported.
# See https://kafka.apache.org/documentation/#connect_configuring
topics=topic1,topic2

### Connector-specific configuration
### Fill in your values
# AWS Access Key ID
aws.access.key.id=YOUR_AWS_KEY_ID

# AWS Access Secret Key
aws.secret.access.key=YOUR_AWS_SECRET_ACCESS_KEY

# AWS Region
aws.s3.region=us-east-1

# File name template
file.name.template=dir1/dir2/{{topic}}-{{partition:padding=true}}-{{start_offset:padding=true}}.gz

# The name of the S3 bucket to use
# Required.
aws.s3.bucket.name=my-bucket

# The set of the fields that are to be output, comma separated.
# Supported values are: `key`, `value`, `offset`, `timestamp` and `headers`.
# Optional, the default is `value`.
format.output.fields=key,value,offset,timestamp

# The option to enable/disable wrapping of plain values into an additional JSON object (aka envelope).
# Optional, the default value is `true`.
format.output.envelope=true

# The compression type used for files put on S3.
# The supported values are: `gzip`, `snappy`, `zstd`, `none`.
# Optional, the default is `none`.
file.compression.type=gzip

# The time zone in which timestamps are represented.
# Accepts short and long standard names like: `UTC`, `PST`, `ECT`,
# `Europe/Berlin`, `Europe/Helsinki`, or `America/New_York`. 
# For more information please refer to https://docs.oracle.com/javase/tutorial/datetime/iso/timezones.html.
# The default is `UTC`.
timestamp.timezone=Europe/Berlin

# The source of timestamps.
# Supports only `wallclock` which is the default value.
timestamp.source=wallclock

S3 multi-part uploads

To configure the buffer size for S3 multi-part uploads, change:

Retry strategy configuration

There are four configuration properties for configuring the retry strategy.

Apache Kafka Connect retry strategy configuration property

AWS S3 retry strategy configuration properties

AWS S3 server side encryption properties

Development

Developing together with Commons library

This project depends on the Common Module for Apache Kafka Connect. Normally, an artifact of it published to a globally accessible repository is used. However, if you need to introduce changes to both this connector and the Commons library at the same time, you should short-circuit the development loop via locally published artifacts. Please follow these steps:

  1. Checkout the main HEAD of Commons.
  2. Ensure the version there has a -SNAPSHOT suffix.
  3. Make changes to Commons.
  4. Publish it locally with ./gradlew publishToMavenLocal.
  5. Change the version in the connector's build.gradle (ext.aivenConnectCommonsVersion) to match the published snapshot version of Commons.

After that, the latest changes you've done to Commons will be used.

When you have finished developing the feature and are sure Commons won't need to change:

  1. Make a proper release of Commons.
  2. Publish the artifact to the currently used globally accessible repository.
  3. Change the version of Commons in the connector to the published one.

Integration testing

Integration tests are implemented using JUnit, Gradle and Docker.

To run them, you need:

Integration testing doesn't require valid AWS credentials.

To simulate AWS S3 behaviour, tests use LocalStack.

In order to run the integration tests, execute from the project root directory:

./gradlew clean integrationTest

License

This project is licensed under the Apache License, Version 2.0.

Trademarks

Apache Kafka and Apache Kafka Connect are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. AWS S3 is a trademark and property of its respective owner. All product and service names used in this website are for identification purposes only and do not imply endorsement.