The connector can be used to load data in both directions: from MongoDB to Kafka (source connector) and from Kafka to MongoDB (sink connector).
You can build the connector with Maven using the standard lifecycle phases:
mvn clean
mvn package
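After packaging, the resulting JAR has to be placed on the Kafka Connect worker's classpath. A minimal sketch, assuming the standard Maven target/ layout and a local Kafka installation (the JAR name and paths are illustrative):

cp target/kafka-connect-mongodb-*.jar $KAFKA_HOME/libs/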
When the connector is run as a Source Connector, it reads data from the MongoDB oplog and publishes it to Kafka. Three different types of messages are read from the oplog: insert, update, and delete.
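For illustration, an insert operation appears in the oplog roughly as follows (simplified sketch with made-up values): ts is the operation timestamp, op the operation type, ns the database.collection namespace, and o the inserted document:

{
    "ts": { "$timestamp": { "t": 1514764800, "i": 1 } },
    "op": "i",
    "ns": "mydb.test1",
    "o": { "_id": 1, "name": "alice" }
}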
For every message, a SourceRecord is created with the following schema:
{
    "type": "record",
    "name": "schemaname",
    "fields": [
        { "name": "timestamp", "type": ["null", "int"] },
        { "name": "order", "type": ["null", "int"] },
        { "name": "operation", "type": ["null", "string"] },
        { "name": "database", "type": ["null", "string"] },
        { "name": "object", "type": ["null", "string"] }
    ],
    "connect.name": "stillmongotesting"
}
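As an illustrative sketch (the field values below are made up, not taken from the connector's output), the record produced for the insert above could carry a value along these lines, with the object field holding the document serialized as a string:

{
    "timestamp": 1514764800,
    "order": 1,
    "operation": "i",
    "database": "mydb.test1",
    "object": "{\"_id\": 1, \"name\": \"alice\"}"
}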
A sample configuration for the source connector:

name=mongodb-source-connector
connector.class=org.apache.kafka.connect.mongodb.MongodbSourceConnector
tasks.max=1
uri=mongodb://127.0.0.1:27017
batch.size=100
schema.name=mongodbschema
topic.prefix=optionalprefix
databases=mydb.test1,mydb.test2,mydb.test3
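To try the source connector, the properties above can be saved to a file and passed to a standalone Kafka Connect worker. A minimal sketch, assuming a local Kafka installation (the file names are illustrative):

$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties mongodb-source.properties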
The schema of each record is named {schema.name}_{database}_{collection}, and records are published to a Kafka topic named {topic.prefix}_{database}_{collection}. For example, with the configuration above, documents from mydb.test1 are published to the topic optionalprefix_mydb_test1 with schema mongodbschema_mydb_test1.
A struct converter can be selected with the converter.class property, e.g. org.apache.kafka.connect.mongodb.converter.JsonStructConverter, but for backward compatibility the default is org.apache.kafka.connect.mongodb.converter.StringStructConverter.
When the connector is run as a Sink Connector, it retrieves messages from Kafka and writes them to MongoDB collections. The structure of the written document is derived from the schema of the messages.
A sample configuration for the sink connector:

name=mongodb-sink-connector
connector.class=org.apache.kafka.connect.mongodb.MongodbSinkConnector
tasks.max=1
uri=mongodb://127.0.0.1:27017
bulk.size=100
mongodb.database=databasetest
mongodb.collections=mydb_test1,mydb_test2,mydb_test3
converter.class=org.apache.kafka.connect.mongodb.converter.JsonStructConverter
topics=optionalprefix_mydb_test1,optionalprefix_mydb_test2,optionalprefix_mydb_test3
The number of topics and the number of collections must be the same: messages from each topic are written to the collection at the same position in the mongodb.collections list.
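As with the source, a minimal sketch for running the sink connector with a standalone worker (the file names are illustrative):

$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties mongodb-sink.properties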