rmoff opened 7 years ago
I was seeing similar behaviour using the REST API directly. After restarting Connect Distributed (single node), the connector creation succeeded.
confluent-3.3.0-SNAPSHOT-20170629> ./bin/confluent load jdbc_sink_poem -d ../connector_config/jdbc_sink_poem.json
{
  "name": "jdbc_sink_poem",
  "config": {
    "topics": "dummy_topic",
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "name": "poem_sqlite",
    "connection.url": "jdbc:sqlite:/Users/Robin/cp/confluent-3.3.0-SNAPSHOT-20170629/testdb",
    "auto.create": "true",
    "auto.evolve": "true"
  },
  "tasks": []
}
confluent-3.3.0-SNAPSHOT-20170629> curl "http://localhost:8083/connectors"
["file_poem_tail","jdbc_sink_poem"]
confluent-3.3.0-SNAPSHOT-20170629>
Don't know what was going on here :-( I guess it's more a Connect problem than a Confluent CLI one. Perhaps the CLI could do more stringent checking for success?
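One way the CLI (or a user) could check more stringently is to hit Connect's standard `GET /connectors/<name>/status` endpoint after creation and require both the connector and at least one task to be RUNNING. A minimal sketch (the helper names and host are my own assumptions, not part of the CLI):

```python
import json
from urllib.request import urlopen


def connector_is_healthy(status: dict) -> bool:
    """True only if the connector is RUNNING and has >= 1 RUNNING task.

    `status` is the JSON body returned by GET /connectors/<name>/status.
    """
    connector_running = status.get("connector", {}).get("state") == "RUNNING"
    tasks = status.get("tasks", [])
    return (
        connector_running
        and len(tasks) > 0
        and all(t.get("state") == "RUNNING" for t in tasks)
    )


def check_connector(name: str, host: str = "http://localhost:8083") -> bool:
    """Fetch the status from a Connect worker and apply the health check.

    Assumes a worker is listening on `host` (hypothetical default).
    """
    with urlopen(f"{host}/connectors/{name}/status") as resp:
        return connector_is_healthy(json.load(resp))
```

With a check like this, the "created but zero tasks" situation described below would surface as a failure instead of silent success.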
Not sure if this is related, but I spent the last few days pulling my hair out trying to figure it out. Restarting the cluster had no effect for me, so I'm assuming it's different, but I'm leaving this here in case others run into it.
I was also getting a 201 success response when hitting the connector API, but no connectors and no tasks were created. I turned on the debug logs and saw nothing out of the ordinary either.
Finally got access to the broker's logs and saw:
java.lang.IllegalArgumentException: Magic v1 does not support record headers
🤦‍♂️
Hopefully this saves someone some time!
Edit: Actually! I discovered my error with some help. We are using Datadog, which "injects" headers automatically. Removing the Datadog agent solved the issue...
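For anyone hitting the same error: record headers were only introduced with message format v2 (Kafka 0.11, KIP-82), so producing a record that carries headers (here, injected by the Datadog agent) to a broker or topic still on format v0/v1 fails with exactly that `Magic v1 does not support record headers` exception. A tiny sketch of the version gate:

```python
# Sketch of the compatibility rule behind the broker error above.
# Kafka message-format ("magic") versions: v0 (pre-0.10), v1 (0.10.x),
# v2 (0.11+). Record headers only exist from magic v2 onward (KIP-82).
def magic_supports_headers(magic: int) -> bool:
    """Return True if this message-format version can carry record headers."""
    return magic >= 2
```

Besides removing the header-injecting agent, the usual fix is to raise the broker/topic `log.message.format.version` to 0.11 or later so the topic accepts v2 records.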
One connector (`file_poem_tail`) already created. Trying now to create a second one (`jdbc_sink_poem`). The command emits no error message, but no connector is created. The Connect log just shows an INFO: