confluentinc / confluent-cli

Confluent Platform CLI

Connector creation silently fails #28

Open rmoff opened 7 years ago

rmoff commented 7 years ago

One connector (file_poem_tail) has already been created. Now, trying to create a second one (jdbc_sink_poem), the command emits no error message, but no connector is created.

confluent-3.3.0-SNAPSHOT-20170629> curl "http://localhost:8083/connectors"
["file_poem_tail"]

confluent-3.3.0-SNAPSHOT-20170629>
confluent-3.3.0-SNAPSHOT-20170629> ./bin/confluent load jdbc_sink_poem -d ../connector_config/jdbc_sink_poem.json
confluent-3.3.0-SNAPSHOT-20170629> curl "http://localhost:8083/connectors"
["file_poem_tail"]

confluent-3.3.0-SNAPSHOT-20170629> cat ../connector_config/jdbc_sink_poem.json
{
  "name": "jdbc_sink_poem",
  "config": {
    "topics": "dummy_topic",
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "name": "poem_sqlite",
    "connection.url": "jdbc:sqlite:/Users/Robin/cp/confluent-3.3.0-SNAPSHOT-20170629/testdb",
    "auto.create": true,
    "auto.evolve": true
  }
}

The Connect log just shows INFO entries:

[2017-07-03 11:06:52,617] INFO SinkConnectorConfig values:
        connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
        key.converter = null
        name = poem_sqlite
        tasks.max = 1
        topics = [dummy_topic]
        transforms = null
        value.converter = null
 (org.apache.kafka.connect.runtime.SinkConnectorConfig:223)
[2017-07-03 11:06:52,617] INFO EnrichedConnectorConfig values:
        connector.class = io.confluent.connect.jdbc.JdbcSinkConnector
        key.converter = null
        name = poem_sqlite
        tasks.max = 1
        topics = [dummy_topic]
        transforms = null
        value.converter = null
 (org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig:223)
[2017-07-03 11:07:22,774] INFO 0:0:0:0:0:0:0:1 - - [03/Jul/2017:10:06:52 +0000] "POST /connectors HTTP/1.1" 201 285  30158 (org.apache.kafka.connect.runtime.rest.RestServer:60) 
rmoff commented 7 years ago

I was seeing similar behaviour using the REST API directly. After restarting Connect Distributed (single node), the connector creation succeeded.

confluent-3.3.0-SNAPSHOT-20170629> ./bin/confluent load jdbc_sink_poem -d ../connector_config/jdbc_sink_poem.json
{
  "name": "jdbc_sink_poem",
  "config": {
    "topics": "dummy_topic",
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "name": "poem_sqlite",
    "connection.url": "jdbc:sqlite:/Users/Robin/cp/confluent-3.3.0-SNAPSHOT-20170629/testdb",
    "auto.create": "true",
    "auto.evolve": "true"
  },
  "tasks": []
}
confluent-3.3.0-SNAPSHOT-20170629> curl "http://localhost:8083/connectors"
["file_poem_tail","jdbc_sink_poem"]

confluent-3.3.0-SNAPSHOT-20170629>

I don't know what was going on here :-( I guess it's more a Connect problem than a Confluent CLI one. Perhaps the CLI could do more stringent checking for success?
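The stricter check suggested here can be sketched in shell: after loading, confirm the connector name actually appears in the output of GET /connectors rather than trusting the command's exit status. This is a minimal sketch; the function name `connector_created` is my own, and the response is stubbed with the string the REST API returned above (in real use it would come from `curl -s http://localhost:8083/connectors`).

```shell
# Return 0 if the given connector name appears (quoted) in the JSON array of
# connector names, 1 otherwise. A crude containment check, not a JSON parser.
connector_created() {
  local name="$1" response="$2"
  case "$response" in
    *"\"$name\""*) return 0 ;;
    *) return 1 ;;
  esac
}

response='["file_poem_tail"]'   # what GET /connectors returned in the report
if connector_created "jdbc_sink_poem" "$response"; then
  echo "connector present"
else
  echo "connector missing despite successful load"
fi
# prints: connector missing despite successful load
```

A real implementation would also poll /connectors/<name>/status, since registration can lag the POST slightly.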

drakelee commented 4 years ago

Not sure if this is related, but I spent the last few days pulling my hair out trying to figure this out. Restarting the cluster had no effect for me, so I'm assuming it's a different issue, but I'm leaving this here just in case others run into it.

I was also getting a 201 success response when hitting the connector API, but no connectors and no tasks were created. I turned on the debug logs and saw nothing out of the ordinary there either.

Finally got access to the broker's logs and saw: java.lang.IllegalArgumentException: Magic v1 does not support record headers 🤦‍♂

Hopefully this saves someone some time!

Edit: Actually! I discovered my error with some help. We are using Datadog, which injects headers into messages automatically. Removing the Datadog agent solved the issue...
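For anyone else hitting the same exception: record headers only exist in message format v2 (magic byte 2), introduced in Kafka 0.11. If the broker or topic is pinned to an older message format, any producer that attaches headers (such as Datadog's trace injection) fails with exactly this "Magic v1 does not support record headers" error. A broker-config fragment illustrating the kind of setting to look for; the value shown is an example, not taken from this report:

```properties
# Hypothetical server.properties override that would trigger the error:
# pinning the log message format below 0.11 means records carrying headers
# cannot be written in (or down-converted to) that format.
log.message.format.version=0.10.2
```

The same property can also be set as a per-topic override, so it is worth checking topic configs as well as the broker defaults.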