reidmeyer opened 4 months ago
Hello @reidmeyer, before digging into the issue: we fixed a few probably related issues in 2.5.11.Final. Do you mind giving it a try? (Sorry for the delay, I've been on PTO.)
@carlesarnal, will do! I will get back to you next week after I try again.
@carlesarnal, I'm still getting the same error :(
I also tested with 2.6.1.Final
```
2024-07-23 11:48:02 INFO <_> [io.apicurio.common.apps.logging.audit.AuditLogService] (executor-thread-2) apicurio.audit action="register" result="success" src_ip="100.80.227.245" x_forwarded_for="172.26.140.31" artifact_id="tst-kpn-des--reid-magic-byte-avro-value"
2024-07-23T11:49:00.959586192Z 2024-07-23 11:49:00 INFO <_> [io.apicurio.common.apps.logging.audit.AuditLogService] (executor-thread-2) apicurio.audit action="register" result="success" src_ip="100.80.227.245" x_forwarded_for="172.26.140.31" artifact_id="tst-kpn-des--reid-magic-byte-avro-value"
2024-07-23T11:49:08.165792053Z 2024-07-23 11:49:08 INFO <_> [io.apicurio.common.apps.logging.audit.AuditLogService] (executor-thread-2) apicurio.audit action="request" result="failure" src_ip="100.82.94.88" x_forwarded_for="100.81.129.119" path="/apis/ccompat/v7/subjects/tst-kpn-des--reid-magic-byte-avro-value" response_code="404" method="POST" user=""
```
```
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:230)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:156)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:528)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:505)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:341)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:242)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:211)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:204)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:259)
    at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic tst-kpn-des--reid-magic-byte-avro to Avro:
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:148)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:528)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
    ... 14 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error retrieving Avro value schema version for id 1
    at io.confluent.kafka.serializers.AbstractKafkaSchemaSerDe.toKafkaException(AbstractKafkaSchemaSerDe.java:805)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.schemaVersion(AbstractKafkaAvroDeserializer.java:222)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaAvroDeserializer.java:269)
    at io.confluent.connect.avro.AvroConverter$Deserializer.deserialize(AvroConverter.java:199)
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:126)
    ... 17 more
Caused by: io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException: The given schema does not match any schema under the subject tst-kpn-des--reid-magic-byte-avro-value; error code: 40403
    at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:336)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:409)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:500)
    at io.confluent.kafka.schemaregistry.client.rest.RestService.lookUpSubjectVersion(RestService.java:485)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getVersionFromRegistry(CachedSchemaRegistryClient.java:353)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getVersion(CachedSchemaRegistryClient.java:609)
    at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getVersion(CachedSchemaRegistryClient.java:589)
    at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.schemaVersion(AbstractKafkaAvroDeserializer.java:204)
    ... 20 more
```
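The failing request in the audit log above is the version lookup the Confluent deserializer performs (`lookUpSubjectVersion`, i.e. `POST /subjects/{subject}`). For reference, here is a minimal sketch of that call outside Connect (the hostname and schema are placeholders, not my real ones):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CcompatLookup {
    public static void main(String[] args) throws Exception {
        // Payload format for the Confluent "check if schema is registered" call:
        // POST /subjects/{subject} with {"schema": "<schema JSON as a string>"}
        String body = "{\"schema\": \"{\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"Example\\\","
                + "\\\"fields\\\":[{\\\"name\\\":\\\"id\\\",\\\"type\\\":\\\"string\\\"}]}\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://myapicurio/apis/ccompat/v7/subjects/tst-kpn-des--reid-magic-byte-avro-value"))
                .header("Content-Type", "application/vnd.schemaregistry.v1+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 -> the registry found a matching version;
        // 404 with error code 40403 reproduces the failure above
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```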
My Kafka Connect spec:

```yaml
class: io.confluent.connect.jdbc.JdbcSinkConnector
tasksMax: 1
config:
  topics: tst-kpn-des--reid-magic-byte-avro
  value.converter.schemas.enable: true
  value.converter: io.confluent.connect.avro.AvroConverter
  value.converter.schema.registry.url: myapicurio/apis/ccompat/v7
  connection.url: myurl
  connection.user: ${env:POSTGRES_EXAMPLE_USERNAME}
  connection.password: ${env:POSTGRES_EXAMPLE_PASSWORD}
  auto.create: true
```
Description
Registry Version: 2.5.9.Final
Persistence type: in-memory
I'm using Kafka Connect, and I'm trying to use the Confluent-compatible API (ccompat) from Apicurio. I expect it to work with the Confluent Avro converter.
But I'm getting an error from the Confluent library like:
The given schema does not match any schema under the subject tst-kpn-des--reid-magic-byte-avro-key; error code: 40403
It's possible there is a bug in the Confluent library, but I'm posting here for some direction. I have a value and a key schema on Apicurio with content/global IDs of 1 and 2.
I was thinking maybe something canonicalization-related is going on? My next step was to look into the default setting for that within Apicurio.
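By "canonical" I mean Avro's parsing canonical form: two textually different schemas normalize to the same string, so a registry that matches on canonical form and one that matches on raw content can disagree about whether a schema already exists. A quick illustration (made-up schema, using Avro's `SchemaNormalization`):

```java
import org.apache.avro.Schema;
import org.apache.avro.SchemaNormalization;

public class CanonicalFormDemo {
    public static void main(String[] args) {
        // Two textually different but semantically identical schemas
        String pretty = "{ \"type\": \"record\", \"name\": \"Example\", \"doc\": \"demo\","
                + " \"fields\": [ { \"name\": \"id\", \"type\": \"string\" } ] }";
        String compact = "{\"type\":\"record\",\"name\":\"Example\","
                + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}";

        // Parsing canonical form strips whitespace, doc attributes, etc.,
        // so both variants normalize to the same string
        System.out.println(SchemaNormalization.toParsingForm(new Schema.Parser().parse(pretty)));
        System.out.println(SchemaNormalization.toParsingForm(new Schema.Parser().parse(compact)));
    }
}
```

If I understand the docs correctly, Apicurio 2.x has an `apicurio.ccompat.use-canonical-hash` option that controls this behaviour for the ccompat API, but please correct me if that's the wrong knob.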
My config is the Kafka Connect spec shown above.
I produce messages onto my topic with a standard Confluent Avro producer, roughly like this simplified sketch (the schema, key, and hosts are placeholders, not my exact code):
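```java
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import io.confluent.kafka.serializers.KafkaAvroSerializer;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-kafka:9092"); // placeholder
        props.put("key.serializer", KafkaAvroSerializer.class.getName());
        props.put("value.serializer", KafkaAvroSerializer.class.getName());
        // Same ccompat endpoint the sink connector uses
        props.put("schema.registry.url", "myapicurio/apis/ccompat/v7");

        Schema valueSchema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Example\","
                + "\"fields\":[{\"name\":\"id\",\"type\":\"string\"}]}");
        GenericRecord value = new GenericData.Record(valueSchema);
        value.put("id", "42");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            // The serializer auto-registers the key and value schemas, which matches
            // the action="register" result="success" entries in the audit log above
            producer.send(new ProducerRecord<>("tst-kpn-des--reid-magic-byte-avro", "key", value));
        }
    }
}
```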
Environment
Kubernetes: v1.26.15
Kafka Connect (Strimzi): 3.7.0
Confluent Avro converter: 7.6.1
Confluent JDBC sink: 10.7.6
Steps to Reproduce
See the Kafka Connect spec and producer sketch above; consuming the produced messages with the JDBC sink connector triggers the error.
Expected vs Actual Behaviour
I expect it to successfully fetch the schema; instead the version lookup fails with a 404 (error code 40403, see the logs above). Totally possible I'm doing something very wrong. Hoping for some guidance.
Logs
See the audit log entries and the Connect stack trace above.