Open authorjapps opened 5 years ago
Can you explain clearly how these records are being produced onto the topic? I can't see this from your ticket so far. Thanks.
Just wondering, mate, how it is relevant here. The record was produced by the Java client, and the console consumer displays it.
Anyway, attached is a screenshot of the producer's runtime debug output.
The Java producer runtime log is here:
---------------------------------------------------------
kafka.bootstrap.servers - localhost:9092
---------------------------------------------------------
2019-01-24 00:04:37,558 [main] INFO org.jsmart.zerocode.core.kafka.client.BasicKafkaClient - brokers:localhost:9092, topicName:demo-ksql, operation:produce, requestJson:{"records":[{"key":"1548288277538","value":"Hello Created for KSQL demo"}]}
2019-01-24 00:04:37,600 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.dns.lookup = default
client.id = zerocode-producer
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.StringSerializer
2019-01-24 00:04:37,781 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version : 2.1.0
2019-01-24 00:04:37,782 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId : eec43959745f444f
2019-01-24 00:04:37,846 [main] WARN org.jsmart.zerocode.core.kafka.helper.KafkaProducerHelper - Could not find path '$.recordType' in the request. returned default type 'RAW'.
2019-01-24 00:04:37,856 [main] INFO org.jsmart.zerocode.core.kafka.send.KafkaSender - Sending record number: 0
2019-01-24 00:04:50,361 [main] INFO org.jsmart.zerocode.core.kafka.send.KafkaSender - Synchronous Producer sending record - ProducerRecord(topic=demo-ksql, partition=null, headers=RecordHeaders(headers = [], isReadOnly = false), key=1548288277538, value=Hello Created for KSQL demo, timestamp=null)
2019-01-24 00:08:41,772 [kafka-producer-network-thread | zerocode-producer] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=zerocode-producer] Error while fetching metadata with correlation id 1 : {demo-ksql=LEADER_NOT_AVAILABLE}
2019-01-24 00:08:41,773 [kafka-producer-network-thread | zerocode-producer] INFO org.apache.kafka.clients.Metadata - Cluster ID: 1pdIIv-xTbezDU-kUE6eHA
2019-01-24 00:08:41,912 [main] INFO org.jsmart.zerocode.core.kafka.send.KafkaSender - Record was sent to partition- 0, with offset- 0
2019-01-24 00:08:41,918 [main] INFO org.jsmart.zerocode.core.kafka.send.KafkaSender - deliveryDetails- {"status":"Ok","recordMetadata":{"offset":0,"timestamp":1548288521888,"serializedKeySize":13,"serializedValueSize":27,"topicPartition":{"hash":749715182,"partition":0,"topic":"demo-ksql"}}}
2019-01-24 00:08:41,919 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=zerocode-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2019-01-24 00:08:41,933 [main] INFO org.jsmart.zerocode.core.runner.StepNotificationHandler -
***Step PASSED:Print Topic Records via KSQL query->load_kafka
2019-01-24 00:08:41,936 [main] INFO org.jsmart.zerocode.core.runner.ZeroCodeMultiStepsScenarioRunnerImpl -
--------- TEST-STEP-CORRELATION-ID: c874d242-d68b-4699-a608-fb86a13976c0 ---------
*requestTimeStamp:2019-01-24T00:04:37.549
step:load_kafka
url:kafka-topic:demo-ksql
method:produce
request:
{
"records" : [ {
"key" : "1548288277538",
"value" : "Hello Created for KSQL demo"
} ]
}
--------- TEST-STEP-CORRELATION-ID: c874d242-d68b-4699-a608-fb86a13976c0 ---------
Response:
{
"status" : "Ok",
"recordMetadata" : {
"offset" : 0,
"timestamp" : 1548288521888,
"serializedKeySize" : 13,
"serializedValueSize" : 27,
"topicPartition" : {
"hash" : 749715182,
"partition" : 0,
"topic" : "demo-ksql"
}
}
}
*responseTimeStamp:2019-01-24T00:08:41.927
*Response delay:244378.0 milli-secs
---------> Assertion: <----------
{
"status" : "Ok"
}
-done-
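As a quick sanity check on the deliveryDetails above: with `StringSerializer` configured for both key and value, `serializedKeySize` and `serializedValueSize` are simply the UTF-8 byte lengths of the key and value strings. A minimal sketch using only the JDK (no Kafka dependency):

```java
import java.nio.charset.StandardCharsets;

public class SerializedSizeCheck {
    // StringSerializer writes a String as its UTF-8 bytes, so the
    // serialized size is just the UTF-8 byte length.
    static int utf8Size(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        // Key and value from the step report above.
        System.out.println(utf8Size("1548288277538"));               // serializedKeySize: 13
        System.out.println(utf8Size("Hello Created for KSQL demo")); // serializedValueSize: 27
    }
}
```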
This time there is no comma in the record (I removed the comma), i.e.
"Hello Created for KSQL demo"
Earlier it was
"Hello, Created for KSQL demo"
and the ksql-cli now shows the record.
ksql> print 'demo-ksql' from beginning;
Format:STRING
1/24/19 12:08:41 AM UTC , 1548288277538 , Hello Created for KSQL demo
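Incidentally, the key `1548288277538` is itself an epoch-millis timestamp taken at request time, and the record timestamp `1548288521888` from deliveryDetails matches the `12:08:41 AM UTC` shown by `print`. Both can be checked with plain JDK time classes:

```java
import java.time.Instant;

public class TimestampCheck {
    public static void main(String[] args) {
        // Key of the produced record: epoch millis at request time.
        System.out.println(Instant.ofEpochMilli(1548288277538L)); // 2019-01-24T00:04:37.538Z
        // Record timestamp from deliveryDetails / the print output above.
        System.out.println(Instant.ofEpochMilli(1548288521888L)); // 2019-01-24T00:08:41.888Z
    }
}
```

The gap between the two (about 244 seconds) also lines up with the ~244378 ms response delay in the step report.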
So far so good 👍.
Over the REST call for the same command, see the screenshot below. The REST client stays on Loading... and doesn't return anything.
Thanks for the detailed message @authorjapps . This is strange indeed, especially the part about removing the comma causing the right serde to be chosen by print. Which version of KSQL are you using?
Here is the code that selects the format for 'print' on master: https://github.com/confluentinc/ksql/blob/master/ksql-rest-app/src/main/java/io/confluent/ksql/rest/server/resources/streaming/TopicStream.java
The presence of a comma in the string should not matter: the formatter will try to deserialize the value of the message as an avro record, and only choose the avro formatter if the deserialization succeeded.
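That selection strategy can be illustrated with a simplified sketch (this is not KSQL's actual `TopicStream` code; the `Formatter` interface and both formatters here are hypothetical): probe each candidate formatter with the first record and keep the first one whose deserialization succeeds, with a plain string formatter as the fallback that never fails.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

public class FormatDetection {
    // Hypothetical formatter interface, for illustration only.
    interface Formatter {
        String name();
        String format(byte[] value) throws Exception; // throws if the bytes don't match
    }

    // A strict formatter standing in for Avro: here it arbitrarily requires
    // a leading 0x00 byte (like the Confluent wire format's magic byte).
    static final Formatter AVRO_LIKE = new Formatter() {
        public String name() { return "AVRO"; }
        public String format(byte[] v) throws Exception {
            if (v.length == 0 || v[0] != 0x00) throw new Exception("not avro");
            return "<avro record>";
        }
    };

    // The fallback: any byte sequence is a valid string.
    static final Formatter STRING = new Formatter() {
        public String name() { return "STRING"; }
        public String format(byte[] v) { return new String(v, StandardCharsets.UTF_8); }
    };

    // Probe candidates in order with the first record; keep the first match.
    static Formatter detect(byte[] firstValue, List<Formatter> candidates) {
        for (Formatter f : candidates) {
            try {
                f.format(firstValue);
                return f;
            } catch (Exception ignored) {
                // deserialization failed; try the next formatter
            }
        }
        throw new IllegalStateException("no formatter matched");
    }

    public static void main(String[] args) {
        byte[] value = "Hello Created for KSQL demo".getBytes(StandardCharsets.UTF_8);
        System.out.println(detect(value, List.of(AVRO_LIKE, STRING)).name()); // STRING
    }
}
```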
Are you sure that the topic didn't have a mix of string and avro records?
Thanks @apurvam, and thanks for sharing the source code 👍. Looks interesting. That opens up a couple more scenarios for us to handle in our framework.
Are you sure that the topic didn't have a mix of string and avro records?
Ans - I have tested these cases:
1) mix of string and avro records <---- Definitely doesn't work. Cast error already explained
2) only string records <----- works fine now 👍
3) only avro records <---- Will confirm soon (I vaguely remember seeing issues)
1)
Producing with a comma (using the Avro SerDe):
"records" : [ { "key" : "1550409362841", "value" : "Hello, Created for KSQL demo(avro)" } ]
Producing was a success:
root@b2f5eb2568a8:/# kafka-console-consumer --bootstrap-server kafka:29092 --topic demo-ksql-avro-1 --from-beginning
DHello, Created for KSQL demo(avro)
But:
ksql> print 'demo-ksql-avro-1' from beginning;
java.lang.String cannot be cast to org.apache.avro.generic.GenericRecord
ksql>
2)
Producing without a comma (using the Avro SerDe):
"records" : [ { "key" : "1550409362842", "value" : "Hello Created for KSQL demo(avro)" } ]
Producing was a success (see below):
root@b2f5eb2568a8:/# kafka-console-consumer --bootstrap-server kafka:29092 --topic demo-ksql-avro-2 --from-beginning
BHello Created for KSQL demo(avro)
But:
ksql> print 'demo-ksql-avro-2' from beginning;
java.lang.String cannot be cast to org.apache.avro.generic.GenericRecord
ksql>
4) Will try a fresh validation with the comma in the string <--- Will come back here
This works well both with a comma and without a comma (with the String SerDe):
"Hello, Created for KSQL demo"
"Hello Created for KSQL demo"
ksql> print 'demo-ksql-2' from beginning;
Format:STRING
2/17/19 1:10:22 PM UTC , 1550409022399 , Hello, Created for KSQL demo
2/17/19 1:10:41 PM UTC , 1550409041040 , Hello Created for KSQL demo
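A side note on the `DHello...` and `BHello...` console-consumer output in cases 1) and 2) above: assuming the value was written as a plain Avro-encoded string with no Schema Registry magic-byte framing (consistent with the output starting directly with a printable character), Avro encodes a string as a zigzag-varint length followed by the UTF-8 bytes. `D` (0x44 = 68) zigzag-decodes to 34, the length of `Hello, Created for KSQL demo(avro)`, and `B` (0x42 = 66) decodes to 33, the length without the comma:

```java
public class AvroStringPrefix {
    // Decode a single-byte zigzag varint (sufficient for lengths < 64),
    // as Avro uses for string lengths.
    static int zigzagDecode(int encoded) {
        return (encoded >>> 1) ^ -(encoded & 1);
    }

    public static void main(String[] args) {
        System.out.println(zigzagDecode('D'));                            // 34
        System.out.println("Hello, Created for KSQL demo(avro)".length()); // 34
        System.out.println(zigzagDecode('B'));                            // 33
        System.out.println("Hello Created for KSQL demo(avro)".length());  // 33
    }
}
```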
Sounds like `String` records here are not recognized as Avro when read via the KSQL CLI; with or without a comma doesn't even seem relevant. That's my understanding.
Your explanation below, that it will still `deserialize the value of the message as an avro record`, still concerns me. Please advise (there is a chance I am doing something wrong here):
> The presence of a comma in the string should not matter: the formatter will try to deserialize the value of the message as an avro record, and only choose the avro formatter if the deserialization succeeded.
When you get a chance, could you please shed some light on the below (reported in the ticket above)?
Over the REST call for the same command, see the screenshot below. The REST client stays on Loading... and doesn't return anything.
@apurvam, sorry for the late reply. I have now updated your earlier reply with the details on String and Avro records.
Which version of KSQL are you using?
cp-ksql-server:5.1.0
Do you advise using any higher version than this?
I will keep the http (REST) issue separate from this by raising another ticket, as it seems mixed up with the CLI/Java-client issue here.
Hello, I have hit this issue while querying the KSQL server, with both the String SerDe and the KafkaAvro SerDe. Screenshots were attached for each of the following:
- Via REST api call - String SerDe
- Via REST api call - KafkaAvro SerDe (returns a 200 http response code; better to throw a 400 or 500)
- Via KSQL-cli
- Console consumer output
- KSQL-Server log
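On the 200-response complaint above: one common convention is to map format/deserialization failures to a 400-level status (the client asked to read the topic with the wrong format) and unexpected failures to 500. A hypothetical sketch of that mapping (not KSQL's actual error handling):

```java
public class ErrorStatusMapper {
    // Hypothetical mapping, illustrating the 400/500 suggestion above.
    static int statusFor(Throwable t) {
        // A cast/format failure is a client error against this topic's format.
        if (t instanceof ClassCastException || t instanceof IllegalArgumentException) {
            return 400;
        }
        // Anything unexpected is a server error.
        return 500;
    }

    public static void main(String[] args) {
        System.out.println(statusFor(new ClassCastException(
                "java.lang.String cannot be cast to org.apache.avro.generic.GenericRecord"))); // 400
        System.out.println(statusFor(new RuntimeException("unexpected failure")));             // 500
    }
}
```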
Docker file and tests - to reproduce (in case you want to run locally):
- Docker file is here
- JUnit test is here - String
- JUnit test is here - Avro
Just run these as JUnit tests (right-click and run) after you bring up Docker.
Then observe the KSQL server log and the IDE console log.
:::Note:::
I will try producing JSON records and post the logs here when I get a chance. I will also try with `kafkacat`, as advised by @rmoff.