fabiankluempers opened this issue 9 months ago
Hey @fabiankluempers,
Thank you for opening this ticket. An alternative implementation for producing elements has been written and will be released soon. It is defined on top of Producer, so it should fix this issue, but I will keep this open until I've released it in the next version.
You can check out the API and implementation here: https://github.com/nomisRev/kotlin-kafka/pull/156. Any feedback would be greatly appreciated.
@nomisRev you might replace:
```kotlin
public suspend fun <A, B> KafkaProducer<A, B>.sendAwait(
  record: ProducerRecord<A, B>,
): RecordMetadata =
  suspendCoroutine { cont ->
    send(record) { a, e ->
      if (a != null) cont.resume(a) else cont.resumeWithException(e)
    }
  }
```
With:
```kotlin
public suspend fun <A, B> KafkaProducer<A, B>.sendAwait(
  record: ProducerRecord<A, B>,
): RecordMetadata =
  suspendCancellableCoroutine { cont ->
    send(record) { a, e ->
      // Branch on the exception: the Callback receives a non-null
      // RecordMetadata even on failure (offset and partition are -1).
      if (e != null) cont.resumeWithException(e) else cont.resume(a)
    }
  }
```
It fixes the issue completely (I tried it myself).
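The cancellation benefit of `suspendCancellableCoroutine` can be shown without Kafka at all. This is a hedged, self-contained sketch (the `fakeSend` stand-in for `KafkaProducer.send` is hypothetical, and `kotlinx.coroutines` is assumed on the classpath): a coroutine suspended in plain `suspendCoroutine` cannot be cancelled until the callback fires, whereas a cancellable continuation lets the caller bail out promptly.

```kotlin
import kotlinx.coroutines.*
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException

// Hypothetical stand-in for a Kafka-style callback API whose
// acknowledgement never arrives (e.g. an unreachable broker).
fun fakeSend(callback: (String?, Throwable?) -> Unit) {
    // Intentionally never invokes the callback.
}

suspend fun sendAwaitCancellable(): String =
    suspendCancellableCoroutine { cont ->
        fakeSend { result, error ->
            if (error != null) cont.resumeWithException(error)
            else cont.resume(result!!)
        }
    }

fun main() = runBlocking {
    val job = launch { sendAwaitCancellable() }
    delay(100)
    // Completes promptly because the continuation is cancellable; had we
    // used suspendCoroutine, this join would hang forever.
    job.cancelAndJoin()
    println("cancelled: ${job.isCancelled}")
}
```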
Thank you @lsafer-meemer, but I have already solved this problem by building a much better solution.
Publishing now works like this and doesn't swallow any exceptions. It also allows you to use `offer` instead of `sendAndAwait`, which is much faster and recommended by Kafka.
I'm releasing a new version of Kotlin Kafka in the coming two weeks, with these updates and updated documentation. Sorry for the wait; it's been busy during the end of last year and the beginning of this year.
```kotlin
publisher.publishScope {
  offer((1..10).map {
    ProducerRecord(topic.name(), "$it", "msg-$it")
  })
  publish((11..20).map {
    ProducerRecord(topic.name(), "$it", "msg-$it")
  })
  transaction {
    // transaction { } is illegal to call here, thanks to DslMarker magic
    offer((21..30).map {
      ProducerRecord(topic.name(), "$it", "msg-$it")
    })
    publish((31..40).map {
      ProducerRecord(topic.name(), "$it", "msg-$it")
    })
  } // Waits until every offer in the transaction has finished; fails if any failed

  // looping
  (0..100).forEach {
    delay(100.milliseconds)
    val record = ProducerRecord(topic.name(), "$it", "msg-$it")
    offer(record)
  }

  // streaming
  (1..100).asFlow()
    .onEach { delay(100.milliseconds) }
    .map { ProducerRecord(topic.name(), "$it", "msg-$it") }
    .collect { offer(it) }
}
```
@nomisRev it actually looks better and well organized. Can't wait for the new release 🔥
I know this is not the right place to ask, but please care about the ease of loading settings and configurations. Dumber people like me might choose to stick with the Java APIs just because the better APIs need configurations they don't understand, or configurations that are too implicit.
I'm not 100% sure what you mean @lsafer-meemer 🤔 Do you have any examples? My goal for the library is to stay as close as possible to the Java SDK, but offer better-typed solutions and higher-level APIs. It should be super simple to use and not look completely foreign. So in the above DSL, `ProducerRecord` is just the Java SDK type.
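For readers wondering what "close to the Java SDK" means for configuration: the plain Java client is configured through a `Properties` map. A minimal sketch in Kotlin, using the standard `org.apache.kafka.clients.producer` API (the broker address is a placeholder, and this is not necessarily how kotlin-kafka itself loads settings):

```kotlin
import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.common.serialization.StringSerializer

// Plain Java-SDK-style configuration: every setting is a key in a Properties map.
val props = Properties().apply {
  put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
  put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
  put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer::class.java.name)
}

// The same Properties object configures the producer regardless of which
// higher-level Kotlin API wraps it.
val producer = KafkaProducer<String, String>(props)
```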
Hi, there is a bug in `KafkaProducer#sendAwait`. It checks whether the returned `RecordMetadata` is null and resumes the continuation with the exception if that is the case. But a non-null `RecordMetadata` is passed to the `Callback` in every case; it just has no offset (-1) and no partition (-1). This means all exceptions are swallowed. An easy fix is to check whether the exception passed to the `Callback` is non-null, and to resume the continuation with that exception if that is the case.
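A sketch of that easy fix, assuming the standard `org.apache.kafka.clients.producer` types (this is the reporter's suggested correction, not necessarily the implementation the maintainer shipped):

```kotlin
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.clients.producer.RecordMetadata
import kotlin.coroutines.resume
import kotlin.coroutines.resumeWithException
import kotlin.coroutines.suspendCoroutine

public suspend fun <A, B> KafkaProducer<A, B>.sendAwait(
  record: ProducerRecord<A, B>,
): RecordMetadata =
  suspendCoroutine { cont ->
    send(record) { metadata, exception ->
      // The Callback contract delivers a non-null RecordMetadata even on
      // failure (offset and partition set to -1), so the exception, not the
      // metadata, must decide how the continuation resumes.
      if (exception != null) cont.resumeWithException(exception)
      else cont.resume(metadata)
    }
  }
```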