ChangguHan opened this issue 1 year ago
@ChangguHan This is a reasonable request, but the workaround is to set an admin instance yourself...
@garyrussell Thank you for your comment.
I understand your point, but it needs a small supplement about SSL if you intend to create a new instance when the bootstrap servers from the original bean and the producerFactory are different. https://github.com/spring-projects/spring-kafka/blob/701ed82e6493f813a8e30aa0d29cd116a7fe5c73/spring-kafka/src/main/java/org/springframework/kafka/core/KafkaTemplate.java#L485-L491
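For reference, the workaround mentioned above (setting an admin instance yourself) might look like the following configuration fragment. This is only a sketch assuming spring-kafka 3.x; the bootstrap server and truststore values are placeholders:

```java
// Sketch of the workaround: build a KafkaAdmin that carries the same
// security settings as the producer, and set it on the KafkaTemplate
// explicitly so no admin is auto-created without SSL.
// All property values below are placeholders.
Map<String, Object> adminProps = new HashMap<>();
adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9093");
adminProps.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
adminProps.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks");

KafkaAdmin admin = new KafkaAdmin(adminProps);
kafkaTemplate.setKafkaAdmin(admin);
```

With the admin set explicitly, the template no longer depends on a `KafkaAdmin` bean being discovered from the application context.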
Hi @garyrussell, may I pick this up?
I have some compatibility questions; please give me a hint.
Because `org.apache.kafka.common.config.SslConfigs` changes between versions (some properties are added and some are removed), it is not clear which set of properties to copy.
Plan A
Refer KafkaProperties
https://github.com/spring-projects/spring-boot/blob/8f2ec227389391fdd173db0ab64f26abd2752f20/spring-boot-project/spring-boot-autoconfigure/src/main/java/org/springframework/boot/autoconfigure/kafka/KafkaProperties.java#L251-L253
Plan B
// generating a clientId is different from the common admin client
this.producerFactory.getConfigurationProperties().get(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG);
this.producerFactory.getConfigurationProperties().forEach((key, value) -> {
if (key.startsWith("ssl.")) {
props.put(key, value);
}
});
Plan B looks more flexible, but it feels a little weird.
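To make Plan B concrete, here is a standalone sketch of the prefix-based copying. The helper class `AdminSecurityProps` is hypothetical (not part of spring-kafka), and plain string keys are used instead of the `SslConfigs` / `CommonClientConfigs` constants so the example has no Kafka dependency:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper illustrating Plan B: copy only the security-related
// entries of the producer configuration into the admin configuration.
public class AdminSecurityProps {

    static Map<String, Object> copySecurityProps(Map<String, Object> producerProps) {
        Map<String, Object> adminProps = new HashMap<>();
        producerProps.forEach((key, value) -> {
            // the "ssl." and "sasl." prefixes survive SslConfigs version churn,
            // which is why the prefix match is more flexible than a fixed list
            if (key.startsWith("ssl.") || key.startsWith("sasl.")
                    || key.equals("security.protocol")) {
                adminProps.put(key, value);
            }
        });
        return adminProps;
    }

    public static void main(String[] args) {
        Map<String, Object> producerProps = new HashMap<>();
        producerProps.put("bootstrap.servers", "broker:9093");
        producerProps.put("security.protocol", "SSL");
        producerProps.put("ssl.truststore.location", "/tmp/truststore.jks");
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // only security.protocol and ssl.* survive the copy
        System.out.println(copySecurityProps(producerProps).keySet());
    }
}
```

The prefix match is what makes Plan B resilient to properties being added or removed in `SslConfigs` across versions.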
I think I have an issue kinda related to this request. When I want to publish events in parallel and Kafka Admin is not initialized yet, I see several Kafka admin creations/initializations in the logs, and my publishing just stops. As a result, no events are published, and the producing process gets stuck.
I mean something like this:
```kotlin
val executor = Executors.newVirtualThreadPerTaskExecutor()
val n = 1000
for (i in 1..n) {
    CompletableFuture.runAsync(
        { template.send(inTopic, i.toString(), "value") },
        executor)
}
```
Creating the Kafka Admin beforehand and setting it on the template solves the issue, but I'm wondering whether this is the correct behavior.
@PPrydorozhnyi, that is not possible if your template is a singleton bean in the application context.
The logic there is like this:
public void afterSingletonsInstantiated() {
    if (this.observationEnabled && this.applicationContext != null) {
        this.observationRegistry = this.applicationContext.getBeanProvider(ObservationRegistry.class)
                .getIfUnique(() -> this.observationRegistry);
        if (this.kafkaAdmin == null) {
            this.kafkaAdmin = this.applicationContext.getBeanProvider(KafkaAdmin.class).getIfUnique();
        }
    }
}
And this afterSingletonsInstantiated() is called only once, when the application context is ready.
Hi @artembilan. Thanks for the quick response.
Unfortunately, I'm able to reproduce it without any custom bean scopes or manual bean creation.
I created a small project so you can try it yourself - Reproduce example
Could you please check? Thanks in advance.
o.a.k.clients.admin.AdminClientConfig : AdminClientConfig values:
But that's correct, because AdminClient is not a KafkaAdmin. We are talking about different objects.
Yes, KafkaAdmin uses AdminClient, and does that this way:
try (AdminClient client = createAdmin()) {
this.clusterId = client.describeCluster().clusterId().get(this.operationTimeout, TimeUnit.SECONDS);
if (this.clusterId == null) {
this.clusterId = "null";
}
}
So, it is not a surprise to see several instances of AdminClient created on the fly. Not sure, though, why there have to be many of them, since the KafkaTemplate logic is like this:
private String clusterId() {
if (this.kafkaAdmin != null && this.clusterId == null) {
this.clusterId = this.kafkaAdmin.clusterId();
}
return this.clusterId;
}
The clusterId is resolved only once.
I'll run your application after lunch.
@PPrydorozhnyi, can you update your sample project, please, with the build tool? It is not clear what dependencies you use there. According to the README it is supposed to be Gradle, so just add those artifacts to the repo.
@artembilan
Yeah, sorry. Added.
@PPrydorozhnyi, I see what is going on. That KafkaTemplate.clusterId() is really the culprit. When we call send() with observation concurrently, all those threads meet the this.clusterId == null condition, and therefore all of them call this.kafkaAdmin.clusterId() 😄
Probably not related to this issue, but still looks like a bug 🤷
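If it is indeed a bug, one possible shape of a fix (a sketch only, not the actual spring-kafka patch) is to make the lazy resolution idempotent, for example with double-checked locking. The class and field names below are hypothetical:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a possible fix: guard the lazy clusterId resolution so that
// concurrent callers trigger exactly one lookup.
public class SafeClusterIdHolder {

    private final Object monitor = new Object();
    private volatile String clusterId;
    final AtomicInteger lookups = new AtomicInteger(); // instrumentation for the demo

    // Stand-in for this.kafkaAdmin.clusterId()
    private String fetchFromAdmin() {
        lookups.incrementAndGet();
        return "demo-cluster";
    }

    String clusterId() {
        String id = this.clusterId;
        if (id == null) {
            synchronized (this.monitor) {   // double-checked locking
                id = this.clusterId;
                if (id == null) {
                    id = fetchFromAdmin();
                    this.clusterId = id;
                }
            }
        }
        return id;
    }

    public static void main(String[] args) throws Exception {
        SafeClusterIdHolder holder = new SafeClusterIdHolder();
        Thread[] threads = new Thread[16];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(holder::clusterId);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("lookups: " + holder.lookups.get()); // prints 1
    }
}
```

The volatile field plus the second check inside the synchronized block guarantees exactly one call to the admin, no matter how many threads race on the first null check.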
@artembilan nice, thanks a lot!
Expected Behavior
When using multiple kafkaTemplates with observations, the kafkaAdmin created automatically can connect to Kafka with SSL.
Current Behavior
When using multiple kafkaTemplates with observations, the properties for sasl, security, and ssl are not applied to the kafkaAdmin. This can be a problem when I set bootstrap.servers with the port for SSL.
Context
I would like to suggest two things.
The sample code would be like this.