Closed BibhutiBhusanDora closed 3 months ago
Yes. Re-initialising the producer again and again can be very expensive and is not recommended. You will find reusing the same producer far more efficient, and the number of messages you can produce per second will increase dramatically.
Thanks for the response; I just wanted official confirmation from the Kafka team. Also, if the Confluent team could come up with an extension method for creating the Producer and Consumer client objects, that would be great.
yes, creating a singleton service like that is a good pattern. you definitely should not create a producer each time you want to produce a message - it is approximately 500,000 times less efficient.
So this means I need to call the Flush() and Dispose() methods only once, when my service is being terminated or stopped. That said, making it a singleton will keep the connection with the broker open all the time, which may be an issue for the broker administrator of my application.
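As a hedged sketch of the singleton pattern discussed above (the extension-method name is illustrative, not an official Confluent API): register one producer for the application's lifetime and let the DI container dispose it exactly once at shutdown.

```csharp
using Confluent.Kafka;
using Microsoft.Extensions.DependencyInjection;

public static class KafkaProducerServiceExtension
{
    // Registers a single IProducer<string, string> for the whole application.
    // IProducer<,> is thread-safe, so one instance can be shared by all callers.
    public static IServiceCollection AddKafkaProducer(
        this IServiceCollection services, ProducerConfig config)
    {
        // The container disposes the singleton exactly once, when the host
        // shuts down; call Flush() before that point if you need to wait
        // for in-flight messages to be delivered.
        return services.AddSingleton<IProducer<string, string>>(_ =>
            new ProducerBuilder<string, string>(config).Build());
    }
}
```

A service would then take `IProducer<string, string>` as a constructor dependency and call Produce/ProduceAsync directly, without ever creating or disposing the producer itself.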
i have the same question. i create a producer each time i produce a message now, but it is inefficient.
https://github.com/confluentinc/confluent-kafka-dotnet/issues/159
the broker kills open connections that aren't being used (by default after 10 minutes), and they will be reopened again as required.
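The 10-minute default mentioned above corresponds to the broker-side `connections.max.idle.ms` setting (a config fragment, not client code):

```properties
# Broker setting (server.properties): idle connections are closed after
# connections.max.idle.ms of inactivity; the default is 600000 ms (10 minutes).
connections.max.idle.ms=600000
```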
Hi @mhowlett , I have a few questions about using the producer as a singleton.
I understand that it's better performance-wise, but what happens when there is an exception while producing a message? Is there a chance that the producer ends up in a bad state and must be disposed so a new one can be created?
in some very rare circumstances: https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md#fatal-producer-errors . you shouldn't write your application expecting that to be common.
Is there a way to detect those fatal errors? maybe wrapped in a specific type of exception?
We plan to use this package inside an AWS Lambda, and since we are not in control of the lifetime of the instance, we just want to let the instance live in case of a recoverable error and kill it when the producer is in a bad state.
Do you think that's a viable solution? Or is it better to just wrap producer.Produce in a try/catch and kill the lambda in case of an exception?
Thanks for your time.
the IsFatal flag on the Error property of the exception will be true in the case of a fatal error. the handler set via SetErrorHandler will also be called if there is a fatal error.

@mhowlett - are there any recommendations for lifetimes on the consumer side of things? is creating a consumer similarly expensive to creating a producer?
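The IsFatal check described above can be sketched roughly as follows (a hedged example; the class and topic names are illustrative, and exact retry handling depends on the application):

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;

public static class FatalAwareProducing
{
    public static async Task ProduceOnceAsync(IProducer<Null, string> producer)
    {
        try
        {
            await producer.ProduceAsync("my-topic",
                new Message<Null, string> { Value = "hello" });
        }
        catch (ProduceException<Null, string> e) when (e.Error.IsFatal)
        {
            // Fatal: this producer instance can no longer be used.
            // Dispose it and create a new one (or, in a Lambda, let the
            // instance die so a fresh one is spun up).
            producer.Dispose();
            throw;
        }
        catch (ProduceException<Null, string> e)
        {
            // Non-fatal (e.g. a transient delivery failure): the producer
            // remains usable; log and retry as appropriate.
            Console.WriteLine($"Delivery failed: {e.Error.Reason}");
        }
    }
}
```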
yes, it's similarly expensive. depending on your use-case, you could consider building a consumer pool.
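The consumer-pool idea mentioned above could be sketched like this (an illustrative pattern, not an official API: consumers are built once up front and rented/returned, so the expensive construction is amortized):

```csharp
using System;
using System.Collections.Concurrent;
using Confluent.Kafka;

public sealed class ConsumerPool : IDisposable
{
    private readonly ConcurrentBag<IConsumer<Ignore, string>> _pool = new();

    public ConsumerPool(ConsumerConfig config, string topic, int size)
    {
        // Create all consumers once; each is expensive to construct,
        // so this cost is paid only at startup.
        for (var i = 0; i < size; i++)
        {
            var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
            consumer.Subscribe(topic);
            _pool.Add(consumer);
        }
    }

    public IConsumer<Ignore, string> Rent() =>
        _pool.TryTake(out var c)
            ? c
            : throw new InvalidOperationException("pool exhausted");

    public void Return(IConsumer<Ignore, string> consumer) => _pool.Add(consumer);

    public void Dispose()
    {
        // Close() leaves the consumer group cleanly before disposing.
        while (_pool.TryTake(out var c)) { c.Close(); c.Dispose(); }
    }
}
```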
@mhowlett - Could you please update the web example to include examples of using the JSON serializer as well? The existing serializer samples dispose the client after each request, which is not efficient.
@mhowlett hello! Could you please give some advice on the usage? If Producer is recommended as Singleton, does it mean that it is also thread-safe? Should we create a Singleton Producer per application or a Singleton Producer per topic?
Oh, I've already found the answer to this question here: https://github.com/confluentinc/confluent-kafka-dotnet/issues/1041
Sorry to bother you. Thanks for addressing this other issue though! :)
Closing, as all the queries are answered.
Description
Would it be recommended to use the same instance of the producer, instead of creating and disposing it again and again, in a producer client that is supposed to be running 24/7 and processing messages most of the time?
How to reproduce
NA
Checklist
Please provide the following information:
```csharp
using Confluent.Kafka;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using System;
using System.IO;

namespace KafkaExtensions
{
    class Program
    {
        public static IServiceProvider serviceProvider;
        // ...
    }
}

namespace Microsoft.Extensions.DependencyInjection
{
    public static class KafkaProducerServiceExtension
    {
        // ...
    }
}
```