confluentinc / confluent-kafka-dotnet

Confluent's Apache Kafka .NET client
https://github.com/confluentinc/confluent-kafka-dotnet/wiki
Apache License 2.0
53 stars 857 forks

Thoughts on using Producer instance as a singleton. #1346

Closed BibhutiBhusanDora closed 3 months ago

BibhutiBhusanDora commented 4 years ago

Description

Would it be recommended to use the same producer instance, rather than creating and disposing one again and again, in a producer client that is supposed to run 24/7 and process messages most of the time?

How to reproduce

NA

Checklist

Please provide the following information:

```csharp
using Confluent.Kafka;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using System;
using System.IO;

namespace KafkaExtensions
{
    class Program
    {
        public static IServiceProvider serviceProvider;

        static void Main(string[] args)
        {
            try
            {
                RegisterServices();

                var producer = serviceProvider.GetRequiredService<IProducer<Null, string>>();
                var message = new Message<Null, string> { Value = "Testing_Singleton" };
                var topicName = serviceProvider.GetRequiredService<IConfiguration>()["TopicName"];
                producer.Produce(topicName, message, deliveryReport =>
                {
                    if (deliveryReport.Error.IsError)
                        Console.WriteLine(deliveryReport.Error.Reason);
                    else
                        Console.WriteLine(deliveryReport.TopicPartitionOffset);
                });
                Console.ReadLine();
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.Message);
            }
        }

        static void RegisterServices()
        {
            var services = new ServiceCollection();
            services.AddProducer<Null, string>();
            serviceProvider = services.BuildServiceProvider();
        }

        private static void DeliveryReportHandler(DeliveryReport<Null, string> deliveryReport)
        {
            if (deliveryReport.Error.IsError)
                Console.WriteLine(deliveryReport.Error.Reason);
            else
                Console.WriteLine(deliveryReport.TopicPartitionOffset);
        }
    }
}

namespace Microsoft.Extensions.DependencyInjection
{
    public static class KafkaProducerServiceExtension
    {
        public static void AddProducer<TKey, TValue>(this IServiceCollection services)
        {
            var config = new ConfigurationBuilder()
                .SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appSettings.json", optional: false)
                .Build();
            services.AddSingleton<IConfiguration>(config);

            var producerConfig = new ProducerConfig();
            config.GetSection("ProducerConfigurations").Bind(producerConfig);
            producerConfig.ClientId = Environment.MachineName;

            var producerBuilder = new ProducerBuilder<TKey, TValue>(producerConfig);
            services.AddSingleton(producerBuilder.Build());
        }
    }
}
```

bristowsutor commented 4 years ago

Yes. Re-initialising the producer again and again can be very expensive and is not recommended. You will find reusing the same producer far more efficient, and the number of messages you can produce per second will increase dramatically.

BibhutiBhusanDora commented 4 years ago

> Yes. Re-initialising the producer again and again can be very expensive and is not recommended. You will find reusing the same producer far more efficient, and the number of messages you can produce per second will increase dramatically.

Thanks for the response; I just wanted official confirmation from the Kafka team. Also, if the Confluent team could come up with an extension method for creating Producer and Consumer client objects, that would be great.

mhowlett commented 4 years ago

yes, creating a singleton service like that is a good pattern. you definitely should not create a producer each time you want to produce a message - it is approximately 500,000 times less efficient.

BibhutiBhusanDora commented 4 years ago

> yes, creating a singleton service like that is a good pattern. you definitely should not create a producer each time you want to produce a message - it is approximately 500,000 times less efficient.

So this means I need to call the Flush() and Dispose() methods only once, when my service is terminated or stopped. Making it a singleton will keep the connection to the broker open all the time, though, which may be an issue for the broker administrator of my application.
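To illustrate the single Flush()/Dispose() at shutdown, here is a minimal sketch (the broker address and topic name are placeholder assumptions, not from this thread):

```csharp
using System;
using Confluent.Kafka;

class ProducerLifetimeSketch
{
    static void Main()
    {
        // Placeholder configuration; point this at your own cluster.
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        using (var producer = new ProducerBuilder<Null, string>(config).Build())
        {
            // The same producer instance is reused for the whole
            // lifetime of the service, across many Produce() calls.
            producer.Produce("my-topic", new Message<Null, string> { Value = "hello" });

            // At shutdown only: wait (up to 10s here) for outstanding
            // delivery reports before the handle is released.
            producer.Flush(TimeSpan.FromSeconds(10));
        } // Dispose() runs once here, releasing the underlying librdkafka client.
    }
}
```

In a hosted service the same pattern maps naturally onto a singleton registration plus a Flush() call in the application's shutdown hook.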

moxixuan commented 3 years ago

I have the same question. I create a producer each time I produce a message now, but it is inefficient.

https://github.com/confluentinc/confluent-kafka-dotnet/issues/159

mhowlett commented 3 years ago

the broker kills open connections that aren't being used (by default after 10 minutes), and they will be reopened again as required.

On Mon, Sep 28, 2020 at 1:03 AM moxixuan notifications@github.com wrote:

> I have the same question. I create a producer each time I produce a message now, but it is inefficient.


JuanZamudioGBM commented 3 years ago

Hi @mhowlett , I have a few questions about using the producer as a singleton.

I understand that it's better performance-wise, but what happens when there is an exception while producing a message? Is there a chance the producer ends up in a bad state and must be disposed so a new one can be created?

mhowlett commented 3 years ago

in some very rare circumstances: https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md#fatal-producer-errors . you shouldn't write your application expecting that will be common.
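As a sketch of telling those rare fatal errors apart from recoverable ones (broker address and topic are placeholders; the `Error.IsFatal` flag surfaces librdkafka's fatal error state through `ProduceException`):

```csharp
using System;
using Confluent.Kafka;

class FatalErrorSketch
{
    static void Main()
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
        var producer = new ProducerBuilder<Null, string>(config).Build();

        try
        {
            producer.Produce("my-topic", new Message<Null, string> { Value = "hi" });
        }
        catch (ProduceException<Null, string> ex) when (ex.Error.IsFatal)
        {
            // Fatal: this producer instance is permanently unusable.
            // Dispose it and build a replacement.
            producer.Dispose();
            producer = new ProducerBuilder<Null, string>(config).Build();
        }
        catch (ProduceException<Null, string> ex)
        {
            // Recoverable (e.g. local queue full): log and keep the instance.
            Console.WriteLine($"produce failed: {ex.Error.Reason}");
        }
    }
}
```

Note that errors on already-enqueued messages arrive via the delivery report rather than an exception, so a long-lived service would also check `deliveryReport.Error.IsFatal` in its delivery handler.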

JuanZamudioGBM commented 3 years ago

Is there a way to detect those fatal errors? Maybe wrapped in a specific type of exception?

We plan to use this package inside an AWS Lambda, and since we are not in control of the lifetime of the instance, we just want to let the instance live in case of a recoverable error and kill it when the producer is in a bad state.

Do you think that's a viable solution? Or is it better to just wrap producer.Produce with a try/catch and kill the lambda in case of an exception?

Thanks for your time.

mhowlett commented 3 years ago
KashMoneyMillionaire commented 3 years ago

@mhowlett - are there any recommendations for lifetimes on the consumer side of things? is it similar that creating a consumer is expensive like a producer?

mhowlett commented 3 years ago

yes, it's similarly expensive. depending on your use-case, you could consider building a consumer pool.
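The consumer-pool idea mentioned above could be sketched roughly like this (the pool size and rent/return API are illustrative assumptions, not a Confluent-provided abstraction):

```csharp
using System;
using System.Collections.Concurrent;
using Confluent.Kafka;

// A minimal consumer pool: build the expensive consumer instances once,
// then hand them out and take them back across requests.
class ConsumerPool : IDisposable
{
    private readonly ConcurrentBag<IConsumer<Ignore, string>> _pool = new();

    public ConsumerPool(ConsumerConfig config, int size)
    {
        for (int i = 0; i < size; i++)
            _pool.Add(new ConsumerBuilder<Ignore, string>(config).Build());
    }

    // Take a consumer from the pool; the caller must Return() it when done.
    public IConsumer<Ignore, string> Rent() =>
        _pool.TryTake(out var consumer)
            ? consumer
            : throw new InvalidOperationException("pool exhausted");

    public void Return(IConsumer<Ignore, string> consumer) => _pool.Add(consumer);

    public void Dispose()
    {
        while (_pool.TryTake(out var consumer))
        {
            consumer.Close();   // leave the consumer group cleanly
            consumer.Dispose();
        }
    }
}
```

Whether pooling pays off depends on the use-case; for a single long-running consume loop, one singleton consumer per group is usually enough.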

ksantoso commented 3 years ago

@mhowlett - Could you please update the web example to include examples of using the JSON serializer as well? The existing serializer samples dispose the client after each request, which is not efficient.

mekk1t commented 2 years ago

@mhowlett hello! Could you please give some advice on usage? If the Producer is recommended as a singleton, does that mean it is also thread-safe? Should we create one singleton Producer per application, or one singleton Producer per topic?

mekk1t commented 2 years ago

Oh, I've already found the answer to this question here: https://github.com/confluentinc/confluent-kafka-dotnet/issues/1041

Sorry to bother you. Thanks for addressing this other issue though! :)

anchitj commented 3 months ago

Closing, as all the queries are answered.