ZijunH closed this issue 2 months ago.
Hey, it’s something I was considering doing a few months ago. My only concern with this is that someone might try to poll() and log from a single thread, which will be really inefficient and will also cause a deadlock if they call logger->flush_log().
This can be useful only when the user has a second thread that is doing some other work and wants to run the logger inside it.
Maybe add poll() as an advanced option and discourage new users from using it without fully understanding the consequences.
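To make the deadlock concern concrete, here is a minimal sketch of the single-threaded misuse described above; it assumes the ManualBackendWorker API shown in the examples later in this thread (acquire_manual_backend_worker(), init(), poll()) and the flush_log() call mentioned below. It is illustrative only, not code from the PR, and it is expected to hang on the flush_log() call.

#include "quill/Backend.h"
#include "quill/Frontend.h"
#include "quill/LogMacros.h"
#include "quill/Logger.h"
#include "quill/sinks/ConsoleSink.h"

#include <utility>

int main()
{
  // The same thread drives the backend manually ...
  quill::ManualBackendWorker* manual_backend_worker = quill::Backend::acquire_manual_backend_worker();
  quill::BackendOptions backend_options;
  manual_backend_worker->init(backend_options);

  // ... and also logs through the frontend.
  auto console_sink = quill::Frontend::create_or_get_sink<quill::ConsoleSink>("sink_id_1");
  quill::Logger* logger = quill::Frontend::create_or_get_logger("root", std::move(console_sink));

  LOG_INFO(logger, "This is fine: the message is picked up by the next poll()");
  manual_backend_worker->poll();

  // Deadlock: flush_log() blocks until the backend has processed every queued
  // message, but the backend only makes progress when this same thread calls
  // poll() - which it cannot do while it is blocked here.
  logger->flush_log();
}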
As this library prides itself on being performance-oriented, I believe it is very common for its users to be running multi-threaded programs.
Have a look at the PR and this example, and let me know if you are happy for me to merge it.
#include "quill/Backend.h"
#include "quill/Frontend.h"
#include "quill/LogMacros.h"
#include "quill/Logger.h"
#include "quill/sinks/ConsoleSink.h"
#include "quill/TriviallyCopyableCodec.h"
#include <iostream>
#include <string>
#include <utility>
int main()
{
std::thread backend_worker([]()
{
quill::ManualBackendWorker* manual_backend_worker = quill::Backend::get_manual_backend_worker();
quill::BackendOptions backend_options;
manual_backend_worker->init(backend_options);
while (true)
{
manual_backend_worker->poll();
}
});
// Frontend
auto console_sink = quill::Frontend::create_or_get_sink<quill::ConsoleSink>("sink_id_1");
quill::Logger* logger = quill::Frontend::create_or_get_logger("root", std::move(console_sink));
// Change the LogLevel to print everything
logger->set_log_level(quill::LogLevel::TraceL3);
// A log message with number 123
int a = 123;
std::string l = "log";
LOG_INFO(logger, "A {} message with number {}", l, a);
// libfmt formatting language is supported 3.14e+00
double pi = 3.141592653589793;
LOG_INFO(logger, "libfmt formatting language is supported {:.2e}", pi);
backend_worker.join();
}
Hey, thanks for the prompt response. The example and the PR are exactly what I had in mind.
Just a small question. In the important notes you state that "Logging from within the thread running ManualBackendWorker is highly inefficient and should be avoided whenever possible". Does the inefficiency stem from the SPSC queues not being cleared quickly enough, so they could bloat in size? Is the logging latency on the frontend affected?
You can log as long as you don’t call logger->flush_log(); the same thread will be doing the formatting and the I/O, with additional overhead.
https://quillcpp.readthedocs.io/en/latest/overview.html#design
(The design page lists the steps performed by the Frontend Thread and the Backend Thread.)
Now, if you have a single-threaded application, all of those steps will run on the same thread. You don’t need all of those queues and all of that processing; you could just fully format the message and write it to the file directly.
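As a rough illustration of that last point (not quill code; the file name and log format are placeholders chosen for this sketch), a fully single-threaded design reduces to formatting and writing inline on the calling thread, with no queues in between:

#include <cstdio>
#include <string>

// Sketch of a fully single-threaded logger: no queues, no backend -
// the caller formats and writes immediately and pays the I/O cost inline.
void log_direct(std::FILE* file, std::string const& message)
{
  std::fprintf(file, "LOG: %s\n", message.c_str());
}

int main()
{
  std::FILE* file = std::fopen("app.log", "a");
  if (file == nullptr) { return 1; }
  log_direct(file, "fully format -> write to file, all on one thread");
  std::fclose(file);
}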
Ok, sounds good. Thanks for the PR.
#include "quill/Backend.h"
#include "quill/Frontend.h"
#include "quill/LogMacros.h"
#include "quill/Logger.h"
#include "quill/sinks/ConsoleSink.h"
#include "quill/TriviallyCopyableCodec.h"
#include <iostream>
#include <string>
#include <utility>
int main()
{
// Frontend
auto console_sink = quill::Frontend::create_or_get_sink<quill::ConsoleSink>("sink_id_1");
quill::Logger* logger = quill::Frontend::create_or_get_logger("root", std::move(console_sink));
// Change the LogLevel to print everything
logger->set_log_level(quill::LogLevel::TraceL3);
std::thread backend_worker(
[]()
{
quill::ManualBackendWorker* manual_backend_worker = quill::Backend::acquire_manual_backend_worker();
quill::BackendOptions backend_options;
manual_backend_worker->init(backend_options);
auto logger = quill::Frontend::get_logger("root");
LOG_INFO(logger, "Hello from backend");
while (true)
{
LOG_INFO_LIMIT(std::chrono::seconds {1}, logger, "poll()");
manual_backend_worker->poll();
}
});
// A log message with number 123
int a = 123;
std::string l = "log";
LOG_INFO(logger, "A {} message with number {}", l, a);
// libfmt formatting language is supported 3.14e+00
double pi = 3.141592653589793;
LOG_INFO(logger, "libfmt formatting language is supported {:.2e}", pi);
backend_worker.join();
}
Motivation
Currently, the only way to start the BackendWorker is to launch a thread. On a system with a small number of cores, this means either dedicating a single core to the backend thread by setting the CPU affinities, or sharing a core with some other thread. The former essentially leaves a whole core doing very little work, and the latter causes context switching, which is prohibitively expensive for latency-sensitive applications.
Proposal
The function quill::BackendWorker::run should be broken up into the following functions and exposed to the user:
poll(): dispatches messages to the sinks and returns when there are no more messages
poll_once(): dispatches a single message
This way, the motivating example can be written as follows, without the need to create an additional thread. It should also give the user more flexibility in controlling the backend's behaviour.
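For illustration, here is a minimal sketch of what that single-threaded usage might look like, reusing the ManualBackendWorker API from the examples above; the loop and the choice to poll once per iteration are assumptions made for this sketch, not part of the proposal.

#include "quill/Backend.h"
#include "quill/Frontend.h"
#include "quill/LogMacros.h"
#include "quill/Logger.h"
#include "quill/sinks/ConsoleSink.h"

#include <utility>

int main()
{
  // Drive the backend manually from the main thread - no dedicated backend thread
  quill::ManualBackendWorker* manual_backend_worker = quill::Backend::acquire_manual_backend_worker();
  quill::BackendOptions backend_options;
  manual_backend_worker->init(backend_options);

  auto console_sink = quill::Frontend::create_or_get_sink<quill::ConsoleSink>("sink_id_1");
  quill::Logger* logger = quill::Frontend::create_or_get_logger("root", std::move(console_sink));

  for (int i = 0; i < 10; ++i)
  {
    // Latency-sensitive work would go here; logging only enqueues the message
    LOG_INFO(logger, "iteration {}", i);

    // Drain the queues at a point the application chooses, on this same thread
    manual_backend_worker->poll();
  }
}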