Closed vp00 closed 5 months ago
Backtrace messages are still pushed to the SPSC queue and then popped and stored by the backend logging thread. With a capacity of one, only the most recent message is retained. Those messages are only written out when the flush condition is met.
Are you calling LOG_BACKTRACE in a loop?
I'm calling it in the game's frame generation loop, so yeah, it's running in a loop. Most of the messages are pushed during the loading screen. Maybe the logging thread lags behind at the loading screen due to the high volume?
Just hopped in game and went through a couple of loading screens. If it truly were a memory leak, I think it'd keep growing with each one. It doesn't; the queue maxes out at the 2 MB allocated during the first one.
I guess your explanation solves this issue. Can we set the initial allocation size so we don't have to reallocate?
I guess during loading some other thread runs on the same CPU as the backend logging thread, and that slows it down.
On my Intel i5-12600 on Linux, when I run the backend logging thread on an isolated CPU, this is the maximum throughput it achieves:
Throughput is 3.99 million msgs/sec average, total time elapsed: 1001 ms for 4000000 log messages
By default an UnboundedBlocking queue is used that caps at 2 GB (2,147,483,648 bytes). It starts small, reallocates up to 2 GB, and then blocks. The queue never shrinks after a reallocation.
You can change the initial size (131'072) to avoid reallocation like this; the value must be a power of 2:
quill::Config cfg;
cfg.default_queue_capacity = 2'097'152;
// Apply configuration before starting the backend worker thread
quill::configure(cfg);
quill::start();
It is possible to use a queue with no maximum limit that keeps reallocating new 2 GB queues instead of blocking.
It is also possible to use a bounded blocking queue with a fixed size that blocks and won't reallocate, or a bounded dropping queue that will drop the log messages.
See after this line for the additional options: https://github.com/odygrd/quill/blob/master/quill/include/quill/TweakMe.h#L126
And here are some examples of how to use them:
https://github.com/odygrd/quill/blob/master/examples/example_bounded_queue_blocking.cpp
https://github.com/odygrd/quill/blob/master/examples/example_bounded_queue_message_dropping.cpp
Let me know if you need anything further or if there are any issues.
I recommend using the _LIMIT macros for logging in a loop; those macros won't push every message to the queue but will drop the excess instead, allowing you to maintain a smaller queue.
You don't have to use milliseconds; you can use any other duration type from std::chrono, e.g. seconds.
For example:
for (uint64_t i = 0; i < 10; ++i)
{
LOG_INFO_LIMIT(std::chrono::milliseconds{100}, logger,
"log in a loop with limit 1 message every 100 ms for i {}", i);
}
I noticed there is no LOG_BACKTRACE_LIMIT, but I can add one.
For my use case it's important the last entry truly is the last entry whenever I flush my backtrace, so dropping messages like that isn't an option. I'll make sure to change the initial queue capacity though.
Thanks for your help!
I have a backtrace with capacity for a single message. Even so, I get messages that my unbounded queue has needed to grow every so often.
Running a memory search, there appear to be thousands of instances of my message:
From where I stand, this doesn't make sense. Shouldn't the old messages be discarded when we reach capacity? Unless the new message doesn't fit, of course.
Initializing the logger:
Printing messages to the backtrace:
I've tried changing the arguments passed to init_backtrace() and printing messages without formatting. Issue persists. Is this intended behavior? If not, what may be the cause of this?