Open julianbrost opened 1 month ago
| Size in bytes (pointers) | M3 Mac | NixOS x64 laptop |
|---|---|---|
| `sizeof(std::mutex)` | 64 (8) | 40 (5) |
| `alignof(std::mutex)` | 8 (1) | 8 (1) |
`.ti` attributes of a `Service`: about 90. Excluding numbers and bools: about 30. (`icinga2 console` > `Service()`, then `grep -v | wc -l`.)

So every `Service` consumes an extra ~1.2 KB (30 × 40 bytes) with `Locked<>`.
That gives no estimate of the big picture, i.e. how this affects the overall memory usage. More object types than just `Host` and `Service` are affected by this.
- Figure out how much of an effect this has on the total memory use of Icinga 2.
I've been testing this for the entire week now and couldn't find a way to determine exactly the differences with and without this mutex. Attaching GDB to the running icinga2 process and calling `malloc_stats()` was promising, but then we found out that the output is next to useless, as it only shows the virtual memory usage and not the actually physically allocated memory. So I just did it with the plain simple `htop` command, and here is the result:
Setup (Debian 12, Icinga 2 linked to `jemalloc`):

And at least 1 object for each of the remaining object types, e.g. 1 IcingaDB, 2 Endpoints, 16 Notifications, etc.
| Main Process Memory | Master w/ mutex | Master w/o mutex |
|---|---|---|
| Resident (RES) | 830M | 690M |
| Virtual (VIRT) | 3560M | 1595M |
As a countermeasure for race conditions, #9364 added a mutex for every object attribute with a type that's incompatible with `std::atomic`. At the moment, that's implemented using a dedicated `std::mutex` for every attribute of every single object. On my machine, `sizeof(std::mutex) = 40`, and if I compare the `sizeof(icinga::Host)` with and without these mutexes, that's a 70% increase. However, that won't result in a 70% increase in the memory usage of Icinga 2 as a whole (for example, all strings like object names are dynamically allocated and thus not part of `icinga::Host` itself, so they aren't affected by this increase).

Tasks
- Improve this. One idea would be to take some inspiration from how something like `atomic_load(const std::shared_ptr<T>*)` is/can be implemented. Note that using only part of the address as the key, i.e. sharing a mutex between objects, would reduce the memory requirements.
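The idea above can be sketched as a fixed pool of mutexes indexed by a hash of the attribute's address, similar in spirit to how standard library implementations back `atomic_load(const std::shared_ptr<T>*)` with a lock pool. All names here (`MutexPool`, `MutexFor`, the pool size of 256) are assumptions for illustration, not a proposed final design:

```cpp
#include <array>
#include <cstdint>
#include <mutex>

// Shared mutex pool: instead of one std::mutex per attribute, a fixed
// table of mutexes is indexed by a hash of the attribute's address.
// Distinct attributes may share a mutex, trading some contention for
// memory.
class MutexPool {
public:
    static std::mutex& MutexFor(const void* addr) {
        auto key = reinterpret_cast<std::uintptr_t>(addr);
        key >>= 4; // drop the low bits, which are mostly alignment zeros
        return s_Mutexes[key % s_Mutexes.size()];
    }

private:
    static inline std::array<std::mutex, 256> s_Mutexes;
};
```

A `Locked<>`-style accessor would then lock `MutexPool::MutexFor(&m_Value)` instead of a per-attribute member mutex, so the per-object overhead drops to zero while the whole process pays for only 256 mutexes (about 10 KB at 40 bytes each). One caveat: any code path that locks two attributes at once would have to handle the case where both hash to the same (non-recursive) mutex.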