Actually, after spending a bit more time with the GlusterFS code, I'm not entirely sure what happens.
The LRU limit is definitely clamped when the inode table is created, but after that it seems like the limit can be arbitrarily increased.
I found this piece of code in `xlators/protocol/server/src/server.c`, in the function `server_reconfigure`:
```c
/* traverse through the xlator graph. For each xlator in the
   graph check whether it is a bound_xl or not (bound_xl means
   the xlator will have its itable pointer set). If so, then
   set the lru limit for the itable.
*/
xlator_foreach(this, xlator_set_inode_lru_limit, &inode_lru_limit);
```
I don't know enough about GlusterFS to tell if that function is only called when settings change or if it is also called when the volume is being initialized. I'll try to clarify this. Depending on the answer, my observation might be wrong and the docs might actually be correct.
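For anyone who, like me, was not familiar with `xlator_foreach`, here is a minimal, self-contained sketch of the traversal pattern the excerpt relies on. Every type and function name below is a made-up stand-in for illustration only; none of this is the real GlusterFS code.

```c
#include <stdint.h>
#include <stdio.h>

struct itable {
    uint32_t lru_limit;
};

struct xlator {
    const char *name;
    struct itable *itable; /* non-NULL only for "bound" xlators */
    struct xlator *next;
};

/* Stand-in for xlator_set_inode_lru_limit: update the inode table's
   limit if this xlator owns one. */
static void
set_inode_lru_limit(struct xlator *xl, void *data)
{
    uint32_t limit = *(uint32_t *)data;

    if (xl->itable)
        xl->itable->lru_limit = limit;
}

/* Stand-in for xlator_foreach: apply fn to every xlator in the graph. */
static void
foreach_xlator(struct xlator *head, void (*fn)(struct xlator *, void *),
               void *data)
{
    for (struct xlator *xl = head; xl; xl = xl->next)
        fn(xl, data);
}

int
main(void)
{
    struct itable it = {.lru_limit = 16384};
    struct xlator bound = {"server", &it, NULL};
    struct xlator top = {"graph-root", NULL, &bound};
    uint32_t inode_lru_limit = 32768;

    foreach_xlator(&top, set_inode_lru_limit, &inode_lru_limit);
    printf("lru_limit is now %u\n", (unsigned)it.lru_limit); /* 32768 */
    return 0;
}
```

The point being: reconfiguring pushes the new limit into every inode table found in the graph, which is why I wondered whether this also runs during initialization.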
Sorry for the confusion. The following explains what happens: there is an `inode_table_prune` function that checks whether the mem_pool has grown beyond the lru_limit and purges the oldest entries. This ensures that the mem_pool respects the configured limit. So my initial observation was wrong and the docs are actually correct. Everything works as expected. 🙂
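To make the pruning behaviour concrete, here is a small self-contained sketch of an LRU list that purges its oldest entries once it grows past a limit. It only mimics the effect of `inode_table_prune` described above; the types and names are illustrative, not the actual GlusterFS implementation.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct lru_node {
    int id;
    struct lru_node *next; /* next-oldest entry */
};

struct lru_table {
    uint32_t lru_limit;
    uint32_t lru_size;
    struct lru_node *oldest; /* head of the list = oldest entry */
    struct lru_node *newest; /* tail of the list = newest entry */
};

/* Drop entries from the oldest end until we are back under the limit. */
static void
table_prune(struct lru_table *t)
{
    while (t->lru_size > t->lru_limit) {
        struct lru_node *victim = t->oldest;

        t->oldest = victim->next;
        if (!t->oldest)
            t->newest = NULL;
        free(victim);
        t->lru_size--;
    }
}

/* Adding an entry may push the table over the limit, so prune afterwards. */
static void
table_add(struct lru_table *t, int id)
{
    struct lru_node *n = calloc(1, sizeof(*n));

    n->id = id;
    if (t->newest)
        t->newest->next = n;
    else
        t->oldest = n;
    t->newest = n;
    t->lru_size++;
    table_prune(t);
}

int
main(void)
{
    struct lru_table t = {.lru_limit = 3};

    for (int i = 1; i <= 5; i++)
        table_add(&t, i);
    /* Entries 1 and 2 were purged; 3, 4 and 5 remain. */
    printf("size=%u oldest=%d\n", (unsigned)t.lru_size, t.oldest->id);
    return 0;
}
```

So even if the table temporarily grows past the limit, the prune step brings it back down to the configured value, which is exactly why the docs turn out to be correct.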
The following docs mention settings that are no longer configurable (if they ever were at all):
The docs mention that performance can be improved in some cases by setting `network.inode-lru-limit` to a value of 50,000 or even 200,000. After looking through the GlusterFS code I noticed that this is impossible, because the value is clamped between 0 and 32,768 in `libglusterfs/src/inode.c` (a sketch of the clamping behaviour is shown below). I think the documentation should state clearly that the maximum is 32,768 and that setting higher values will not yield any additional benefit.
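To spell out what the clamp means in practice, here is an illustrative sketch rather than the verbatim excerpt. The constant and function names are made up; only the effect, that any configured value above 32,768 is reduced to 32,768, reflects what the real code does.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical name; the ceiling observed in the code is 32 * 1024. */
#define INODE_LRU_LIMIT_MAX (32 * 1024)

static uint32_t
clamp_lru_limit(uint32_t configured)
{
    /* Values above the ceiling are silently reduced, so 50,000 or
       200,000 behave exactly like 32,768. */
    if (configured > INODE_LRU_LIMIT_MAX)
        return INODE_LRU_LIMIT_MAX;
    return configured;
}

int
main(void)
{
    printf("%u\n", (unsigned)clamp_lru_limit(200000)); /* 32768 */
    printf("%u\n", (unsigned)clamp_lru_limit(50000));  /* 32768 */
    printf("%u\n", (unsigned)clamp_lru_limit(16384));  /* 16384 */
    return 0;
}
```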
Additionally, I think it's worth mentioning that the inode LRU cache cannot be deactivated in any way. This is probably a sane decision, but there is no further documentation available for this setting, and other caches can often be disabled. I naively assumed that this might be the case here as well and had to check the GlusterFS code to see for myself.