pelikan-io / pelikan

Pelikan is a framework for building local or distributed caches. It comes with a highly extensible architecture, best-in-class performance, and superb operational ergonomics. You can use it to replace most of Memcached or a subset of Redis features.
https://pelikan.io
Apache License 2.0

Add log message when hash power is too low #29

Open brayniac opened 1 year ago

brayniac commented 1 year ago

Add a log message that helps users determine when the hash power is too low. It's easy to have a configuration issue there, and it would be nice to suggest increasing the hash power. We might want to log these messages only once per run, so we'd need to think about that a bit.

Another related configuration issue might be a segment size that is too small (items not fitting into segments).
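
A minimal sketch of what the once-per-run warning could look like, assuming the standard `log` facade (Pelikan's own logging macros may differ) and a hypothetical hook called from the hash table code:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

use log::warn; // assumption: the standard `log` facade; Pelikan's logger may differ

static HASH_POWER_WARNED: AtomicBool = AtomicBool::new(false);

/// Hypothetical hook: warn once per run when occupancy suggests that the
/// configured hash power is too low.
fn maybe_warn_hash_power(occupied_slots: usize, total_slots: usize, hash_power: u8) {
    // threshold is illustrative: warn when the table is more than 90% full
    if occupied_slots * 10 > total_slots * 9
        && !HASH_POWER_WARNED.swap(true, Ordering::Relaxed)
    {
        warn!(
            "hash table is {:.0}% full; consider increasing hash_power (currently {})",
            100.0 * occupied_slots as f64 / total_slots as f64,
            hash_power
        );
    }
}
```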

thinkingfish commented 1 year ago

I wonder if the grander idea is to slowly build out a "config diagnostics console" that can eventually evolve into an autotuner.

E.g. the hash table load factor is the metric behind the problem in the issue title. For segment size, we want a max-object-size-to-segment-size ratio, or an internal fragmentation percentage.

There are generally two ways to approach this objective. One is self-contained: codify in Pelikan some intelligence that runs as a little state machine (or ML agent, if we want to sound trendy) and "scores" the main configuration values based on the metrics they are responsible for, like the few mentioned here. The other is to outsource that work to an external entity and simply curate a stream of events/logs to provide as data. In both cases, though, I suspect the trigger and frequency of the internal action(s) will be somewhat independent of debug logging.

Given that we have a very generic and flexible logging backend, we can potentially create a new log type to support this functionality, and gate the logging differently too (e.g. evaluate and/or log the hash table load factor only when we have to allocate a new overflow bucket; calculate internal fragmentation only when the most recent write wasted more than X% of segment space) to keep it very lightweight.
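
A rough sketch of that gating idea, with hypothetical hook names and an illustrative threshold standing in for the "X%" above:

```rust
/// Illustrative stand-in for the "X%" waste threshold mentioned above.
const FRAG_WASTE_THRESHOLD: f64 = 0.25;

/// Hypothetical hook, called only when an overflow bucket must be
/// allocated, so the load factor is not computed on every operation.
fn on_overflow_bucket_alloc(occupied_slots: usize, total_slots: usize) {
    let load_factor = occupied_slots as f64 / total_slots as f64;
    log::info!("hash table load factor at overflow allocation: {:.2}", load_factor);
}

/// Hypothetical hook, called after a write; reports internal fragmentation
/// only when the write wasted more than the threshold.
fn on_item_write(item_size: usize, reserved_size: usize) {
    let wasted = 1.0 - item_size as f64 / reserved_size as f64;
    if wasted > FRAG_WASTE_THRESHOLD {
        log::info!("write wasted {:.0}% of reserved segment space", wasted * 100.0);
    }
}
```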

brayniac commented 1 year ago

That'd be a big improvement. I wonder if, in the interim, we should just adjust some of our default values, maybe making them match what we currently have in the example config? The current default is hash_power = 16 with an overflow factor of 1.0, so effectively we have only 114688 item slots if launched without a config file.
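
For reference, one plausible reading of the arithmetic behind that figure; the bucket layout here is an assumption about the hash table internals, not taken from the code:

```rust
// Back-of-envelope check of the 114688 figure; bucket layout is assumed.
const HASH_POWER: u32 = 16;
const SLOTS: u64 = 1 << HASH_POWER; // 65536 slots
const USABLE: u64 = SLOTS / 8 * 7;  // assume 1 of every 8 slots holds bucket metadata
const TOTAL: u64 = USABLE * 2;      // assume overflow_factor = 1.0 doubles capacity
// TOTAL == 114688
```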

I guess as an alternative, we could make the config file a mandatory argument.

My biggest immediate concern is that the "no config provided" defaults are so conservative that it's easy to run into problems.

thinkingfish commented 1 year ago

Agree on improving the current defaults. Lacking a config file, I will probably base default values on a presumed average key-value size, e.g. 1KB (we can add an internal constant called TARGET_OBJECT_SIZE). So if people ask for 4GB of data memory, we assume we will have 4 million objects.
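
A minimal sketch of that derivation; the constant name comes from the comment above, while the function and its rounding policy are assumptions:

```rust
/// Presumed average key-value size, per the comment above.
const TARGET_OBJECT_SIZE: usize = 1024; // 1KB

/// Hypothetical derivation of a default hash power from the data memory
/// budget: size the table for heap_size / TARGET_OBJECT_SIZE objects.
fn default_hash_power(heap_size: usize) -> u8 {
    let expected_objects = heap_size / TARGET_OBJECT_SIZE;
    // round up to the next power of two so the table covers the estimate
    expected_objects.next_power_of_two().trailing_zeros() as u8
}
```

With a 4GB heap this gives 2^32 / 2^10 = 2^22, i.e. roughly 4 million expected objects and a hash power of 22.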

Related (but no action needed now): we have target sizes for the read and write buffers. The current value (16KB) agrees with the 1KB object size under moderate pipelining. Eventually we could provide a calculator and config generator that sets multiple parameters based on a few key assumptions, such as object size, object life cycle (creation rate and desired TTL), and concurrency level, which map more closely to users' mental model of caching.
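
To make the calculator idea concrete, a hypothetical sketch; the struct and field names, and the sizing formulas, are illustrative rather than Pelikan's actual config schema:

```rust
/// User-level assumptions that map to a caching mental model.
struct Assumptions {
    avg_object_size: usize, // bytes, e.g. 1024
    creation_rate: usize,   // new objects per second
    ttl: usize,             // desired TTL in seconds
    concurrency: usize,     // simultaneous connections
}

/// A few derived parameters a config generator could emit.
struct DerivedConfig {
    heap_size: usize,     // data memory for live objects
    hash_power: u8,       // hash table sized to the live object count
    buffer_size: usize,   // per-connection read/write buffer target
    buffer_memory: usize, // total buffer budget across connections
}

fn derive_config(a: &Assumptions) -> DerivedConfig {
    // steady-state live objects ~= creation rate x TTL
    let live_objects = a.creation_rate * a.ttl;
    let heap_size = live_objects * a.avg_object_size;
    let hash_power = live_objects.next_power_of_two().trailing_zeros() as u8;
    // buffers sized for the object plus pipelining headroom; the 16x factor
    // is illustrative and reproduces the current 16KB target for 1KB objects
    let buffer_size = (a.avg_object_size * 16).next_power_of_two();
    let buffer_memory = a.concurrency * buffer_size * 2; // one read + one write each
    DerivedConfig { heap_size, hash_power, buffer_size, buffer_memory }
}
```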