See the updated changelog file and kytos.conf.template for more specific information.
The default config should work well out of the gate for the current kytos-ng NApps and a network at AmLight's scale (I simulated it with 300 EVCs plus a few link flaps).
Notice that with queue monitors the main goal is to detect high queue usage over a delta t in seconds, sampled each second. We're not trying to have extremely granular, telemetry-like visibility; rather, on a per-second scale, we just want to start detecting when the queues of event buffers or the max workers of thread pools need to be increased, or when a NApp might be misbehaving and sending way too many events.
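To make the windowed detection idea concrete, here's a minimal, self-contained sketch. The class name and parameters (delta_secs, min_hits, min_full_percent) are illustrative assumptions based on the description above, not kytos's actual implementation; see kytos.conf.template for the real option names:

```python
from collections import deque


class QueueMonitorWindow:
    """Illustrative sliding-window queue monitor (names are assumptions).

    Samples a queue size every second; if usage stays at or above a
    threshold for at least `min_hits` samples within the last
    `delta_secs` seconds, the window trips and a warning could be logged.
    """

    def __init__(self, maxsize, delta_secs=10, min_hits=5,
                 min_full_percent=100.0):
        self.maxsize = maxsize
        self.delta_secs = delta_secs
        self.min_hits = min_hits
        self.min_full_percent = min_full_percent
        self.samples = deque()  # (timestamp, qsize) pairs

    def sample(self, qsize, now):
        """Record one per-second sample; return True when the window trips."""
        self.samples.append((now, qsize))
        # Drop samples that fell out of the delta_secs window.
        while self.samples and now - self.samples[0][0] > self.delta_secs:
            self.samples.popleft()
        hits = sum(1 for _, size in self.samples
                   if size / self.maxsize * 100 >= self.min_full_percent)
        return hits >= self.min_hits


# Example: queue of maxsize 100; five consecutive seconds at 100% usage
# within a 10-second window trip the monitor on the fifth sample.
monitor = QueueMonitorWindow(maxsize=100, delta_secs=10, min_hits=5)
results = [monitor.sample(qsize=100, now=t) for t in range(5)]
```

The point of the `min_hits` over `delta_secs` design is to ignore a single momentary spike and only warn on sustained pressure.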
Local Tests
I tested the default config with 300 EVCs and some link flaps, and no warnings showed up, as expected.
I also explored the three configs described below, while also injecting hundreds of concurrent events targeting a slow-ish handler to simulate a case where the queue of a thread pool keeps growing significantly.
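As a rough illustration of that stress scenario (plain-Python stand-ins, not kytos code), a couple of slow workers draining a queue far more slowly than a burst of events arrives makes the backlog keep growing, which is exactly the situation the queue monitors should flag:

```python
import queue
import threading
import time

# Shared event queue drained by a small "thread pool" of slow handlers.
events = queue.Queue()


def slow_handler():
    """Worker that handles roughly 20 events per second."""
    while True:
        events.get()
        time.sleep(0.05)  # simulate a slow-ish event handler
        events.task_done()


for _ in range(2):  # only 2 workers
    threading.Thread(target=slow_handler, daemon=True).start()

for i in range(300):  # burst of 300 concurrent-ish events
    events.put(i)

# After 1 second, 2 workers at ~20 events/s have drained only ~40 events,
# so well over 200 events are still queued.
time.sleep(1)
backlog = events.qsize()
print(f"backlog after 1s: {backlog}")
```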
This (temporary) config can be useful when you just want to see, every second, whether there's at least 1 event being queued. That can give you an idea of how busy the queues are in a local stress test, for instance, which can help you identify baseline usage and/or spiky queue loads in a particular case:
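As a sketch, such a temporary config might look like the following; the option and key names here are assumptions for illustration only, so check kytos.conf.template for the exact names and defaults:

```ini
# Hypothetical example; real option/key names live in kytos.conf.template.
# Sample every second (delta_secs=1) and warn on a single hit (min_hits=1)
# whenever there is any queued event at all.
thread_pool_queue_monitors = [
    {"delta_secs": 1, "min_hits": 1, "min_queue_full_percent": 1}
]
```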
Closes #439
Config a (default)
Config b
Config c
End-to-End Tests