Closed: thebracket closed this 6 months ago
Note that this is a draft and should stay that way until I've tweaked the map size and added some negative caching. The negative caching in particular should make a huge difference (relatively speaking; we're still talking microseconds), because misses are the most expensive LPM lookups.
Ok, negative cache and reasonable size limits are in place. Robert tested and reports a 30% reduction in CPU usage to maintain the same bandwidth levels. So I'm calling this one a win. :-)
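For anyone curious how a negative cache helps here, a minimal sketch in Python of the idea (not the project's actual eBPF/C implementation; `CachedLpm` and its members are hypothetical names, and a plain dict stands in for the real LPM trie):

```python
class CachedLpm:
    """Wraps a longest-prefix-match lookup with a bounded negative cache.

    Misses are the most expensive LPM lookups, so remembering recent
    misses avoids repeating the full trie walk for unmapped addresses.
    """

    def __init__(self, table, max_negative=1024):
        self.table = table            # stand-in for a real LPM trie
        self.negative = set()         # recently-seen addresses with no mapping
        self.max_negative = max_negative

    def lookup(self, addr):
        if addr in self.negative:     # fast path: known miss
            return None
        value = self.table.get(addr)  # stand-in for the real LPM walk
        if value is None:
            if len(self.negative) >= self.max_negative:
                self.negative.pop()   # crude eviction to bound memory
            self.negative.add(addr)
        return value
```

The size bound matters: without it, a scan of unmapped addresses would grow the miss set without limit.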
Just confirmed that this works with on-a-stick mode (on Payne) also. :-)
Maaaaan, awesome performance impact! It puts me in the mind-state of "I can't wait to try it". As we say in Poland, I'm "bathed in hot water" (itching to dive in).
I will post before-and-after performance graphs as soon as possible after updating to v1.5.
Please do! We can’t wait
Small bug: Web UI throughput graph locks up with this branch
The axis scale is also wrong.
For the scale fix, it's pretty funny. The Plotly docs say to specify `exponentformat: "Si"`. We tried that over and over before, and it doesn't work. It turns out that `"SI"` (all caps) does work. Bad documentation, no donut.
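For reference, a minimal Plotly (Python API) fragment showing the value that actually takes effect; the data here is just placeholder throughput numbers:

```python
import plotly.graph_objects as go

fig = go.Figure(go.Scatter(x=[0, 1, 2], y=[1e6, 2e6, 3e6]))
# "SI" (all caps) works; the lowercase "Si" suggested by the docs is ignored.
fig.update_yaxes(exponentformat="SI")
```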
Small optimization for well-behaved setups that correctly map their IP addresses (and a tiny slowdown for IPs that aren't mapped). Does nothing at all for "on a stick" configurations:
- Added a `TRACING` define to the XDP/TC kernels, allowing logging of execution times to `trace_pipe`. Not very accurate.
- Added a `HOT_CACHE` define to `lpm.h`; setting this enables the hot cache.

My limited testing indicated a decent per-packet speed-up, although I don't trust the nanosecond measurements emitted by the kernel - I think the clock doesn't update often enough.
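To illustrate the hot-cache idea in pseudocode form: memoize the result of recent lookups so repeat packets from the same address skip the full LPM walk. This is a hypothetical Python sketch, not the C code behind `HOT_CACHE`; the names `HotCache` and `lpm_lookup` are invented for illustration:

```python
from collections import OrderedDict

class HotCache:
    def __init__(self, lpm_lookup, capacity=256):
        self.lpm_lookup = lpm_lookup      # the expensive full LPM walk
        self.cache = OrderedDict()        # addr -> result, in LRU order
        self.capacity = capacity

    def lookup(self, addr):
        if addr in self.cache:
            self.cache.move_to_end(addr)  # refresh LRU position
            return self.cache[addr]
        result = self.lpm_lookup(addr)    # full lookup on a cache miss
        self.cache[addr] = result
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return result
```

Bounding the capacity keeps memory predictable, which matters when the map lives in kernel space.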