robcowart closed this 6 years ago
I wanted to add that this has been running at one of my customers for 4 days without issue. Event rates peak as high as 2500/sec with 3-5 name lookups per event, and DNS has not been a bottleneck.
It would be good to get this merged, as name resolution is a high-value feature for many logging and network flow related use-cases.
Updated the coding style based on feedback. Tested the change on a live system prior to committing, and everything still works as expected.
When caching features are enabled, queries are forwarded to the name server synchronously. The result is that slow-returning queries slow the processing of all data.
To evaluate the effect of these changes, I extracted 5 million IP addresses from the logs of a large firewall, which ensures that the mix of IPs is similar to what would be seen in the real world. I then repeated these 5 million IPs to produce a file with 10 million addresses. The data was processed locally on my MacBook Pro, with a DNS server on the local LAN that forwards the necessary queries to 1.1.1.1 and 8.8.8.8.
To establish a baseline, these 10M IPs were passed to Elasticsearch through the following filters:
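(Reconstructed sketch only; the file path, Elasticsearch host, and input settings are assumptions, and any parsing filters used for the baseline are omitted.)

```
input {
  file {
    # Hypothetical path to the flat file containing the 10M addresses
    path => "/tmp/ip_addresses.txt"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]   # assumed local Elasticsearch instance
  }
}
```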
The result was approximately 41,000-42,000 events per second (eps).
Next, a `dns` filter was added without caching. Knowing this would be slower, I switched to using only the first 100K entries. The results plummeted to ~450 eps.
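For reference, a `dns` filter without caching looks roughly like this (the field name below is a placeholder, not necessarily the one used in the test):

```
filter {
  dns {
    # Reverse-resolve the source IP to a hostname; "[source][ip]" is a placeholder field
    reverse => [ "[source][ip]" ]
    action  => "replace"
  }
}
```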
Next, caching was enabled:
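(Again a sketch; the cache sizes and TTLs below are illustrative, not the exact values used in the test.)

```
filter {
  dns {
    reverse => [ "[source][ip]" ]   # placeholder field name
    action  => "replace"

    # Illustrative cache settings; the values used in this test are not stated
    hit_cache_size    => 10000
    hit_cache_ttl     => 300
    failed_cache_size => 10000
    failed_cache_ttl  => 60
  }
}
```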
The results were much more inconsistent, as queries that cannot be answered by downstream servers block other queries until a timeout occurs.
I then applied this PR, and the improvement with the first 100K entries was obvious. The peak was over 2,000 eps.
I then switched back to the 10M record file. Remember, this is the same 5M records twice.
Here you can see how the performance continues to improve as the cache fills. However, I was a bit disappointed in the results. From the two spikes, one as high as 40,000 eps, you can see that the performance promises to be very good when the query result is already cached. While 10,000-15,000 eps is a huge improvement over the 450-500 eps we were seeing, I would have expected better results once the 5M records are repeated in the second half of the file.
I suspected the results were being evicted from the cache as it filled, so I increased the size of the cache and retested:
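(Sketch of the enlarged cache settings; the actual values used in the retest aren't stated, so these numbers are only illustrative.)

```
filter {
  dns {
    reverse => [ "[source][ip]" ]   # placeholder field name
    action  => "replace"

    # Caches sized large enough to hold every unique address in the 5M sample;
    # illustrative values, not the exact ones used in the retest
    hit_cache_size    => 8000000
    hit_cache_ttl     => 900
    failed_cache_size => 8000000
    failed_cache_ttl  => 900
  }
}
```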
With enough cache for all of the results in this real-world data sample, the advantage of having the results already cached is clear.
The baseline without any DNS filter was 41,000 eps. With the DNS filter and a warmed-up cache, 40,000 eps was possible. This indicates that the additional overhead is minimal.
With this PR I finally feel comfortable that I can use the DNS filter in production deployments without sacrificing ingest performance. This is a huge win for producing user-friendly data!