First of all, we have been using the splunk-firehose-nozzle (sfn) for many years and are very happy with it.
As ever more data is sent to our Splunk environment, we noticed very unequal loads on the heavy forwarders (HFs).
Our Splunk setup consists of 6 HFs with an Azure Load Balancer in front.
Because the sfn keeps its connection to Splunk open (more or less) forever, the Azure Load Balancer never gets a chance to properly balance the load across the HFs.
This happens because the nozzle continuously drains the response body, so the TCP connection is never closed.
This PR adds a (hardcoded) interval where it will close the connection every 5 seconds.
We have tested this in all our environments, including production, and it solves the issue: we see new TCP connections every 5 seconds, and the load is now distributed evenly across all HFs.
For simplicity I chose a hardcoded 5-second interval; we could also opt for:
Please advise...