Closed arunava-basu closed 6 years ago
Hi,
If Splunk cannot consume the logs fast enough, the logs are lost.
You can check this thread https://github.com/cloudfoundry-community/firehose-to-syslog/issues/159 .
Please try the version mentioned in that thread, from the buffer branch, which includes a buffer to relieve pressure on the Splunk ingestor.
Thanks
Hi,
We have installed the 'Splunk Firehose Nozzle for PCF' tile (https://docs.pivotal.io/partners/splunk/index.html) and it fixed the issue. FYI, we have completed performance testing with 4000 concurrent POST requests and we can see all 4000 logs inside the Splunk GUI.
Regards, Arunava
Nice. I was going to suggest using the native Splunk nozzle.
Hi Team,
Is there any way to check the hit count that the WebSocket received and sent to the Splunk endpoint from the firehose-to-syslog application?
Traffic Controller ---Websocket---> Firehose-to-syslog nozzle ---Websocket---> Splunk endpoint
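One rough way to estimate the nozzle-side hit count is to count the firehose log lines that mention the app's GUID, the same thing `cf nozzle -f LogMessage | grep <app_guid> | wc -l` does on the command line. Below is a minimal sketch of that counting step; the GUID `abc-123` and the sample log lines are made up for illustration:

```python
import re

def count_app_hits(lines, app_guid):
    """Count log lines mentioning the given app GUID,
    mimicking `cf nozzle -f LogMessage | grep <app_guid> | wc -l`."""
    pattern = re.compile(re.escape(app_guid))
    return sum(1 for line in lines if pattern.search(line))

# Hypothetical sample of firehose LogMessage output.
sample = [
    'LogMessage app_guid:"abc-123" msg:"POST /orders 201"',
    'LogMessage app_guid:"def-456" msg:"GET /health 200"',
    'LogMessage app_guid:"abc-123" msg:"POST /orders 201"',
]
print(count_app_hits(sample, "abc-123"))  # → 2
```

Running this counter over the nozzle output for the test window, and comparing it with Splunk's event count for the same window, would show at which hop the events disappear.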
We have deployed the firehose-to-syslog application with 18 instances, each with 1 GB of RAM and 1 GB of storage. But when we did performance testing with 4000 concurrent POST requests from a script, we were not able to see all 4000 logs inside the Splunk GUI; we saw only 3689 hits. We need to check whether the issue is on Cloud Foundry's end or on Splunk's end.
During the performance testing:
- From the application logs (`cf logs <app_name>`), we saw all 4000 logs.
- From the CF nozzle output (`cf nozzle -f LogMessage | grep <app_guid>`), we saw 3998 logs.
- In Splunk, we got only 3689 logs, so there is a 7.7% log loss inside Splunk.

Could you please help me out on this?
Regards, Arunava Basu
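For reference, the 7.7% loss figure follows from the counts reported above (a quick sanity calculation against the nozzle and Splunk totals):

```python
# Counts reported during the performance test.
sent_by_nozzle = 3998     # events seen via `cf nozzle -f LogMessage`
indexed_by_splunk = 3689  # hits visible in the Splunk GUI

loss = sent_by_nozzle - indexed_by_splunk   # 309 events
loss_pct = 100 * loss / sent_by_nozzle      # about 7.73
print(f"{loss} events lost ({loss_pct:.1f}% of nozzle output)")
```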