When configuring a GELF output stream, the current transport options are TCP or UDP. Please consider adding an HTTP transport option. I know a custom plugin could be used to achieve this, but I think it could be generally useful, and suspect it would be less work to extend the existing gelf-client code.
A big advantage would be a stronger guarantee of output delivery. Even with TCP, if the connection drops, you don't know how much data was stuck in buffers and hadn't been processed by the destination. With HTTP, requests that receive an error or time out could be retried. It's not as good as using a message queuing protocol, but it's also a lot simpler.
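To make the retry idea concrete, here is a minimal sketch of what the sending side could look like, in Python using only the standard library. The `/gelf` path follows Graylog's GELF HTTP input convention; the payload fields follow the GELF 1.1 spec; the URL, timeout, and linear-backoff retry policy are illustrative assumptions, not a proposed implementation.

```python
import json
import time
import urllib.error
import urllib.request


def build_gelf_message(host, short_message, level=6, **extra):
    """Build a minimal GELF 1.1 payload.

    Per the GELF spec, additional (non-standard) fields must be
    prefixed with an underscore.
    """
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "level": level,
    }
    for key, value in extra.items():
        msg["_" + key] = value
    return msg


def post_gelf(url, message, retries=3, backoff=1.0,
              opener=urllib.request.urlopen):
    """POST a GELF message as JSON, retrying on error or timeout.

    `url` would point at a GELF HTTP input behind the load balancer,
    e.g. http://lb.example:12201/gelf (hypothetical address).
    Returns True once a 2xx response is received, False if all
    attempts fail -- at which point the caller still holds the
    message and can re-queue it, which is the delivery guarantee
    TCP alone cannot give.
    """
    data = json.dumps(message).encode("utf-8")
    for attempt in range(retries):
        req = urllib.request.Request(
            url, data=data,
            headers={"Content-Type": "application/json"},
        )
        try:
            with opener(req, timeout=5) as resp:
                if 200 <= resp.status < 300:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # connection refused, timeout, etc. -- retry
        time.sleep(backoff * (attempt + 1))  # simple linear backoff
    return False
```

The key property is that a failed or timed-out request leaves the full message in hand for a retry, whereas with a dropped TCP connection the sender cannot tell which buffered bytes the destination actually processed.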
Use case: in a cloud environment, we have an auto-scaling group of graylog2-servers that can grow to match demand. The nodes have a global output that directs a subset of the logged traffic to another destination. Ideally that destination could also auto-scale based on demand, but this does not fit well with the current TCP or UDP options. With HTTP, a load balancer can be used to distribute the output evenly, making scaling much easier.
graylog2-server 0.92.3