otterley closed this 7 years ago
Just realized this naive fix won't work because the write() function looks like:
```go
if writer, err = dest.Open(group, stream); err != nil {
	logDropBatch(dest.name, group, stream, err, batch)
	return
}
defer writer.Close()
```
So the connection couldn't be reused at the moment anyway.
@achille-roussel Can you explain why write() does an Open()/Close() cycle each time? I could hack up Close() in the syslog library not to actually close the socket, but that smells funny to me.
Open/Close are called to acquire and release a writer for a batch of messages; the main application code doesn't care whether this actually opens or closes connections. The reason I initially implemented it to open a connection for each batch is that it makes error handling much simpler than if connections were being reused. The implementation is also simpler since there's no need to maintain state, synchronize concurrent executions, etc. Now, this was working fine at the scale we were operating at last year, not so much anymore today.
@achille-roussel @dfuentes updated -- thoughts?
Each connection must be captured by the writer that owns it, so that no other writer can concurrently write to that connection; otherwise you'd end up with interleaved syslog messages that cannot be parsed by the receiver.
I'm not so sure about this. At least at the OS level, write(2) is an atomic operation, so as long as the complete message is copied, there's no need to worry about messages getting intermingled.
Yes, but here we're using a buffered writer, which gives no guarantee that each syslog line results in a single syscall. On top of that, the Go runtime puts file descriptors in non-blocking mode and uses an event loop to emulate blocking behavior for goroutines (all of this has to be done in user space), and nothing in the documentation of io.Writer/net.Conn/net.TCPConn mentions the behavior of Write under concurrent access, so I wouldn't make any assumptions.
Thanks for pointing that out. I'll keep hacking on it.
I'd like to punt on the more complex approach for the moment, since it's not clear just yet that we need concurrent syslog server connections. Before we go down that road, I'd like to try serializing writes and possibly adding a (limited) memory buffer, since I think it'll give us a lot of bang for the buck (if temporary).
What do you think of the latest changes?
@achille-roussel @dfuentes how about another go?
Added tests and fixed bugs per your suggestion @achille-roussel. go test -race passes.
@achille-roussel Thanks for your patience here. Still getting used to Go concurrency primitives...
I replaced all that code with a sync.Map-based implementation (as @dfuentes originally hinted at) and I believe the code is cleaner and correct now. WDYT?
ecs-logs was opening a new syslog TCP connection for every message batch. This had many undesirable downstream effects on resource utilization (among them, port exhaustion on our SOCKS proxies) and is arguably unnecessary anyway. Opening a new connection each time also prevents the TCP session (whether from the SOCKS proxy or directly from the client) from taking advantage of window scaling.
With this change, we now keep each syslog connection in a cache (keyed by destination) and reuse it if we can.
NOTE: To keep the change simple, I'm not going so far as to add connection pooling right now. If we can't get adequate throughput with a single TCP connection (which should increase as window scaling kicks in), then let's revisit.