Closed sklarsa closed 1 year ago
```go
newSender.WriteRaw([]byte(s.Messages()))
```
isn't a correct approach, as the data is written to the socket in a loop: https://github.com/questdb/go-questdb-client/blob/bb06b09092b9923ad8a7cee304a80092d2dd2e0c/sender.go#L791-L795
So, the `Messages` method should at least consider `lastMsgPos` when returning the remaining portion of the buffer. But this also means that the remaining portion of the buffer may contain an incomplete ILP message at its start, e.g.:
```
<written_to_socket>readings,city=London,make=Omron tempe</written_to_socket><to_be_written>rature=23.5,humidity=0.343 1465839830100400000\n
readings,city=Bristol,make=Honeywell temperature=23.2,humidity=0.443 1465839830100600000\n
readings,city=London,make=Omron temperature=23.6,humidity=0.348 1465839830100700000\n</to_be_written>
```
Tags here are used for illustration purposes.
So, overall I'm not sure if this approach is good enough. A proper solution would require tracking the boundaries of all messages in the buffer, but I'm not sure if we want this complexity.
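To make the trade-off concrete, here is a minimal sketch (not part of go-questdb-client; `msgBuffer`, `appendMsg`, and `resendFrom` are hypothetical names) of what "tracking the boundaries of all messages" could look like: alongside the byte buffer we record the end offset of every finished message, so a resend on a fresh connection can start at the first message that was not fully written, rather than mid-message.

```go
package main

import "fmt"

// msgBuffer is an illustrative sketch, not the client's actual buffer type.
// It records the end offset of each complete ILP message appended to it.
type msgBuffer struct {
	data       []byte
	boundaries []int // end offset (exclusive) of each finished message
}

func (b *msgBuffer) appendMsg(msg string) {
	b.data = append(b.data, msg...)
	b.boundaries = append(b.boundaries, len(b.data))
}

// resendFrom returns the offset where a resend on a fresh connection should
// begin: the start of the first message that was not fully written to the
// old socket. This avoids replaying the tail half of a partially sent message.
func (b *msgBuffer) resendFrom(written int) int {
	start := 0
	for _, end := range b.boundaries {
		if end > written {
			break
		}
		start = end // this message made it out in full
	}
	return start
}

func main() {
	var b msgBuffer
	b.appendMsg("m1 f=1 100\n") // occupies bytes 0..11
	b.appendMsg("m2 f=2 200\n") // occupies bytes 11..22
	fmt.Println(b.resendFrom(0))  // nothing sent yet: resend from 0
	fmt.Println(b.resendFrom(5))  // cut mid first message: still resend from 0
	fmt.Println(b.resendFrom(11)) // first message fully sent: resend from 11
}
```

The cost is an extra slice of offsets that grows with the number of buffered messages, which is exactly the complexity the comment above is weighing.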
Thanks for the feedback. The main goal here is to improve the resiliency of ILP connections, and this seemed like low-hanging fruit. But I clearly underestimated the complexity of the problem :-)
I'm wrapping my `LineSender` in a `Writer` struct that contains a pointer to the sender, plus the context and `ilpOpts` used to construct it. This way, if an error occurs in `.At()` or `.Flush()`, I can build a new `LineSender`, copy the buffer from the old sender to the new one, and swap the pointer to the new sender. But doing this requires direct access to the underlying buffer. I don't think it's great to expose this level of internal struct mechanics, but my idea would work if I could just write directly to the buffer (using an escape hatch of sorts).
Enter the `LineSender.WriteRaw()` function, which allows a user to write directly to a `LineSender`'s buffer. The obvious downside is that there's no error checking. But if we clearly document its usage (like `sync.Mutex`'s `TryLock` func), maybe it's good enough to keep around?
This is a bit of a workaround, and not a full solution for issues like #10 and #9, but maybe it could enable future development of a connection pool/manager class?