Soulou opened this issue 5 years ago
What's tricky is that sometimes it fails in the read_loop and sometimes in the write_loop.
W, [2019-01-16T20:20:19.190486 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Died from: No data from socket
W, [2019-01-16T20:20:19.191380 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Failed to connect: Connection reset by peer. Retrying in 0.5 seconds.
W, [2019-01-16T20:20:19.691925 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Failed to connect: Connection refused - connect(2) for "127.0.0.1" port 4150. Retrying in 0.6 seconds.
W, [2019-01-16T20:20:20.317710 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Failed to connect: Connection reset by peer. Retrying in 1.9 seconds.
# Restart
E, [2019-01-16T20:20:26.819594 #2132] ERROR -- : [host: 127.0.0.1 port: 4150] Error received: E_PUB_FAILED PUB failed exiting
W, [2019-01-16T20:20:26.847915 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Died from: No data from socket
W, [2019-01-16T20:20:26.849932 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Failed to connect: Connection reset by peer. Retrying in 0.5 seconds.
W, [2019-01-16T20:20:27.350496 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Failed to connect: Connection refused - connect(2) for "127.0.0.1" port 4150. Retrying in 1.0 seconds.
# Restart
W, [2019-01-16T20:20:32.791497 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Died from: No data from socket
W, [2019-01-16T20:20:32.793216 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Failed to connect: Connection reset by peer. Retrying in 0.5 seconds.
W, [2019-01-16T20:20:33.293557 #2132] WARN -- : [host: 127.0.0.1 port: 4150] Failed to connect: Connection refused - connect(2) for "127.0.0.1" port 4150. Retrying in 1.0 seconds.
So after further analysis of other clients in other languages, it's normal for producing a message to fail sometimes, but in most libraries there is a way to get the error and retry. Here it's completely silent; that's the main point.
If you know how to solve that and can give me a few hints, I'll work on it.
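For example, what I'd like to be able to write on the caller side is roughly the following (write! is hypothetical, it doesn't exist in nsq-ruby today; the point is only that there has to be something to rescue):

# Hypothetical caller-side retry, assuming the producer had a write that
# raises when the message cannot be delivered.
def publish_with_retry(producer, payload, attempts: 3)
  attempts.times do |i|
    begin
      producer.write!(payload)          # assumed to raise on failure
      return true
    rescue StandardError => e
      warn "publish failed (#{e.class}: #{e.message}), attempt #{i + 1}/#{attempts}"
      sleep(0.5 * (i + 1))
    end
  end
  false
end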
You're right that nsq-ruby doesn't wait for the acknowledgement from nsqd like in some other client libraries. From the README:
Note: We don't wait for nsqd to acknowledge our writes. As a result, if the connection to nsqd fails, you can lose messages. This is acceptable for our use cases, mostly because we are sending messages to a local nsqd instance and failure is very rare.
This was done for simplicity and performance reasons but might not be acceptable for all use cases.
And like you noticed, the retry mechanism isn't perfect because the first few messages that are sent after nsqd is down might get sent without an exception being raised (see: https://github.com/wistia/nsq-ruby/blob/master/spec/lib/nsq/producer_spec.rb#L100).
I haven't really thought about how it would look to add synchronous writes (where we wait for an OK from nsqd), but if you want to take a crack at implementing something there, I'd be happy to take a look.
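Very roughly, and just to give the idea a shape, it could look something like this (write_sync and the response queue are placeholders, not existing nsq-ruby internals):

# Sketch only: the read loop would push each response frame ('OK' or an E_*
# error string) onto a response queue, and a synchronous write would block on
# that queue after handing the command to the write loop.
class SyncWrites
  def initialize(write_queue, response_queue)
    @write_queue    = write_queue      # consumed by the existing write loop
    @response_queue = response_queue   # fed by the read loop (assumed change)
  end

  def write_sync(data)
    @write_queue.push(data)
    response = @response_queue.pop                 # blocks until nsqd answers
    raise "PUB failed: #{response}" unless response == 'OK'
    true
  end
end

That still leaves open how to handle timeouts and how to match responses to writes when several are in flight, which is probably where most of the work would be.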
I'm sorry this has caused you problems!
Obviously I didn't notice the warning in the README; sorry for my reaction, I should have been more careful. By any chance, do you have an idea of how you would implement this if you had to do it?
I've started a PR (work in progress) to take errors into account when data is sent to NSQ.
Basically we were losing messages in our infrastructure; this is the result of my investigation.
I did a PoC with a producer writing one message every second and a consumer consuming these messages, and during the test I was restarting the nsqd instance.
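The script was something like this (a sketch, not the exact code I used; topic and channel names are placeholders):

require 'nsq'

producer = Nsq::Producer.new(nsqd: '127.0.0.1:4150', topic: 'poc-jobs')

Thread.new do
  consumer = Nsq::Consumer.new(nsqd: '127.0.0.1:4150', topic: 'poc-jobs', channel: 'poc')
  loop do
    msg = consumer.pop                 # blocks until a message is available
    puts "consumed: #{msg.body}"
    msg.finish
  end
end

i = 0
loop do
  i += 1
  producer.write("job #{i}")           # nsqd gets restarted by hand while this runs
  puts "produced: job #{i}"
  sleep 1
end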
The result was pretty straightforward. Here are the consumer logs:
Here are the logs of the producer:
We can see that job 4 was never consumed, and after a short investigation we found that it was never emitted. The reason is probably in connection.rb.
So it never gets to the requeue in the exception handler; the loop simply exits on the next iteration with
break if data == :stop_write_loop
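To make that concrete, here is a simplified paraphrase of the write loop I'm talking about (names approximated, not the exact connection.rb source):

def write_loop
  data = nil
  loop do
    data = @write_queue.pop
    break if data == :stop_write_loop   # <- the loop often exits here instead of raising
    # Writing a PUB to a connection nsqd has already closed does not
    # necessarily raise: the bytes land in the kernel buffer and the failure
    # only surfaces later, on the read side or on a subsequent write.
    write_to_socket(data)
  end
rescue => ex
  # The requeue only happens when an exception reaches this rescue. When the
  # read loop notices the dead connection first and pushes :stop_write_loop,
  # we break out cleanly above and the lost PUB is never requeued.
  @write_queue.push(data) if data =~ /^(M)?PUB/
  die(ex)
end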
I've done a small patch with a 'memory' and it works correctly, but I'm sure it's not the best solution (actually, depending on when the failure happens, sometimes the message is actually handled and sometimes not; how can we check that?).
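Roughly, the idea is the following (an illustrative sketch only, not the actual change from the PR; reconnected is an assumed hook):

def write_loop
  @last_pub = nil
  loop do
    data = @write_queue.pop
    break if data == :stop_write_loop
    # Remember the last publish command we popped, since we cannot tell from
    # here whether nsqd actually handled it. (Existing rescue/requeue code
    # omitted for brevity.)
    @last_pub = data if data.start_with?('PUB', 'MPUB')
    write_to_socket(data)
  end
end

def reconnected
  # Assumed hook called once the connection is re-established: resend the
  # remembered message. If nsqd had in fact processed it before dying, this
  # produces a duplicate, which matches the "sometimes the message is handled
  # and sometimes not" behaviour above.
  @write_queue.push(@last_pub) if @last_pub
  @last_pub = nil
end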
Could I have your input on this? It really needs to be fixed; it's unacceptable to lose messages like this.
Thanks a lot (ping @bschwartz )