Open excurso opened 3 months ago
Hi excurso, thank you for sending this patch!
I'm delighted that this works for you but I don't think that we should apply this to the main repository. Fabricating data is a bit of a hack. ;P
The proper fix for this would involve figuring out why the spikes occur.
@atagar: Why are you closing the PR without any discussion?
No, it is not a hack. A client program should verify the data it receives before processing it further. This currently doesn't happen in the master branch of nyx.
What do you think is more accurate? An average of adjacent values, or spikes of several GB/s, when most people don't even have that much bandwidth?
Look at the issue #31524. Nobody was able to solve this on Tor's side for 4 years. It is labeled with "Mystery Bug". Do you expect a change soon? So let the issue of tor be an issue of nyx for another 10 years, hm? Why not take over "the hack", at least as long as the issue on Tor's side is not resolved? It can still be removed later.
Hi excurso. I left Tor 4 years ago so Nyx is unmaintained. However, I love contributors! Hence why I still reply to pull requests like this. ;P
Reopening as you requested. Please talk with Tor's employees if you'd like to further pursue this.
A client program should verify the data it receives before processing it further.
Nyx merely displays the data that it gets from Tor. There isn't any 'verification' that Nyx can do. Your patch just replaces excessively high datapoints to work around Tor's bug.
I think that the proper route forward would be to ask David why he closed Tor's ticket for this.
Hi atagar!
OK, I got it!
Your patch merely replaces excessively high datapoints to work around Tor's bug.
Yes
There isn't any 'verification' that Nyx can do.
There's already a verification in my fix: it checks whether a value is within the bandwidth burst limit. If it isn't, you know it is invalid and can at least cap it at the burst limit. But replacing it with the average of the adjacent values is, I think, a more accurate solution.
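The check described above can be sketched roughly like this. This is a minimal illustration of the idea, not the actual patch; the function name `smooth_spikes` and its parameters are hypothetical:

```python
def smooth_spikes(samples, burst_limit):
    """Replace implausible readings (above the relay's bandwidth
    burst limit) with the average of their neighbouring values."""
    cleaned = list(samples)
    for i, value in enumerate(cleaned):
        if value > burst_limit:
            neighbours = []
            if i > 0:
                neighbours.append(cleaned[i - 1])
            if i + 1 < len(samples):
                neighbours.append(samples[i + 1])
            # with no neighbours to average, fall back to capping
            # the value at the burst limit
            cleaned[i] = (sum(neighbours) / len(neighbours)
                          if neighbours else burst_limit)
    return cleaned

# A multi-GB/s spike between two plausible readings gets averaged away:
print(smooth_spikes([100, 5_000_000_000, 120], burst_limit=1_000_000))
# → [100, 110.0, 120]
```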
Sure, it would be better to have a fix for this in Tor. I would look into it there, but I currently don't have the time for long debugging sessions, and the "Mystery Bug" label implies such a session... Later, maybe. Hence the quick fix in nyx for now.
Maybe it is the conversion between signed and unsigned int/long that causes the problem in Tor. When a signed int goes negative and is then converted to an unsigned int, you can get such huge unsigned values. But I have not verified this yet...
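The effect being speculated about here is easy to demonstrate. This toy snippet (unrelated to Tor's actual code) reinterprets a negative 32-bit signed integer's bytes as unsigned, which is what an errant signed-to-unsigned conversion in C would produce:

```python
import struct

# Pack -1 as a little-endian 32-bit signed int, then reinterpret
# the same four bytes as an unsigned 32-bit int.
raw = struct.pack('<i', -1)
(as_unsigned,) = struct.unpack('<I', raw)

print(as_unsigned)  # → 4294967295, i.e. a ~4 GB/s "reading"
```

Even a small negative delta would show up as a value near 2^32 (or 2^64 for a 64-bit counter), which matches the multi-GB/s spikes being reported.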
I have traced the root cause of this issue.
Tor reports cumulative totals via `stats_n_bytes_read/write`. Each bandwidth reading is derived by subtracting the previously cached value from the current one; the old cache entry is then popped and the latest value is pushed to the front. This mechanism is what causes the graph spikes: when Nyx disconnects and later reconnects to Tor, the latest bandwidth cache entry will contain all the data read/written since the last reading.
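The reconnect spike described above can be shown with a toy illustration (this is not Tor's actual code; the numbers are made up):

```python
# Cumulative byte counters only ever grow; per-interval bandwidth
# is the difference between consecutive readings.
readings = [1_000, 2_000, 3_000]  # polled once per second
deltas = [b - a for a, b in zip(readings, readings[1:])]
print(deltas)  # → [1000, 1000], a steady 1000 B/s

# After a disconnect, the first new delta spans the whole gap, so it
# covers everything read/written since the last poll -- the spike.
readings_after_gap = [3_000, 600_000]  # reconnected much later
spike = readings_after_gap[1] - readings_after_gap[0]
print(spike)  # → 597000, charted as if it happened in one second
```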
I see three options for addressing this:
I plan on posting this response here as well (waiting on account approval), as this is not a "Mystery Bug". This way we can see what Tor thinks about this behavior, and get confirmation on whether it was intentionally designed this way.
Fixed the issue with bandwidth graphs showing spikes at startup, which, because of the spike's high value, made the rest of the values invisible for some time. Now, if a spike value is greater than the bandwidth burst value, it is replaced with the average of the adjacent values.