s-rah / onionscan

OnionScan is a free and open source tool for investigating the Dark Web.
https://twitter.com/OnionScan

Better Timeout Policies #65

Open s-rah opened 8 years ago

s-rah commented 8 years ago

Some of the new improvements, e.g. spider/ and the bitcoin changes, have dramatically increased the timing expectations for certain sites. For example, scanning for onion peers over the bitcoin protocol takes a rather long time, and a user who enables that scan while also configuring a small timeout should likely be warned that it is a bad idea.
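A minimal sketch of what such a warning could look like, in Go; the configuration struct, field names, and the 120-second floor are all illustrative assumptions, not OnionScan's actual options:

```go
package main

import "log"

// scanConfig is a stand-in for the scanner configuration; the field names
// here are illustrative, not OnionScan's actual ones.
type scanConfig struct {
	timeoutSeconds int  // user-supplied timeout value
	scanBitcoin    bool // whether the bitcoin protocol scan is enabled
	spiderEnabled  bool // whether the spider scan is enabled
}

// warnOnRiskyTimeout logs a warning if a slow scan is enabled but the
// configured timeout is unlikely to be long enough for it to finish.
func warnOnRiskyTimeout(cfg scanConfig) {
	const slowScanMinimumSeconds = 120 // assumed floor, not a measured value
	if (cfg.scanBitcoin || cfg.spiderEnabled) && cfg.timeoutSeconds < slowScanMinimumSeconds {
		log.Printf("warning: timeout of %ds is probably too short for the enabled protocol scans; consider at least %ds",
			cfg.timeoutSeconds, slowScanMinimumSeconds)
	}
}

func main() {
	warnOnRiskyTimeout(scanConfig{timeoutSeconds: 30, scanBitcoin: true})
}
```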

On top of that, we need to put some thought into why timeouts exist and how they can be helpful. Some thoughts:

* As said above, some protocol scans are really slow and can potentially contradict and confuse user specified timeouts.
* If the first protocol scan succeeds we probably want to ignore timeouts.

laanwj commented 8 years ago

> As said above, some protocol scans are really slow and can potentially contradict and confuse user specified timeouts.

Yes, things are becoming slow by default; I'm partially to blame for that :) The reason the various bitcoin scans are so slow (when they connect) is that a node can take a significant time to reply to a getaddr message, as addr notifications are queued up and handled periodically. Even the currently hardcoded 30-second deadline is sometimes not enough to get the result. It would make sense to make that configurable, or have it depend on some global setting.
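A minimal sketch of that idea in Go (illustrative names only, not OnionScan's actual bitcoin scanner): the getaddr reply deadline is passed in, so it can come from a flag or a global scan-timeout setting instead of a hardcoded constant.

```go
package bitcoin // illustrative package name, not OnionScan's layout

import (
	"net"
	"time"
)

// ReadAddrReplies waits for addr messages on an already established bitcoin
// peer connection. The deadline is a parameter so it can be taken from a
// command line flag or a global scan-timeout setting rather than fixed at 30s.
func ReadAddrReplies(conn net.Conn, deadline time.Duration) ([]byte, error) {
	// addr notifications are queued up and flushed periodically by the
	// remote node, so this deadline needs to be generous.
	if err := conn.SetReadDeadline(time.Now().Add(deadline)); err != nil {
		return nil, err
	}
	buf := make([]byte, 4096)
	n, err := conn.Read(buf)
	if err != nil {
		return nil, err
	}
	return buf[:n], nil
}
```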

laanwj commented 8 years ago

> If the first protocol scan succeeds we probably want to ignore timeouts

Another potential idea would be to distinguish the error codes returned by the proxy. Tried out a bit:

```
Oct 05 15:25:32.000 [notice] Tried for 120 seconds to get a connection to [scrubbed]:8333. Giving up. (waiting for rendezvous desc)
```

In that case the scanner's own timeout would likely trigger sooner. But at least "connection refused" could be used as a flag to know for sure that the port was closed and that it was not some transient Tor issue.

For golang's socks implementation these are mapped to errors here.
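A minimal sketch of distinguishing those errors, assuming the SOCKS5 dialer from golang.org/x/net/proxy; matching on the "connection refused" substring reflects how that package formats its dial errors and is an assumption rather than a stable API:

```go
package main

import (
	"fmt"
	"net"
	"strings"

	"golang.org/x/net/proxy"
)

type portState int

const (
	portUnknown portState = iota // timeout or possible transient Tor problem
	portClosed                   // proxy reported connection refused
	portOpen                     // connection through the proxy succeeded
)

// checkPort dials target through the Tor SOCKS proxy and classifies the result.
func checkPort(socksAddr, target string) (portState, error) {
	dialer, err := proxy.SOCKS5("tcp", socksAddr, nil, proxy.Direct)
	if err != nil {
		return portUnknown, err
	}
	conn, err := dialer.Dial("tcp", target)
	if err != nil {
		if strings.Contains(err.Error(), "connection refused") {
			return portClosed, nil // port is definitely closed, not a Tor issue
		}
		if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
			return portUnknown, nil // could be a transient Tor issue
		}
		return portUnknown, err
	}
	conn.Close()
	return portOpen, nil
}

func main() {
	state, err := checkPort("127.0.0.1:9050", "example.onion:8333")
	fmt.Println(state, err)
}
```

With something like this, a "connection refused" result could be reported immediately as a closed port, while timeouts would remain ambiguous and stay subject to whatever retry or timeout policy is chosen.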