reardonia opened this issue 2 weeks ago
(1) and (2) look to be the same argument worded differently. Regardless, I agree with them.
(3) is a valid problem that needs to be addressed.
(4) is unrelated: tr_peerIo::is_seed_ stores whether the client is a seed, not the peer.
(5) Is there a problem with that?
(6) It looks like you didn't finish typing.
I also found an inconsistency in the process to find peer candidates.
At the swarm level, Transmission is allowed to connect to seeds even if it is itself a seed, as long as PEX is enabled.
But later on, no connection is actually made because of a second check, which is a subset of the first one.
> (6) It looks like you didn't finish typing.
Whoops... meant the swarm_is_all_seeds logic.
What is the issue?
(copy from convo at #6909)
Detecting and remembering seeds is a bit messy at the moment. Apologies if this reads a bit stream-of-consciousness, just trying to get the issue logged ASAP:
(1) If we detect a seed via the "have_all" message, we go down one path, with an observer/emit protocol. This doesn't seem to do anything other than set the bitfield hint; no "seed" state is ever changed. A set_have_all() is done, but we don't use that to check is_seed... mostly.

(2) If we detect a seed via "upload_only", we just set ADDED_F_SEED_FLAG in the pex object. It doesn't look like we do that in response to "have_all". We mix SEED_FLAG usage with is_seed() in various places, inconsistently.

(3) The pex object constructed in parse_ltep_handshake is just thrown away if the peer did not send us a legit IPv4/IPv6 address, so we never do anything with the upload_only message. We should at least publish() an event similar to the GotHaveAll() published in process_peer_message?

(4) Then there is peerIo::is_seed, which has a getter but no setter: is_seed_ is set on instantiation and stuck.

(5) The abstract class tr_peer has no local seed state and instead uses the has().has_all() method against the bitfield object.
(6) Within peer-mgr, there is another is_seed handler dedicated to bulk-setting peers as seeds based on announce & scrape results, but it does not use the same fields as
Seems like they should all be handled the same way, with explicit seed/upload_only status separate from the bitfield, and it should be maintained consistently across object lifetimes. Currently, none of the seed discovery is propagated back to the connectable_pool (aka the atom list) except in the case of an incoming connection that gave us a full LTEP handshake with an ipv4+port update. Transmission may be the only client that actually sends all that; qBT certainly doesn't.
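To illustrate the "handled the same way" idea: the sketch below models a single explicit seed state that every discovery path funnels through. This is not Transmission's actual code; the names (PeerSeedState, SeedSource, the handler functions) are hypothetical.

```cpp
#include <cassert>

// Hypothetical sketch: one explicit seed state per peer, updated from
// every discovery path, instead of inferring seed-ness from the bitfield
// in some places and from ADDED_F_SEED_FLAG in others.
enum class SeedSource { None, HaveAll, UploadOnly, TrackerScrape };

struct PeerSeedState
{
    bool is_seed = false;
    SeedSource source = SeedSource::None;

    // Every discovery path goes through one setter, so is_seed can
    // never disagree with the bitfield hint or the pex flags.
    void set_seed(SeedSource how) noexcept
    {
        is_seed = true;
        source = how;
    }
};

// A BT "have_all" handler would call the shared setter...
inline void on_got_have_all(PeerSeedState& peer)
{
    peer.set_seed(SeedSource::HaveAll);
}

// ...and so would the LTEP "upload_only" handler, instead of only
// touching a pex flag that may be thrown away.
inline void on_got_upload_only(PeerSeedState& peer)
{
    peer.set_seed(SeedSource::UploadOnly);
}
```

With a single setter like this, the inconsistency between (1), (2), and (6) above goes away by construction: any code that asks is_seed sees the same answer regardless of which message revealed the seed.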
In addition, we have the problem (related to https://github.com/transmission/transmission/issues/6910 ) where a seed is never removed from the candidate atom list. Instead, we just keep reconnecting to it every 2 seconds...which I guess is better than every 1 second.
AFAICT, the only way to be skipped in the candidate atom list is via mark_peer_as_seed, through the swarm_is_all_seeds logic that applies to private trackers.
The following is a hack, but it gives an idea of how to resolve the propagation of seed status to connectable_pool. It solves the problem for peers that have sent us "have_all" BT messages, though it doesn't handle the case of "upload_only" messages.
In peer-mgr:
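A sketch of what such a hack could look like, using a simplified stand-in for the atom list (the real peer-mgr types and function names differ; Atom, Swarm, on_peer_sent_have_all, and get_peer_candidates here are hypothetical):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Simplified model of an atom in the connectable_pool. The real
// Transmission struct is richer; only the seed flag matters here.
struct Atom
{
    uint32_t addr = 0;   // stand-in for the peer's socket address
    bool seed = false;   // explicit seed flag, settable after discovery
};

struct Swarm
{
    std::map<uint32_t, Atom> connectable_pool;

    // The hack: when a connected peer sends "have_all", look up its
    // atom and mark it, so candidate selection can skip it later.
    void on_peer_sent_have_all(uint32_t addr)
    {
        if (auto it = connectable_pool.find(addr); it != connectable_pool.end())
        {
            it->second.seed = true;
        }
    }

    // Candidate selection: when we are a seed ourselves, skip atoms
    // known to be seeds instead of reconnecting to them every 2 seconds.
    std::vector<uint32_t> get_peer_candidates(bool we_are_seed) const
    {
        std::vector<uint32_t> out;
        for (auto const& [addr, atom] : connectable_pool)
        {
            if (!(we_are_seed && atom.seed))
            {
                out.push_back(addr);
            }
        }
        return out;
    }
};
```

The key point is the write-back in on_peer_sent_have_all: seed discovery made on a live connection survives into the pool, which is exactly what the current code fails to do except in the full-LTEP-handshake case described above.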
Which application of Transmission?
transmission-daemon
Which version of Transmission?
Tr4 mainline, possibly 4.0.x