Closed: jhuggart closed this pull request 3 years ago.
Hi @jhuggart, thank you very much for your time ;) I roughly tested this PR and noticed some things. I will write them all in this post so we can better keep track of the issues:
When we send an announce request to the tracker, we get the interval (in seconds) for the next announces in the response, so with the current logic every announce can theoretically change the interval rate. But with a timer that is declared once, this interval rate will never change:
func (R *RatioSpoof) updateInterval(resp tracker.TrackerResponse) {
	if resp.Interval > 0 {
		R.AnnounceInterval = resp.Interval
	} else {
		R.AnnounceInterval = 1800
	}
}
We currently start to print when the length of the deque (double-ended queue) is >= 1, because it means we fired our first announce successfully and the second entry is on the way; otherwise we get the message `Trying to connect to the tracker...`. So from my tests, we get something like this:
generate first announce > fire first announce > wait interval (the printer only displays "Trying to connect to the tracker..." at this moment, so we lose track of what is happening) > fire first announce again > generate second announce > wait interval > fire second announce ...
And the expected behavior would be something like this:
generate first announce > fire first announce > generate second announce > wait interval > fire second announce...
I didn't test this part yet, but I think if the tracker takes some time to respond, we might get a negative time duration here?
nextAnnounceSeconds := time.Until(state.NextAnnounce) * time.Second
We have retry attempts from the second announce onwards, which means that if we lose the connection we still eventually fire the announce. But with the timer being fire-and-forget, we might end up with 2 concurrent announces waiting to be fired, I think. (We only generate and fire the next announce if the previous announce was successful.)
I have never used `logrus`, but it seems like a really nice lib. However, we clear stdout on every loop iteration in the printer, so we might want to display the logs only when the --debug flag is used and somehow store the logs in memory. What do you think?
1. When we send an announce request to the tracker, we get the interval (in seconds) for the next announces in the response, so with the current logic every announce can theoretically change the interval rate. But with a timer that is declared once, this interval rate will never change.
I see. Does the interval rate need to change? Maybe it needs to be semi-random to help avoid detection. If so, we could use `ticker.Reset(duration)` to add some flutter.
2. We currently start to print when the length of the deque (double-ended queue) is >= 1, because it means we fired our first announce successfully and the second entry is on the way; otherwise we get the message `Trying to connect to the tracker...`. So from my tests, we get something like this: `generate first announce > fire first announce > wait interval (the printer only displays "Trying to connect to the tracker..." at this moment, so we lose track of what is happening) > fire first announce again > generate second announce > wait interval > fire second announce ...` And the expected behavior would be something like this: `generate first announce > fire first announce > generate second announce > wait interval > fire second announce...`
Sounds like the bug here is that the printer is not printing the correct state after the first announcement. Is that correct? If so, I bet we can find a way to get the correct state to the printer.
3. I didn't test this part yet, but I think if the tracker takes some time to respond, we might get a negative time duration here?
nextAnnounceSeconds := time.Until(state.NextAnnounce) * time.Second
That is a great callout. We probably need a guard clause around this. Eventually, I'd like to get to the point where we've reduced shared state and can find a safer way to share this info.
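For illustration, the guard could look something like this (just a sketch; `untilNextAnnounce` is a made-up helper, and note that `time.Until` already returns a `time.Duration`, so the extra `* time.Second` in the quoted line is probably unintended):
func untilNextAnnounce(next time.Time) time.Duration {
	remaining := time.Until(next) // already a time.Duration, so no extra * time.Second is needed
	if remaining < 0 {
		return 0 // the tracker took longer than the interval; avoid a negative countdown
	}
	return remaining
}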
4. We have retry attempts from the second announce onwards, which means that if we lose the connection we still eventually fire the announce. But with the timer being fire-and-forget, we might end up with 2 concurrent announces waiting to be fired, I think. (We only generate and fire the next announce if the previous announce was successful.)
Another great callout. We'll need to guard against concurrent announcements. A simple mutex should do the trick here. If we can't acquire a lock, then there's an announcement in progress and we can skip the next attempt(s).
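Something along these lines could do it (a sketch only; `tryAnnounce` and the `fire` callback are illustrative names, and a buffered channel of size 1 is used as a try-lock since `sync.Mutex.TryLock` only exists from Go 1.18 on):
var announcing = make(chan struct{}, 1) // at most one announce in flight at a time

func tryAnnounce(fire func() error) {
	select {
	case announcing <- struct{}{}: // slot acquired: no announce currently running
		defer func() { <-announcing }()
		if err := fire(); err != nil {
			log.Warn("failed to fire announce: ", err)
		}
	default:
		// a previous announce (or its retries) is still in progress; skip this attempt
	}
}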
5. I have never used `logrus`, but it seems like a really nice lib. However, we clear stdout on every loop iteration in the printer, so we might want to display the logs only when the --debug flag is used and somehow store the logs in memory. What do you think?
I generally like to control log levels with env vars. So we could set logrus to only log `error` or `warn` level by default but allow an override from a `LOG_LEVEL` variable. Rather than storing in memory, I would suggest writing to a file so that there is a persistent record of those logs.
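For example, assuming logrus is imported as `log` as in the Run() snippet below (a sketch; the `setupLogging` helper and the log file name are placeholders):
func setupLogging() {
	log.SetLevel(log.WarnLevel) // default: warn and above only
	if lvl, err := log.ParseLevel(os.Getenv("LOG_LEVEL")); err == nil {
		log.SetLevel(lvl) // e.g. LOG_LEVEL=debug overrides the default
	}
	// Write to a file instead of the stdout that the printer clears on every iteration.
	if f, err := os.OpenFile("ratiospoof.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644); err == nil {
		log.SetOutput(f)
	}
}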
I see. Does the interval rate need to change? Maybe it needs to be semi-random to help avoid detection. If so, we could use ticker.Reset(duration) to add some flutter.
This interval exists to avoid flooding the tracker: in order to control traffic, the tracker can increase the interval at any point, so all BitTorrent clients need to follow this rule strictly, and if we want to stay invisible we need to follow it too.
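So if we keep a single ticker, it would need to be reset after every announce with whatever interval the tracker just returned. A minimal sketch of that idea (the `announce` callback returning the new interval is a stand-in, not the PR's actual API):
func runAnnounceLoop(announce func() (int, error), initialInterval int, stop <-chan struct{}) {
	ticker := time.NewTicker(time.Duration(initialInterval) * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			interval, err := announce() // fires the announce and returns the tracker's interval
			if err != nil {
				log.Warn("failed to fire announce: ", err)
				continue
			}
			if interval <= 0 {
				interval = 1800 // same fallback as updateInterval
			}
			ticker.Reset(time.Duration(interval) * time.Second) // follow the tracker's latest value
		case <-stop:
			return
		}
	}
}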
Sounds like the bug here is that the printer is not printing the correct state after the first announcement. Is that correct? If so, I bet we can find a way to get the correct state to the printer.
Hm.. I don't think so, because of the nature of the fireAnnounce func: when we call fireAnnounce it fires the right-most element of the deque, and if we inspect the order of the main loop it does not seem compatible with the expected logic. I think I should have documented the reason for this History (deque) a bit more. The reason I use it is that it's a nice way to keep a maximum number of past announces to show on the screen, and the right-most entry means it has not been fired yet (for this entry we display the minutes/seconds cooldown until the fire moment):
func (R *RatioSpoof) Run() {
	rand.Seed(time.Now().UnixNano())
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, os.Interrupt, syscall.SIGINT, syscall.SIGTERM)
	R.firstAnnounce() // Generate and fire the first announce (DEQUE COUNT = 1)
	duration := time.Duration(R.AnnounceInterval) * time.Second
	ticker := time.NewTicker(duration)
	defer ticker.Stop()
	runLoop := true
	for runLoop {
		select {
		case <-ticker.C: // (DEQUE COUNT = 1) since the deque hasn't got the second item yet, the printer is still waiting to start
			R.NextAnnounce = time.Now().Add(duration)
			if err := R.fireAnnounce(true); err != nil { // (DEQUE COUNT = 1) so it fires the first entry twice at this point
				log.Warn("failed to fire announce", err)
			}
			R.generateNextAnnounce() // (DEQUE COUNT = 2) now we get the second announce and things go back to normal
		case <-sigCh:
			fmt.Println("done")
			runLoop = false
		}
	}
	R.StopPrintCH <- "exit print"
	R.gracefullyExit()
}
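For reference, one possible reshuffle of the loop above that would match the expected sequence (just a sketch of the ordering, not the fix that was eventually merged; cleanup omitted):
R.firstAnnounce()        // generate and fire announce #1 (DEQUE COUNT = 1)
R.generateNextAnnounce() // queue announce #2 right away (DEQUE COUNT = 2), so the printer has a pending entry to count down
for {
	select {
	case <-ticker.C:
		R.NextAnnounce = time.Now().Add(duration)
		if err := R.fireAnnounce(true); err != nil { // fires the right-most (pending) entry
			log.Warn("failed to fire announce", err)
		}
		R.generateNextAnnounce() // queue the next entry before the wait starts
	case <-sigCh:
		return
	}
}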
I generally like to control log levels with env vars. So we could set logrus to only log error or warn level by default but allow an override from a LOG_LEVEL variable. Rather than storing in memory, I would suggest writing to a file so that there is a persistent record of those logs.
This is a super cool idea, I hadn't thought about doing it until this point ;) I even created the debug flag in an attempt to debug the requests being generated, but logging to a file based on an env variable seems much nicer.
Also, here's a command we can use to test it:
go run cmd/main.go -t internal/bencode/torrent_files_test/Fedora-Workstation-Live-x86_64-33.torrent -d 0% -ds 100kbps -u 0% -us 100kbps
@ap-pauloafonso found a cleaner, simpler path to removing the manually decremented timers. His solution will be merged instead.
Hey @ap-pauloafonso,
This is what I had in mind for using a ticker instead of manually incremented ints and channels. I think this will work, but I have not run it locally yet. Take a look and let me know what you think.
I added `logrus` for some prettier logging. Some of the log levels may be off, I just took my best guess. Also, I always work with `golangci-lint` enabled in my IDE, so I patched up linter failures as I came across them. These changes are mostly to make sure errors are not ignored.