Thank you for the heads-up. I've implemented a fix for this in the coming release: https://github.com/torrust/torrust-tracker/commit/aac9ac72bdd300ffb2959ba6f2a1ee04ecac5895
Thank you @WarmBeer. I have 2 questions:
1# Will this implementation also work for UDP scrape?
2# Any chance of an explanation of the peer_timeout config variable? Does it delete peers after 900 seconds if the peer returns event=stopped?
1# No, this was a fix for HTTP only; I'm looking into UDP atm.
2# The peer_timeout variable is the maximum number of seconds a peer may be inactive (after its last announce) before it is removed by the tracker. This is for peers that stopped announcing but never sent a stopped event; peers that send a stopped event are removed instantly.
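To illustrate, the rule is roughly the following (a minimal sketch, not the tracker's actual implementation):

```rust
// Sketch only: drop every peer whose last announce is older than
// `peer_timeout` (e.g. 900 seconds), regardless of whether a `stopped`
// event was ever received.
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct Peer {
    last_announce: Instant,
}

fn remove_inactive_peers(peers: &mut HashMap<String, Peer>, peer_timeout: Duration) {
    // Keep only peers that announced within the timeout window.
    peers.retain(|_peer_id, peer| peer.last_announce.elapsed() < peer_timeout);
}
```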
1# Great, being able to scrape over UDP is also needed, as most public trackers have to support it.
2# I understand. What is the best way to use Torrust as a private tracker to track user ratios? Is there some way to insert peers into the database via the API or something (that would also need to import event=stopped peers as well)?
I'll let @WarmBeer look into this, but as far as I'm aware UDP works fine on the development branch. Right now I'm doing a total rewrite of the core, switching from the WARP framework to the faster and better-performing ACTIX framework. What exactly is the problem with the UDP scraping, if I may ask? Do you have some Wireshark or debugging information we can use to debug it?
Seems like I had some issues when testing on Windows; it works fine when scraping over UDP on Ubuntu.
Not an issue with the tracker.
@WarmBeer Ok, this definitely is not fixed. I just found out UDP scraping works on versions < 2.2.0; everything >= 2.2.0 returns an invalid scrape response over UDP.
Tested locally and scraped with https://github.com/medariox/scrapeer and https://github.com/Novik/ruTorrent.
I've tested the UDP scraping with https://pypi.org/project/tracker-scraper/ and the result was valid and as expected, so I'm not sure why it doesn't work in your case. Are you sure UDP is enabled on the tracker and does it run in Public mode?
Hi @dev1z, instead of using an external scraper, I'm trying to reproduce the error with an integration test in this PR.
It would be nice if you could tell me the UDP request and response, and why the response is invalid.
We could also change the workflow to run tests on different OSs in case your error only happens on Windows.
Could you tell me if Gbitt still shows UDP scrape errors? It's built from scratch based on Torrust-Tracker, but if that works fine, I could look into it, find the possible issue, and perhaps fix it as well.
Hi @Power2All, I'm working on a PR in which I want to add some integration tests.
I have not yet added a test for the "scrape" request because I had some concurrency problems; @WarmBeer helped me to fix them.
I run the test with a local UDP server, but you can easily change the code to use an external public UDP server. I've done a test with https://www.gbitt.info/ on this branch:
https://github.com/torrust/torrust-tracker/tree/gbitt-udp-e2e-tests
The test is currently failing on GitHub Actions because you need to change the IP of the machine where you are executing the tests (here).
If I run it locally with cargo test udp_tracker_server -- --nocapture, it works:
Running tests/udp.rs (target/debug/deps/udp-4f28f2aeaf3e9bac)
running 3 tests
test udp_tracker_server::should_return_a_connect_response_when_the_client_sends_a_connection_request ... ok
test udp_tracker_server::should_return_a_bad_request_response_when_the_client_sends_an_empty_request ... ok
test udp_tracker_server::should_return_an_announce_response_when_the_client_sends_an_announce_request ... ok
test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.13s
I will update the branch as soon as I finish the "scrape" test.
In the long term, creating a new binary to run smoke tests against publicly running UDP trackers could be useful. You could run those smoke tests in your CI/CD pipeline to test the UDP tracker after a deployment.
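As a rough idea of what such a smoke test could look like, a BEP 15 connect check only needs a UDP socket (a sketch under the assumption that a tracker listens on 127.0.0.1:6969; adjust the address to your deployment):

```rust
// Minimal BEP 15 "connect" smoke test sketch (not part of the tracker's test suite).
use std::net::UdpSocket;
use std::time::Duration;

fn main() -> std::io::Result<()> {
    let tracker_addr = "127.0.0.1:6969"; // assumed tracker address
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.set_read_timeout(Some(Duration::from_secs(5)))?;

    // Connect request: protocol_id (0x41727101980), action = 0 (connect), transaction_id.
    let transaction_id: u32 = 0x1234_5678;
    let mut request = Vec::with_capacity(16);
    request.extend_from_slice(&0x0000_0417_2710_1980_u64.to_be_bytes());
    request.extend_from_slice(&0_u32.to_be_bytes());
    request.extend_from_slice(&transaction_id.to_be_bytes());
    socket.send_to(&request, tracker_addr)?;

    // Connect response: action, transaction_id, connection_id = 16 bytes.
    let mut response = [0_u8; 16];
    let (len, _) = socket.recv_from(&mut response)?;
    assert!(len >= 16, "connect response too short: {len} bytes");

    let action = u32::from_be_bytes(response[0..4].try_into().unwrap());
    let returned_tid = u32::from_be_bytes(response[4..8].try_into().unwrap());
    let connection_id = u64::from_be_bytes(response[8..16].try_into().unwrap());

    assert_eq!(action, 0, "expected a connect action in the response");
    assert_eq!(returned_tid, transaction_id, "transaction id mismatch");
    println!("connect ok, connection_id = {connection_id:#x}");
    Ok(())
}
```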
Hi @Power2All, I've updated the gbitt-udp-e2e-tests branch with the "scrape" test. The basic behaviour for scraping seems to work.
running 4 tests
test udp_tracker_server::should_return_a_connect_response_when_the_client_sends_a_connection_request ... ok
test udp_tracker_server::should_return_a_bad_request_response_when_the_client_sends_an_empty_request ... ok
test udp_tracker_server::should_return_an_announce_response_when_the_client_sends_an_announce_request ... ok
Response: Scrape(ScrapeResponse { transaction_id: TransactionId(123), torrent_stats: [TorrentScrapeStatistics { seeders: NumberOfPeers(1), completed: NumberOfDownloads(0), leechers: NumberOfPeers(0) }] })
test udp_tracker_server::should_return_a_scrape_response_when_the_client_sends_a_scrape_request ... ok
test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 7.28s
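For reference, the request the scrape test sends is laid out roughly like this (a sketch of the BEP 15 scrape packet; build_scrape_request is a hypothetical helper, not the actual test code):

```rust
// Sketch of a BEP 15 scrape request. `connection_id` comes from a previous
// connect exchange.
fn build_scrape_request(connection_id: u64, transaction_id: u32, info_hash: &[u8; 20]) -> Vec<u8> {
    let mut packet = Vec::with_capacity(36);
    packet.extend_from_slice(&connection_id.to_be_bytes());  // 8 bytes, from the connect response
    packet.extend_from_slice(&2_u32.to_be_bytes());          // 4 bytes, action 2 = scrape
    packet.extend_from_slice(&transaction_id.to_be_bytes()); // 4 bytes, echoed back in the response
    packet.extend_from_slice(info_hash);                     // one or more 20-byte info-hashes
    packet
}
```

The response then echoes the transaction id followed by (seeders, completed, leechers) as three 32-bit big-endian integers per torrent, which is what the ScrapeResponse shown above deserializes into.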
I'm also using qBittorrent for tests. I've added the tracker with the IP (like in the tests) and the domain. I get this error when using the domain:
@josecelano That's really weird, unless the domain is not resolving somehow. Could you try in a command prompt:
nslookup tracker-udp.gbitt.info
and send me the response? Thanks!
$ nslookup tracker-udp.gbitt.info
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: tracker-udp.gbitt.info
Address: 109.72.83.209
Name: tracker-udp.gbitt.info
Address: 2a00:f10:10b::1209
But it's working again, although now it seems it does not work with the IP:
@josecelano I found the problem. I'm fixing the UDP handler to spawn a thread for each request; it's not doing that right now, and handling sometimes takes too long.
@Power2All please let us know when you fix it, so I can also fix it in this repo.
On the other hand, I suppose that error is not related to this issue, is it?
@josecelano Correct, this is for my own project. Torrust-Tracker is getting a rewrite of the UDP stack, from what I heard from @WarmBeer. The fix I applied is for my Axum variant of the Torrust Tracker.
[Edit] Yep, it seems to be fully fixed. Giving the handler its own thread speeds things up a ton, and there's no more lagging UDP connection.
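For anyone following along, the approach sounds roughly like the following (a sketch assuming a tokio-based UDP server; handle_packet is a hypothetical placeholder, not the actual Torrust or Axum-variant code):

```rust
// Sketch: read datagrams in a loop and hand each one off to its own task,
// so a slow handler cannot block the receive loop.
use std::sync::Arc;
use tokio::net::UdpSocket;

async fn run_udp_server(socket: UdpSocket) -> std::io::Result<()> {
    let socket = Arc::new(socket);
    let mut buf = [0_u8; 1024];

    loop {
        let (len, remote_addr) = socket.recv_from(&mut buf).await?;
        let payload = buf[..len].to_vec();
        let socket = Arc::clone(&socket);

        // Each request is handled concurrently instead of serially.
        tokio::spawn(async move {
            if let Some(response) = handle_packet(&payload).await {
                let _ = socket.send_to(&response, remote_addr).await;
            }
        });
    }
}

// Hypothetical placeholder so the sketch is self-contained.
async fn handle_packet(_payload: &[u8]) -> Option<Vec<u8>> {
    None
}
```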
@josecelano Alright, after investigating with Wireshark, it seems my tracker works flawlessly, both with and without the fix I applied. The problem is that qBittorrent sometimes has issues and is very slow, sometimes too slow, handling the returned UDP data: it says "updating" and sometimes times out for no reason, even though it received the return packet properly. After I disabled IPv6 in qBittorrent it was fast, so I'm sure it tries to do both IPv4 and IPv6 for no good reason. I'm going to test it in PicoTorrent and see if it has issues there too.
[Edit] I think I was right: PicoTorrent has no issues whatsoever with the UDP tracker. This is entirely a libtorrent/qBittorrent issue. I also determined it has something to do with IPv6.
@Power2All If reasonably possible, can you please create the appropriate issue for qBittorrent or libtorrent and link it here for tracking purposes?
Hello, there seems to be an error with the packed data when scraping, over both UDP and HTTP/S.
The format should be d5:filesd20, but on Gbitt and Torrust it is random; sometimes it is d5:filesd50, d5:filesd54, etc. That's why almost all scrapers return invalid data for the info hash. It seems to be using the aquatic source for scraping, but aquatic returns the correct 20 bytes where Torrust doesn't.
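To make the expected format concrete, a correct single-torrent scrape response would be serialized roughly like this (a sketch, not the tracker's actual code): the key inside the files dictionary must be the raw 20-byte info-hash, so the prefix reads d5:filesd20, whereas a hex- or percent-encoded hash produces a longer key and breaks strict scrapers:

```rust
// Sketch of a bencoded scrape response for a single torrent.
fn scrape_response(info_hash: &[u8; 20], complete: u64, downloaded: u64, incomplete: u64) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(b"d5:filesd"); // outer dict, "files" key, inner dict
    out.extend_from_slice(b"20:");       // length prefix of the raw 20-byte key
    out.extend_from_slice(info_hash);    // raw bytes, NOT hex or percent encoded
    out.extend_from_slice(
        format!(
            "d8:completei{}e10:downloadedi{}e10:incompletei{}ee",
            complete, downloaded, incomplete
        )
        .as_bytes(),
    );
    out.extend_from_slice(b"ee"); // close the "files" dict and the outer dict
    out
}
```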
@Power2All @WarmBeer