greatest-ape / aquatic

High-performance open BitTorrent tracker (UDP, HTTP, WebTorrent)
Apache License 2.0

bencher: add new tracker, torrust-tracker #191

Closed josecelano closed 5 months ago

josecelano commented 6 months ago

Relates to: https://github.com/greatest-ape/aquatic/issues/190#issuecomment-1999594516

Adds a new tracker to the bencher: https://github.com/torrust/torrust-tracker

The bencher runs load tests with three BitTorrent UDP trackers: opentracker, chihaya, and aquatic_udp.

This PR adds the Torrust Tracker.

To run the bencher you need to set up each tracker first:

How to Setup opentracker

You can build it following the official documentation, or install it with sudo apt install opentracker on Ubuntu.

How to Setup chihaya

Follow the documentation. You need to install Go (sudo apt install golang-go).

How to Setup torrust-tracker

Build:

git clone git@github.com:torrust/torrust-tracker.git
cd torrust-tracker
cargo build --release 
cp ./target/release/torrust-tracker ~/bin

Run:

TORRUST_TRACKER_PATH_CONFIG="./config.toml" torrust-tracker
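The tracker picks up the config file path from this environment variable. A minimal sketch of how such resolution might look (the function name and the fallback path are my own illustration, not torrust-tracker's actual API):

```rust
use std::env;
use std::path::PathBuf;

// Hypothetical sketch: resolve the config file path from the
// TORRUST_TRACKER_PATH_CONFIG environment variable, falling back to a
// default location. The fallback path is illustrative only.
fn resolve_config_path() -> PathBuf {
    env::var("TORRUST_TRACKER_PATH_CONFIG")
        .map(PathBuf::from)
        .unwrap_or_else(|_| PathBuf::from("./config.toml"))
}
```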

Config file:

announce_interval = 120
db_driver = "Sqlite3"
db_path = "./sqlite3.db"
external_ip = "0.0.0.0"
inactive_peer_cleanup_interval = 600
log_level = "error"
max_peer_timeout = 900
min_announce_interval = 120
mode = "public"
on_reverse_proxy = false
persistent_torrent_completed_stat = false
remove_peerless_torrents = false
tracker_usage_statistics = false

[[udp_trackers]]
bind_address = "0.0.0.0:3000"
enabled = true

[[http_trackers]]
bind_address = "0.0.0.0:7070"
enabled = false
ssl_cert_path = ""
ssl_enabled = false
ssl_key_path = ""

[http_api]
bind_address = "127.0.0.1:1212"
enabled = false
ssl_cert_path = ""
ssl_enabled = false
ssl_key_path = ""

[http_api.access_tokens]
admin = "MyAccessToken"

[health_check_api]
bind_address = "127.0.0.1:1313"

How to Setup aquatic

sudo apt update && sudo apt upgrade -y
sudo apt-get install libhwloc-dev
cargo build --profile=release-debug --all-features -p aquatic_udp

NOTICE: libhwloc-dev is needed for the io_uring UDP tracker feature.

How to Setup and Run the Aquatic UDP load test

This load test tool is used by the Bencher.

Build:

cargo build --profile=release-debug -p aquatic_udp_load_test

Run:

./target/release-debug/aquatic_udp_load_test

How to Setup and Run the Bencher

Build:

cargo build --profile=release-debug -p aquatic_bencher

Run:

./target/release-debug/aquatic_bencher udp

josecelano commented 6 months ago

Hi @greatest-ape I can run the aquatic UDP load test with the torrust-tracker:

TORRUST_TRACKER_PATH_CONFIG="./config.toml" torrust-tracker
./target/release-debug/aquatic_udp_load_test

but for some reason it is not working with the bencher, and I have the same problem with some other trackers. I get no results for some test cases:

- Average responses per second: 0
- Average tracker CPU utilization: 0%

For example:

$ ./target/release-debug/aquatic_bencher udp
# Benchmark report

Total number of load test runs: 72
Estimated duration: 0 hours, 44 minutes

## Tracker cores: 1 (cpus: 0,16)
### aquatic_udp run (socket workers: 1) (load test workers: 8, cpus: 8-15,24-31)
- Average responses per second: 363,320
- Average tracker CPU utilization: 95.6%
- Peak tracker RSS: 192.2 MiB
### aquatic_udp run (socket workers: 1) (load test workers: 12, cpus: 4-15,20-31)
- Average responses per second: 388,086
- Average tracker CPU utilization: 95.5%
- Peak tracker RSS: 192.2 MiB
### aquatic_udp (io_uring) run (socket workers: 1) (load test workers: 8, cpus: 8-15,24-31)
- Average responses per second: 392,732
- Average tracker CPU utilization: 95.5%
- Peak tracker RSS: 212.5 MiB
### aquatic_udp (io_uring) run (socket workers: 1) (load test workers: 12, cpus: 4-15,20-31)
- Average responses per second: 431,356
- Average tracker CPU utilization: 95.5%
- Peak tracker RSS: 215.6 MiB
### opentracker run (workers: 0) (load test workers: 8, cpus: 8-15,24-31)
- Average responses per second: 0
- Average tracker CPU utilization: 0%
- Peak tracker RSS: 1.9 MiB
### opentracker run (workers: 0) (load test workers: 12, cpus: 4-15,20-31)
- Average responses per second: 0
- Average tracker CPU utilization: 0%
- Peak tracker RSS: 1.9 MiB
### opentracker run (workers: 1) (load test workers: 8, cpus: 8-15,24-31)
- Average responses per second: 0
- Average tracker CPU utilization: 0%
- Peak tracker RSS: 1.9 MiB
### opentracker run (workers: 1) (load test workers: 12, cpus: 4-15,20-31)
- Average responses per second: 0
- Average tracker CPU utilization: 0%
- Peak tracker RSS: 1.9 MiB
### chihaya run () (load test workers: 8, cpus: 8-15,24-31)
- Average responses per second: 120,951
- Average tracker CPU utilization: 190%
- Peak tracker RSS: 898.7 MiB
### chihaya run () (load test workers: 12, cpus: 4-15,20-31)
- Average responses per second: 117,691
- Average tracker CPU utilization: 191%
- Peak tracker RSS: 878.1 MiB
### torrust-tracker run () (load test workers: 8, cpus: 8-15,24-31)
- Average responses per second: 0
- Average tracker CPU utilization: 0%
- Peak tracker RSS: 0 B
### torrust-tracker run () (load test workers: 12, cpus: 4-15,20-31)
- Average responses per second: 0
- Average tracker CPU utilization: 0%
- Peak tracker RSS: 0 B
## Tracker cores: 2 (cpus: 0-1,16-17)
### aquatic_udp run (socket workers: 2) (load test workers: 8, cpus: 8-15,24-31)

...

Maybe the torrust-tracker is not started correctly from the bencher. I'm trying to find out the problem.

josecelano commented 6 months ago

Hey @greatest-ape I found the problem. The order of the command arguments was wrong:

I changed this:

Ok(Command::new("taskset")
    .env("TORRUST_TRACKER_PATH_CONFIG", tmp_file.path())
    .arg("--cpu-list")
    .arg(vcpus.as_cpu_list())
    .arg(&command.torrust_tracker)
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .spawn()?)

to this:

Ok(Command::new("taskset")
    .arg("--cpu-list")
    .arg(vcpus.as_cpu_list())
    .env("TORRUST_TRACKER_PATH_CONFIG", tmp_file.path())
    .arg(&command.torrust_tracker)
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .spawn()?)
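For reference, here is a minimal sketch of the working invocation (the CPU list and paths in the test are illustrative). taskset treats the first non-option argument as the program to run, so the tracker binary has to come after `--cpu-list` and its value; per the std::process::Command docs, the position of `.env(...)` in the builder chain does not affect the spawned process's environment:

```rust
use std::process::{Command, Stdio};

// Minimal sketch of the working invocation: `taskset --cpu-list <list>
// <program>` pins the spawned tracker to the given CPUs. The tracker
// binary must come after taskset's own options. The environment
// variable is applied to the spawned process regardless of where
// `.env(...)` appears in the builder chain.
fn build_tracker_command(cpu_list: &str, tracker_bin: &str, config_path: &str) -> Command {
    let mut cmd = Command::new("taskset");
    cmd.arg("--cpu-list")
        .arg(cpu_list)
        .env("TORRUST_TRACKER_PATH_CONFIG", config_path)
        .arg(tracker_bin)
        .stdout(Stdio::piped())
        .stderr(Stdio::piped());
    cmd
}
```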

To be honest, I do not know why. But now I'm getting results for the torrust-tracker:

### torrust-tracker run () (load test workers: 8, cpus: 8-15,24-31)
- Average responses per second: 228,345
- Average tracker CPU utilization: 189%
- Peak tracker RSS: 193.3 MiB
### torrust-tracker run () (load test workers: 12, cpus: 4-15,20-31)
- Average responses per second: 223,238
- Average tracker CPU utilization: 189%
- Peak tracker RSS: 191.6 MiB

Regarding opentracker, maybe I'm also using an old version that is not compatible with the options you are using.

josecelano commented 6 months ago

Hi @greatest-ape I've written a new blog post on the Torrust site (https://torrust.com/) explaining how we use the Aquatic benchmarking tools with the Torrust Tracker (not published yet).

I have a question regarding the Bencher. I see it uses the UDP load test command, so I'm assuming that every announce request uses a different peer and that a peer always uses the same infohash. Is that right? If so, I would be missing a test where many peers try to announce the same torrent.

josecelano commented 6 months ago

I've tried to run a complete test but I'm getting some errors like:

panic: too many concurrent operations on a single file or socket (max 1048575)

I guess I have to increase some limits. The full output:

2024-03-19-bencher.txt

greatest-ape commented 6 months ago

Nice idea with a blog post!

No, it actually does test with more requests for some torrents.
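For anyone curious how a load tester can weight requests toward a few popular torrents, here is a generic sketch (my own illustration, not aquatic's actual implementation): mapping a uniform sample through a power function concentrates picks on low torrent indices.

```rust
// Generic illustration (not aquatic's actual code): skew torrent
// selection so that low indices ("popular" torrents) are picked far
// more often than high ones. `uniform` is a sample in [0, 1);
// `skew` > 1.0 increases the concentration on low indices.
fn skewed_torrent_index(uniform: f64, num_torrents: usize, skew: f64) -> usize {
    let idx = (uniform.powf(skew) * num_torrents as f64) as usize;
    idx.min(num_torrents - 1)
}
```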

Also, I added a review comment to your code.

Yeah, chihaya tends to crash under heavy load. There is an issue opened about it somewhere, probably in the chihaya or Go repos. In my recollection it is amazingly an actual Go runtime limitation that they didn’t want to fix.

Another thing of interest here is that the default CpuMode gives an unfair advantage to chihaya and torrust since they open one worker per thread, while the benchmark config for aquatic and opentracker opens one per core. You can see this by looking at the average CPU utilization stats for the lower core counts. This could be solved by adding entries to test with double the current worker count too for aquatic/opentracker (with medium priority so that they can be skipped when not needed to save time).

The reason why I haven’t yet is that the current setup is meant to enable somewhat fair testing on virtual machines where hyperthreads don’t really correspond to real hyperthreads, but for that to work, SubsequentOnePerPair mode must be used.

josecelano commented 6 months ago

Hi!

> Nice idea with a blog post!

It's published now: https://torrust.com/benchmarking-the-torrust-bittorrent-tracker

> No, it actually does test with more requests for some torrents.

OK.

> Also, I added a review comment to your code.

Do you mean the comment on the issue? I don't see any comment in this PR.

> Yeah, chihaya tends to crash under heavy load. There is an issue opened about it somewhere, probably in the chihaya or Go repos. In my recollection it is amazingly an actual Go runtime limitation that they didn’t want to fix.
>
> Another thing of interest here is that the default CpuMode gives an unfair advantage to chihaya and torrust since they open one worker per thread, while the benchmark config for aquatic and opentracker opens one per core. You can see this by looking at the average CPU utilization stats for the lower core counts. This could be solved by adding entries to test with double the current worker count too for aquatic/opentracker (with medium priority so that they can be skipped when not needed to save time).
>
> The reason why I haven’t yet is that the current setup is meant to enable somewhat fair testing on virtual machines where hyperthreads don’t really correspond to real hyperthreads, but for that to work, SubsequentOnePerPair mode must be used.

greatest-ape commented 6 months ago

OK, do you see them now? Otherwise, I'm just wondering, does torrust-tracker set defaults for missing config keys? And in that case, can any be excluded here so that the bencher doesn't need to be updated if they are changed?

Also, could you please state that you provide the code that you're adding under the Apache 2.0 License? :-)

josecelano commented 5 months ago

> OK, do you see them now? Otherwise, I'm just wondering, does torrust-tracker set defaults for missing config keys? And in that case, can any be excluded here so that the bencher doesn't need to be updated if they are changed?

I see them now. I've just replied.

> Also, could you please state that you provide the code that you're adding under the Apache 2.0 License? :-)

Yes, all the code in this PR is provided under the Apache 2.0 License.

greatest-ape commented 5 months ago

I don’t see your reply 😀

greatest-ape commented 5 months ago

OK, I checked, it seems like torrust-tracker does not have defaults for individual fields. I'm merging this.

greatest-ape commented 5 months ago

> Also, could you please state that you provide the code that you're adding under the Apache 2.0 License? :-)
>
> Yes, all the code in this PR is provided under the Apache 2.0 License.

Excellent.

josecelano commented 5 months ago

> I don’t see your reply 😀

https://github.com/greatest-ape/aquatic/pull/191/files#r1527138893


greatest-ape commented 5 months ago

Alright :-)

Yeah, I want to add the ability to just run certain trackers at some point.

When you've refactored torrust-tracker configuration, could you please open a new PR for the bencher then? :-)

greatest-ape commented 5 months ago

I've merged some adjustments to make results more fair when running non-virtualized, so you might want to run your benchmarks again.

josecelano commented 5 months ago

> Alright :-)
>
> Yeah, I want to add the ability to just run certain trackers at some point.
>
> When you've refactored torrust-tracker configuration, could you please open a new PR for the bencher then? :-)

Sure.

josecelano commented 5 months ago

> I've merged some adjustments to make results more fair when running non-virtualized, so you might want to run your benchmarks again.

I will do it, and also again after finishing some improvements we are working on: