Closed: jrfondren closed this issue 1 year ago
@jrfondren Thanks! I haven't looked into performance, so there is probably low-hanging fruit here.

From what I see, `cargo run`, or even `cargo check`, takes about the same time as `rust-script`. So to speed `rust-script` up significantly, it'd be necessary to avoid using `cargo` in the happy path and invoke the cached binary directly.
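A minimal sketch of what such a happy path could look like. The cache layout and the `cached_binary`/`run_script` names are assumptions for illustration, not rust-script's actual internals: hash the script, look for a previously built binary, and run it directly without touching cargo.

```rust
use std::path::{Path, PathBuf};
use std::process::Command;

// Hypothetical cache lookup: map a script's content hash to a binary
// built on an earlier run. Layout mirrors the paths seen in this issue.
fn cached_binary(cache_dir: &Path, hash: &str) -> Option<PathBuf> {
    let bin = cache_dir
        .join("binaries/release")
        .join(format!("hello_{hash}"));
    bin.is_file().then_some(bin)
}

fn run_script(cache_dir: &Path, hash: &str) -> std::io::Result<()> {
    match cached_binary(cache_dir, hash) {
        // Fast path: no cargo, no rustup proxy, just run the cached binary.
        Some(bin) => Command::new(bin).status().map(|_| ()),
        // Cold path: would fall back to a full `cargo build` (omitted here).
        None => Err(std::io::Error::new(
            std::io::ErrorKind::NotFound,
            "no cached binary; build with cargo first",
        )),
    }
}
```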
Oops, sorry: it was actually `rustup` taking quite a bit of time to parse a large TOML file to find the right `cargo` binary to invoke (see https://www.joshmcguigan.com/blog/rustup-overhead/):
```
$ hyperfine '/home/misawa/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/cargo run' 'cargo run' -w 10
Benchmark 1: /home/misawa/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/cargo run
  Time (mean ± σ):      9.5 ms ±  1.0 ms    [User: 7.1 ms, System: 2.8 ms]
  Range (min … max):    7.7 ms … 17.7 ms    216 runs

  Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet PC without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.

Benchmark 2: cargo run
  Time (mean ± σ):     81.8 ms ±  4.2 ms    [User: 72.7 ms, System: 8.5 ms]
  Range (min … max):   76.1 ms … 92.9 ms    34 runs

Summary
  '/home/misawa/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/cargo run' ran
    8.61 ± 1.04 times faster than 'cargo run'
```
`cargo +stable run` performs well, so relatively lower-hanging fruit would be #32, I guess...
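The `cargo +stable` speedup works because the rustup proxy can skip toolchain resolution. A hedged sketch of the same idea in code: `rustup which cargo` is a real subcommand that prints the concrete cargo path behind the proxy; resolving it once and invoking that path directly on later runs (the caching part is my assumption, not something rust-script does today) would skip the per-invocation proxy overhead measured above.

```rust
use std::process::Command;

// Ask the given rustup binary where the real `tool` lives, so later runs
// can invoke that concrete path directly instead of the proxy.
// Parameterized over the rustup binary name purely so it is easy to test.
fn toolchain_binary(rustup: &str, tool: &str) -> Option<String> {
    let out = Command::new(rustup).args(["which", tool]).output().ok()?;
    out.status
        .success()
        .then(|| String::from_utf8_lossy(&out.stdout).trim().to_string())
}
```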
> I also worry that whatever the .json writes are for will be broken if there are many concurrent runs of the script.

This part will be mitigated by #49.
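For context, a standard mitigation for concurrent writers of a small metadata file (a general sketch, not necessarily how #49 works): write to a process-unique temporary file, then atomically rename it over the target, so readers always see either the old or the new contents and never a torn write.

```rust
use std::fs;
use std::path::Path;

// Write `data` to `path` atomically: stage in a temp file named after this
// process's PID, then rename over the target. On POSIX filesystems, rename
// within one filesystem is atomic, so concurrent readers never observe a
// half-written file.
fn write_atomic(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension(format!("json.tmp.{}", std::process::id()));
    fs::write(&tmp, data)?;
    fs::rename(&tmp, path)
}
```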
Running a simple hello world via `rust-script` is 19 times as slow as running the executable directly, even after repeated reruns. The delay between pressing enter and the script actually running is very noticeable. One reason to use Rust for scripting is to avoid warm-up times, which is defeated by this flaw. Some data I've collected on a Raspberry Pi 4B:
```
$ time rust-script -o ./hello
    Finished release [optimized] target(s) in 0.02s
     Running `/home/user/.cache/rust-script/binaries/release/hello_<hash>`
Hello, world!

real    0m0.191s
user    0m0.119s
sys     0m0.072s

$ time cargo run -q --manifest-path /home/user/.cache/rust-script/projects/<hash>/Cargo.toml \
    --color always --target-dir /home/user/.cache/rust-script/binaries/
Hello, world!

real    0m0.186s
user    0m0.116s
sys     0m0.070s

$ time cargo +stable run -q --manifest-path /home/user/.cache/rust-script/projects/<hash>/Cargo.toml \
    --color always --target-dir /home/user/.cache/rust-script/binaries/
Hello, world!

real    0m0.066s
user    0m0.023s
sys     0m0.044s

$ time bash -c 'echo Hello, world!'
Hello, world!

real    0m0.014s
user    0m0.000s
sys     0m0.015s

$ time /home/user/.cache/rust-script/binaries/release/hello_<hash>
Hello, world!

real    0m0.010s
user    0m0.005s
sys     0m0.005s
```
Version 0.23.0 has been released, and it avoids `cargo` on subsequent runs. This speeds things up a lot; let me know how it works for you!
For comparison: on a good laptop the difference isn't so bad; ./h1 is only 30x faster than ./h3.rs. The times above are from a server with slow HDD disks.
I was excited to see an actively maintained Rust scripting crate, but this overhead is too much. I also worry that whatever the .json writes are for will be broken if there are many concurrent runs of the script.