
Document how to run benchmarks #4521

Open · Hexxeh opened this issue 3 years ago

Hexxeh commented 3 years ago

Hi!

I'm wondering how the performance and stack usage benchmark output seen in some PR comments can be generated. I noticed https://github.com/jerryscript-project/jerryscript/blob/8edf8d6eea4327dd83b7fabddcae4ea23bf98fb9/tools/runners/run-benchmarks.sh, which seems related, but I'm not sure where the test-case files it references live.

I'm interested in measuring the collective performance improvement between an older commit and current master, to understand how performance has improved over time.
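For illustration, the kind of comparison I have in mind is roughly the following (a minimal sketch only; `build-old/bin/jerry`, `build-master/bin/jerry` and `bench.js` are placeholder paths, not files from this repository):

```python
#!/usr/bin/env python3
# Illustrative only: times the same script under two locally built jerry
# binaries. All paths below are placeholders, not files from this repo.
import subprocess
import time

OLD_JERRY = "./build-old/bin/jerry"     # binary built from the older commit
NEW_JERRY = "./build-master/bin/jerry"  # binary built from current master
SCRIPT = "./bench.js"                   # any local benchmark script
RUNS = 10

def median_runtime(binary, script, runs):
    """Run `binary script` several times and return the median wall-clock time."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run([binary, script], check=True, stdout=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[len(samples) // 2]

old = median_runtime(OLD_JERRY, SCRIPT, RUNS)
new = median_runtime(NEW_JERRY, SCRIPT, RUNS)
print(f"old: {old:.4f}s  new: {new:.4f}s  speedup: {old / new:.2f}x")
```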

Thanks! Liam

rerobika commented 3 years ago

Hi @Hexxeh!

The bench results you see in several PRs are created with an internal benchmark system that runs on an RPi2. TBH I don't know much about run-benchmarks.sh, but let me cc @galpeter or @bzsolt.

bzsolt commented 3 years ago

Hi!

Currently I can't put any effort into dealing with this issue.

In the tools directory there are a few outdated tools; as far as I know, they require some patching to run again.

... or maybe we can replace all of them with a Python variant (e.g. benchmark.py).

What would it really need, and how? Mostly, this is a text-processing task.
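For example, a hypothetical benchmark.py along these lines could be enough (a rough sketch only; the engine path and test directory are whatever the caller supplies, and none of this exists in the repo yet):

```python
#!/usr/bin/env python3
# Hypothetical sketch of a benchmark.py driver: run the engine over a
# directory of *.js files and print the timings. Nothing here is an
# existing tool; engine path and test directory are caller-supplied.
import argparse
import pathlib
import subprocess
import time

def time_one(engine, test):
    """Return the wall-clock time of a single engine run on one test file."""
    start = time.perf_counter()
    subprocess.run([engine, str(test)], check=True, stdout=subprocess.DEVNULL)
    return time.perf_counter() - start

def main():
    parser = argparse.ArgumentParser(description="Run JS benchmarks and print timings.")
    parser.add_argument("engine", help="path to the jerry binary")
    parser.add_argument("testdir", help="directory containing *.js benchmark files")
    parser.add_argument("--runs", type=int, default=5)
    args = parser.parse_args()

    for test in sorted(pathlib.Path(args.testdir).glob("*.js")):
        times = [time_one(args.engine, test) for _ in range(args.runs)]
        print(f"{test.name:<30} best: {min(times):.4f}s  avg: {sum(times) / len(times):.4f}s")

if __name__ == "__main__":
    main()
```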

I hope it helps!

Hexxeh commented 3 years ago

Are the test-case JS files you mentioned available, so that somebody could replicate the timing scripts etc.?