-
Raised by @MridulS at the Feb 27 prep-meeting.
-
Performance is an important strength of CuPy.
For better quality assurance, it would be useful to have a fixed vantage point from which to observe performance over time.
By having a baseline set of benchmarks, we can:
*…
-
We have some benchmarks (see also #364), but currently I run them by hand.
This should be automated as much as possible: run the benchmarks on each master commit automatically, for example using the [Ai…
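As a rough sketch of what such automation could look like, here is a hypothetical GitHub Actions workflow that runs asv on each push to master. All names (job, Python version, workflow layout) are assumptions, not an existing setup, and GitHub-hosted runners have no GPU, so GPU benchmarks would need a self-hosted runner:

```yaml
# Hypothetical sketch: run asv benchmarks on every master commit.
name: benchmarks
on:
  push:
    branches: [master]
jobs:
  asv:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0      # asv needs full git history to attribute results
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install asv
      - run: asv machine --yes
      - run: asv run "HEAD^!"   # benchmark only the new commit
```

Publishing the results (e.g. `asv publish` to a static site) would be a separate step on top of this.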
-
Looks like [Strict Reference mode](http://velocity.apache.org/engine/1.7/user-guide.html#strict-reference-mode) is not supported, correct?
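For context, in plain Velocity 1.7 strict reference mode is turned on with a single runtime property; whether the tool in question passes this through is the open question:

```properties
# velocity.properties — enables strict reference mode in Velocity 1.7+
runtime.references.strict = true
```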
-
The basic benchmarks we have implemented are not part of CI yet. I need to do a bit of research on how we can do that, and on what we should compare the benchmarks against.
Can we use GitHub runners or do we …
-
Is it possible to run the benchmarks for each published tag of a repo?
Currently I am using `--steps` to limit the number of commits it selects. It would be better if one could run against each of th…
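One way this might work today: asv accepts an explicit list of revisions via `HASHFILE:`, so the tags could be collected with git and fed in directly. A sketch, assuming asv is already configured for the repo and that tags follow a `v*` pattern:

```shell
# Benchmark every published tag instead of sampling commits with --steps.
git tag --list 'v*' > tags.txt
asv run HASHFILE:tags.txt
```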
-
What are the core features? Are `mem.py` and `peakmem.py` really core?
-
@priseborough It would be awesome to have an explanation of the drag fusion parameters and how to tune them.
https://dev.px4.io/en/advanced/parameter_reference.html
-
A `run()` function is called in `asv/benchmark.py`, but it is not defined anywhere.
Fixing this requires defining the function before it is used.
To reproduce the bug, run:
…
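A minimal, hypothetical sketch of the shape of the fix (this is not the actual `asv/benchmark.py` code, just an illustration of defining the function before its call site):

```python
# Hypothetical sketch: the function is defined before the module-level
# call site, so the call no longer raises NameError.

def run(benchmark_name):
    """Placeholder for the missing function; real logic would go here."""
    return f"ran {benchmark_name}"

# The call site that previously failed now resolves.
result = run("time_example")
```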
-
Currently, all of the benchmark classes in `benchmarks/benchmarks.py` define two identical benchmarks, and differ only in their additional parameters and their `setup()` function.
Ideally, we should re…
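One way the deduplication could look, as a plain-Python sketch using asv's documented conventions (`params`, `setup()`, `time_*` prefixes). The class and method names here are hypothetical, not the repo's actual ones, and whether asv also collects the base class itself may need separate handling:

```python
# Sketch: a shared class holds the two common benchmarks; subclasses
# override only params and setup(), inheriting the time_* methods.

class Bench:
    params = [10, 100]
    param_names = ["n"]

    def setup(self, n):
        self.data = list(range(n))

    # The two benchmarks shared by every class:
    def time_sum(self, n):
        sum(self.data)

    def time_sorted(self, n):
        sorted(self.data)


class BenchReversed(Bench):
    # Only the setup differs; time_sum and time_sorted are inherited.
    def setup(self, n):
        self.data = list(range(n))[::-1]
```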