jatinchowdhury18 / RTNeural

Real-time neural network inferencing
BSD 3-Clause "New" or "Revised" License

Benchmark results #132

Open 7sharp9 opened 3 months ago

7sharp9 commented 3 months ago

I wonder if any benchmark results could be published, so that when testing the library we have a comparison to check that everything is running ok?

What kind of result would you expect when running the command:

./build/rtneural_model_bench
Measuring non-templated model...
Processed 100 seconds of signal in 117.635 seconds
0.850086x real-time

That's what I get in release mode with the default backend. Is that expected, or would you expect faster than real-time?

jatinchowdhury18 commented 3 months ago

Hello!

The benchmarks included in this repository are run regularly as part of the CI pipeline. Here is the latest result from the CI benchmarks using the Eigen backend.

Run ./build/rtneural_model_bench
Measuring non-templated model...
Processed 100 seconds of signal in 2.5133 seconds
39.1952x real-time

I don't know exactly what CPU was used to run the benchmarks, either in the CI pipeline or in your local tests, but I would expect most consumer CPUs to perform as well as or better than the CI runner.

I would be curious to see the commands you're using to compile the benchmarks, as well as how you're configuring the benchmarks via CMake.
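
For reference, configuring and building the benchmarks in Release mode should look something like the sketch below. The BUILD_BENCH option name is from memory, so please double-check the exact option names against the repository's CMakeLists.txt.

# Configure in Release mode with the benchmarks enabled
# (the BUILD_BENCH option name is assumed; verify in CMakeLists.txt)
cmake -Bbuild -DCMAKE_BUILD_TYPE=Release -DBUILD_BENCH=ON
# Build the benchmark targets
cmake --build build --config Release
# Run the model benchmark
./build/rtneural_model_bench

Building without CMAKE_BUILD_TYPE=Release (i.e. a Debug build) disables compiler optimizations and slows Eigen-based code down dramatically, so that is the first thing worth checking.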

If you are trying to run the benchmarks on a very low-power CPU (maybe some embedded device), then the benchmark model may indeed be too much computation to run in real time on that device. In that case, you may want to run the individual "layer" benchmarks to get a sense of where the computational limits are for that device.
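
From memory (please double-check the bench sources for the exact executable name and argument order), running an individual layer benchmark looks roughly like:

# Benchmark a single layer type; arguments here are assumed to be
# <layer> <signal length in seconds> <in_size> <out_size>
./build/rtneural_layer_bench dense 10 16 16

Comparing the real-time factor for individual layers at different sizes should make it clear where the computational limits of the device are.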