larq / compute-engine

Highly optimized inference engine for Binarized Neural Networks
https://docs.larq.dev/compute-engine
Apache License 2.0

Benchmarking custom model #735

Closed aqibsaeed closed 2 years ago

aqibsaeed commented 2 years ago

Hi,

Is it possible to run benchmarking on a custom model built/trained using larq, as described in the Android phone section here: https://docs.larq.dev/compute-engine/benchmark/?

Thanks in advance.

aqibsaeed commented 2 years ago

I was able to run it on an Android phone, but I have another question about inference times.

"Inference timings in us" what does "us" mean? is it micro seconds?

aqibsaeed commented 2 years ago

Got it from here. https://docs.larq.dev/compute-engine/end_to_end/

Tombana commented 2 years ago

Good to hear that you got it working!

"Inference timings in us" what does "us" mean? is it micro seconds?

That is correct: "us" stands for microseconds (µs).
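For reference, converting the reported unit into more familiar ones is a one-liner (the timing value below is hypothetical, not from an actual benchmark run):

```python
# The benchmark reports timings in microseconds (us).
timing_us = 1500.0                 # hypothetical value read from the output
timing_ms = timing_us / 1_000      # microseconds -> milliseconds
timing_s = timing_us / 1_000_000   # microseconds -> seconds
print(timing_ms, timing_s)         # 1.5 ms, 0.0015 s
```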