Closed. aqibsaeed closed this issue 2 years ago.
I was able to run it on an Android phone, but I have another question about the inference times.
The benchmark output says "Inference timings in us". What does "us" mean? Is it microseconds?
I got this from https://docs.larq.dev/compute-engine/end_to_end/
Good to hear that you got it working!
> "Inference timings in us" What does "us" mean? Is it microseconds?
That is correct: "us" here means microseconds (µs).
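Since the tool reports per-inference timings in microseconds, a quick conversion is handy when comparing against latency numbers quoted in milliseconds or throughput in inferences per second. A minimal sketch (the 45872 figure is purely illustrative, not a number from this thread):

```python
def us_to_ms(timing_us: float) -> float:
    """Convert a benchmark timing from microseconds to milliseconds."""
    return timing_us / 1000.0

def inferences_per_second(timing_us: float) -> float:
    """Approximate throughput from a per-inference time given in microseconds."""
    return 1_000_000 / timing_us

avg_us = 45_872  # illustrative average inference time in microseconds
print(f"{avg_us} us = {us_to_ms(avg_us):.3f} ms, "
      f"~{inferences_per_second(avg_us):.1f} inferences/s")
```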
Hi,
Is it possible to run benchmarking on a model (e.g., a custom model built and trained with Larq) as described in the Android phone section here: https://docs.larq.dev/compute-engine/benchmark/?
Thanks in advance.
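For reference, the Android workflow in the linked docs boils down to pushing the benchmark binary and your converted model to the phone over `adb` and running it there. A sketch of those steps, assuming you have already converted your custom Larq model to a `model.tflite` flatbuffer (e.g. with `larq_compute_engine.convert_keras_model`) and obtained a prebuilt `lce_benchmark_model` binary for your phone's CPU architecture; both file names here are illustrative:

```shell
# Sketch only: benchmark a converted Larq model on a connected Android device.
if command -v adb >/dev/null 2>&1; then
  # Copy the benchmark binary and the converted model to the device.
  adb push lce_benchmark_model /data/local/tmp
  adb shell chmod +x /data/local/tmp/lce_benchmark_model
  adb push model.tflite /data/local/tmp
  # Run the benchmark; timings in the output are reported in microseconds.
  adb shell /data/local/tmp/lce_benchmark_model \
      --graph=/data/local/tmp/model.tflite --num_runs=50
  status="ran"
else
  # No Android tooling on this machine; nothing to do.
  echo "adb not found; install Android platform-tools and connect a device"
  status="skipped"
fi
```

The benchmark binary accepts the usual TensorFlow Lite benchmark-tool flags (e.g. `--num_runs`, `--num_threads`), so the same invocation works for any custom `.tflite` model, not just the example models from the docs.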