Closed — saswat0 closed this issue 11 months ago
Hi @saswat0, it looks like the backend options you would normally use to specify this for the NEON backend are not currently supported in pyarmnn. I'm currently looking into adding them and will reply to this issue when I have more information.
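For anyone landing here from the C++ side: the Arm NN C++ API (not pyarmnn) does expose per-backend options on the optimizer, and the CpuAcc (NEON) backend accepts a `NumberOfThreads` option. The sketch below is illustrative only — the option name and its effect depend on your Arm NN version, and the function name `MakeOptimizerOptions` is just a placeholder for this example:

```cpp
// Illustrative sketch (Arm NN C++ API, not pyarmnn): request a thread count
// for the NEON/CpuAcc backend via BackendOptions on the optimizer.
// "NumberOfThreads" is a CpuAcc model option; availability depends on the
// Arm NN version in use, so verify against your release's documentation.
#include <armnn/ArmNN.hpp>

armnn::OptimizerOptions MakeOptimizerOptions(unsigned int numThreads)
{
    armnn::OptimizerOptions options;

    // Attach a CpuAcc-specific option asking the backend to use
    // `numThreads` worker threads for inference.
    armnn::BackendOptions cpuAcc("CpuAcc",
    {
        { "NumberOfThreads", numThreads }
    });
    options.m_ModelOptions.push_back(cpuAcc);
    return options;
}
```

These options would then be passed to `armnn::Optimize()` when preparing the network; at the time of this thread, pyarmnn had no way to forward them.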
Best Regards,
Finn
Hi, is there any update on the multi-threading options in pyarmnn?
Closing this issue: Pyarmnn is being deprecated in the 23.08 release, so multi-threading support for the NEON backend will not be added.
I was running certain models for benchmarking purposes and noticed that pyarmnn uses only one core during inference (screenshot attached below).
Is there any way I can enable multi-core usage? This would bring inference times down by a large factor. My node has two CPUs (one with 4 cores and another with 2), so sharing the work across 4 or 6 cores would also be fine.
I found the similar threads #92 and #75, but the answers seemed inconclusive for pyarmnn. Any help in this regard would be greatly appreciated.
Thanks in advance