Closed: DiTo97 closed this issue 1 year ago
Federico, we currently only support x86 (Intel and AMD) architectures. We have been working on ARM support over the past year and currently have Alpha support for DeepSparse on AWS Graviton and Ampere.
We will be releasing general support for ARM servers over the coming months and will then pivot to working on lower-powered embedded devices like the Raspberry Pi and boards from vendors like Rockchip, TI, NXP, Qualcomm, MediaTek, etc.
We have not yet benchmarked on these systems, but we have seen our technology work very well on lower-powered x86 platforms and expect to offer real-time performance on embedded devices.
If you would like early access to the ARM support, please sign up at https://neuralmagic.com/deepsparse-arm-waitlist/
Thanks! Rob
@rib-2 any update on ARM platform support? I saw the https://neuralmagic.com/blog/yolov8-detection-10x-faster-with-deepsparse-500-fps-on-a-cpu/ blogpost and I'm very curious to see inference performance on the Jetson Orin platform.
As we have to train a custom object detection model for the edge that should run fully on CPU on a Raspberry Pi 4 board, I am considering fine-tuning a custom YOLOv8 model optimized with DeepSparse.
Assuming the YOLOv8 guide also works for custom YOLOv8 models beyond the original ones from the Ultralytics repository, I was wondering whether you now support ARM devices as well: #401 and #494 have been closed, but I could not find any clear indication over the last six months.
If so, do you have any idea of the throughput a YOLOv8 model could achieve on a Raspberry Pi 4, or should I look for more efficient models (e.g., YOLOv5, EfficientDet, MobileNetV2, etc.)?
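In case it helps anyone comparing candidate models on the Pi: since throughput depends entirely on the final model and runtime, a quick way to compare options once any CPU pipeline is running is a small timing harness. This is just a sketch; `measure_fps` and `dummy_infer` are hypothetical names, and the harness works with whatever inference callable you end up with (e.g., a DeepSparse pipeline wrapped in a lambda over a fixed input).

```python
import time

def measure_fps(infer, warmup=5, iters=50):
    """Estimate single-stream throughput (inferences/sec) of a callable.

    Runs a few warmup calls first so one-time costs (lazy allocation,
    caching) do not skew the measurement, then times `iters` calls.
    """
    for _ in range(warmup):
        infer()
    start = time.perf_counter()
    for _ in range(iters):
        infer()
    elapsed = time.perf_counter() - start
    return iters / elapsed

# Stand-in for a real model call, e.g. a compiled YOLOv8 pipeline
# invoked on one pre-loaded image.
def dummy_infer():
    sum(i * i for i in range(10_000))

print(f"~{measure_fps(dummy_infer):.1f} inferences/sec")
```

For DeepSparse specifically, the project also ships a `deepsparse.benchmark` CLI that reports latency and throughput for an ONNX model, which is probably the easier route once ARM wheels are available.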