AxisCommunications / acap-computer-vision-sdk-examples

Axis Camera Application Platform (ACAP) version 4 example applications that provide developers with the tools and knowledge to build their own solutions based on the ACAP Computer Vision SDK

Frequently asked questions #110

Closed: Corallo closed this issue 1 year ago

Corallo commented 1 year ago

This thread collects common questions that developers have asked about our examples.

Which cameras can I use to run these examples?

Only Artpec7 cameras equipped with an Edge TPU and Artpec8 cameras are supported at the moment.

Can I use a different model?

Yes! Ideally you can use any model; however, you have to make sure it is compatible with the hardware you want to run it on.

On CPU, you can run any model in the .tflite format.

On EdgeTPU, you have to verify that your model is compatible with the EdgeTPU. The easiest way is to try converting it to the EdgeTPU format with the EdgeTPU compiler and inspect the result. You can also check the supported operations when you build your model to see which operations can be used.
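For reference, here is a minimal sketch of how a .tflite model can be loaded with tflite_runtime, optionally through the EdgeTPU delegate; the model path and the dummy input are illustrative assumptions:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL_PATH = "model_edgetpu.tflite"  # hypothetical model compiled with the EdgeTPU compiler

# Try to load the EdgeTPU delegate; without it, the same code runs the model on CPU.
try:
    delegates = [load_delegate("libedgetpu.so.1")]
except (OSError, ValueError):
    delegates = []  # fall back to CPU-only execution

interpreter = Interpreter(model_path=MODEL_PATH, experimental_delegates=delegates)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
# Feed a dummy tensor with the model's expected shape and dtype.
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
output = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```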

On Artpec8, you can in theory run any tflite model quantized in int8 format. We recommend quantizing the model per tensor rather than per channel to obtain better performance. If you are interested, take a look at our guide about how to train and quantize a model for Artpec8. If some parts of your model can't be executed by the DLPU accelerator, they will automatically be sent to the CPU. This flexibility comes with a drop in performance: if you use a model that jumps between the DLPU and the CPU too often, the execution will be very slow. There is also a limit on how many times the inference can be handed over from the DLPU to the CPU, which is currently set to 16.
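As an illustration, full-integer post-training quantization could look like the sketch below. The `representative_frames` generator, the saved-model path, and the experimental per-channel switch are assumptions; in particular, the flag name may vary between TensorFlow versions:

```python
import tensorflow as tf

def representative_frames():
    # Hypothetical generator: yield ~100 preprocessed camera frames,
    # each shaped like the model input, as float32 batches of size 1.
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3), dtype=tf.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")  # assumed export path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_frames
# Force full-integer quantization so no float ops remain in the graph.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
# Experimental switch for per-tensor instead of per-channel weight
# quantization; availability depends on your TensorFlow version.
converter._experimental_disable_per_channel = True

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```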

Can I run models in other formats like ONNX or PyTorch?

Unfortunately, no. You could install the pytorch or onnx pip packages in your application, but these libraries won't have access to the hardware accelerators and will run your model only on the CPU.
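For completeness, this is roughly what CPU-only ONNX inference would look like with the onnxruntime pip package; the model path and dummy input are illustrative assumptions:

```python
import numpy as np
import onnxruntime as ort

# Only the CPU execution provider is usable on the camera; onnxruntime
# has no delegate to reach the DLPU or EdgeTPU.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_meta = session.get_inputs()[0]
# Replace any dynamic dimensions with 1 to build a dummy input.
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
outputs = session.run(None, {input_meta.name: np.zeros(shape, dtype=np.float32)})
```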

SSD MobileNet's performance is not good enough. Can I run EfficientDet or YOLOv5 to improve the accuracy?

Yes; however, these two models are heavier than SSD MobileNetV2.

EfficientDet

Some users have successfully used EfficientDet Lite0 from the Coral website on Artpec8, obtaining an inference time of 360 ms.

YOLOv5

We have also seen that it is possible to use the popular YOLOv5; we recommend using the official implementation from the Ultralytics repository. In their repository you can also find a script to convert their model to tflite (make sure to use the --int8 flag) and to EdgeTPU. Be aware that they quantize their model per channel, which is not the right quantization technique to get the best performance out of Artpec8 cameras.

We have collected some results from tests done with the YOLOv5 model:

- On Artpec8, using the "small" version of YOLOv5 and input size 640x640, the inference takes 1200 ms.
- On Artpec7, using the "nano" version of YOLOv5 and input size 224x224, the inference takes 30-40 ms.

How do I make my model faster on Artpec8 cameras?

To make your model faster on Artpec8 cameras, you should make sure it is optimized for the hardware accelerator. First, verify that your model doesn't have ops that are executed on the CPU (e.g. dequantize -> float op -> requantize) in the middle of the graph. Layers like this prevent the accelerator from processing your inference and hand the task over to the CPU, which makes your network slower. Another way to maximize performance is to quantize your network per tensor; see our guide. Standard Conv2D blocks should be preferred over DepthwiseConv2D. A quick way to check for both issues is to inspect the tensor details of the converted model, as sketched below.
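A minimal inspection sketch, assuming a converted model file is at hand (the file name is an assumption): it flags float32 tensors inside the graph, a sign of dequantize/requantize hops, and per-channel quantized weights, recognizable by more than one scale per tensor:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter  # tf.lite.Interpreter works too

interpreter = Interpreter(model_path="model_int8.tflite")  # assumed file name
interpreter.allocate_tensors()

for t in interpreter.get_tensor_details():
    # Float tensors inside an int8 graph suggest ops falling back to the CPU.
    if t["dtype"] == np.float32:
        print(f"float32 tensor in graph: {t['name']}")
    # More than one quantization scale means the tensor is quantized per
    # channel; per-tensor quantization has a single scale.
    scales = t["quantization_parameters"]["scales"]
    if len(scales) > 1:
        print(f"per-channel quantized tensor: {t['name']} ({len(scales)} scales)")
```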

My network has a low accuracy when used on camera frames, why? How do I improve it?

If you are using a network pretrained on the COCO dataset, you should be aware that the data distribution of that dataset can differ from the distribution of camera frames. Thus, it is better to fine-tune your network using some camera frames, as in the sketch below.
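A minimal transfer-learning sketch with Keras, illustrating the idea on a classification backbone; the dataset directory, backbone choice, and hyperparameters are assumptions, and fine-tuning a detection model follows the same freeze-then-train pattern within its own training pipeline:

```python
import tensorflow as tf

# Hypothetical directory of labeled camera frames, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "camera_frames/", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pretrained features, adapt only the head first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```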

Ask your question

If you didn't find what you were looking for, feel free to open a new thread in the Discussions tab!