AIWintermuteAI / aXeleRate

Keras-based framework for AI on the Edge
MIT License

Support for yolov4/5 #38

Closed. rickymedrano closed this issue 3 years ago

rickymedrano commented 3 years ago

Do you plan to support yolov4 (darknet) or yolov5 (pytorch)?

AIWintermuteAI commented 3 years ago

Well, I won't be using PyTorch or Darknet - aXeleRate is based on tf.keras and that won't change. I could port YOLOv4 or v5 to TensorFlow, or use an existing port, but I do not think v4 or v5 are worth it. An upgrade to v3, however, is necessary and long overdue.

Do you have a specific use case for v4 or v5?

rickymedrano commented 3 years ago

@AIWintermuteAI I was hoping to use the increased FPS of YOLOv4 or YOLOv5 for object detection on a Jetson Nano, for near real-time detection. I converted my yolov5s model to .onnx, then used your onnx_to_rt.py conversion successfully, but when running classifier_video.py with the .plan file I get a ValueError: could not broadcast input array from shape (150528) into shape (519168), which I assume has to do with aXeleRate not supporting YOLOv5.
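For reference, the two numbers in the error look consistent with flattened RGB frames at two common input resolutions (this is only an inference from the numbers; the traceback does not name the layers involved):

```python
# Quick sanity check on the two shapes from the ValueError,
# assuming 3-channel RGB input tensors (an assumption; the
# traceback does not say which buffers these are).
classifier_input = 224 * 224 * 3   # typical classifier input size
detector_input   = 416 * 416 * 3   # typical YOLO input size

print(classifier_input)  # 150528 -> matches the "from" shape
print(detector_input)    # 519168 -> matches the "into" shape
```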

AIWintermuteAI commented 3 years ago

There is confusion about quite a few things here, so let me clarify. YOLO is a detector architecture, which is different from a classifier. To understand the difference, look at this picture with cats :) [image] This is why running classifier_video.py with an object detection model produces an error; it has nothing to do with YOLOv5 support.

aXeleRate is a package that is meant for training and converting CV neural network models, and you are supposed to use the example scripts provided for inference with models trained using aXeleRate. This is not specific to my framework - different network architectures have different outputs and thus require different post-processing (parsing the inference results). You cannot just take a random model, take an inference script from another repository, and expect it to work.
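To make the point concrete, here is a rough sketch (not aXeleRate's actual parsing code; the function names and tensor layout are illustrative assumptions) of how differently the two kinds of output have to be handled:

```python
import numpy as np

def parse_classifier_output(probs, labels):
    # Classifier: output is one probability per class -> take the argmax.
    idx = int(np.argmax(probs))
    return labels[idx], float(probs[idx])

def parse_detector_output(grid, conf_threshold=0.5):
    # YOLO-style detector: output is a grid of candidate boxes, each with
    # coordinates, an objectness score and per-class scores. The raw tensor
    # must be decoded and filtered (and normally run through NMS) before
    # it means anything.
    boxes = []
    for cell in grid.reshape(-1, grid.shape[-1]):
        x, y, w, h, objectness = cell[:5]
        class_scores = cell[5:]
        score = float(objectness * class_scores.max())
        if score > conf_threshold:
            boxes.append((x, y, w, h, int(class_scores.argmax()), score))
    return boxes
```

Feeding a detector tensor into the classifier parsing path (or vice versa) fails exactly the way you saw.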

Having said all of that, NVIDIA dev boards are a special case here - I would recommend using their Transfer Learning Toolkit for training the object detector models. It is a bit of a hassle to get it running with Docker, but the end result will be better, since it is already tailored to NVIDIA boards.

https://developer.nvidia.com/transfer-learning-toolkit

Use a MobileNet v1 or v2 backbone and a detection layer of your choice - it could be YOLO or DetectNet.
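For illustration, the "backbone plus detection layer" split can be sketched in plain tf.keras, since that is what aXeleRate builds on (this is a generic sketch, not TLT or aXeleRate code; grid size, anchor count and class count are arbitrary assumptions):

```python
import tensorflow as tf

# Generic MobileNetV2 backbone with a simple YOLO-style detection head.
NUM_CLASSES = 2
NUM_ANCHORS = 5

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)

x = backbone.output                       # feature map, e.g. 7x7x1280
x = tf.keras.layers.Conv2D(
    NUM_ANCHORS * (4 + 1 + NUM_CLASSES),  # box coords, objectness, class scores
    kernel_size=1)(x)

detector = tf.keras.Model(backbone.input, x)
detector.summary()
```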