Closed hghari closed 3 years ago
@hghari I'm not qualified to answer this as I have no experience with the cited hardware, but I'll leave this open for community support! Good luck.
@hghari I'm working on this direction as well - though I don't yet have a solution.
Let's stay in touch throughout the process :D
Did you start by converting ONNX to OpenVINO?
@glenn-jocher thanks. @Jacobsolawetz I would like that, thanks. Exactly: I use export.py to convert the model to ONNX (with opset=10) and then use OpenVINO to convert this model to IR (bin and xml).
@hghari, very nice, @jimsu2012 and I did a similar conversion.
We just received the NCS in the mail today so will be trying to deploy in the next few days.
We will keep you posted of any success there!
@Jacobsolawetz Looking forward to hearing from you.
@Jacobsolawetz hi, I gave up on the YOLOv5 model because of inconsistencies between the CPU and NCS2 results. Please inform me if you have any success. Thanks
@hghari makes sense - none yet. Will post here if I find some success.
I am working on this issue as well. There are two problems:
1) There's a bug in the code: the self.export and self.training flags don't interact as they should. Setting self.export = True does not set self.training to False, so you only get the raw bounding-box heads, i.e. three outputs of size 1x3x80x80x9, 1x3x40x40x9 and 1x3x20x20x9. I have checked, and they match (to a good approximation) the outputs of the PyTorch model.
2) If you resolve the self.export / self.training issue, the model converts successfully with opset=11, but the ONNX conversion fails with opset=10.
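Since export mode yields only the three raw head tensors, the box decoding normally done inside the Detect layer has to be reproduced in post-processing. Below is a minimal numpy sketch of the standard YOLOv5 decode for one head; the anchor values, the stride of 8, and the assumption that the 9 channels are 4 box terms + objectness + 4 classes are illustrative, not taken from the thread.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_head(raw, anchors, stride):
    """Decode one raw YOLOv5 head (1, na, ny, nx, 5+nc) into pixel-space xywh+obj+cls."""
    _, na, ny, nx, _ = raw.shape
    y = sigmoid(raw)
    # grid of cell offsets, broadcastable to (1, na, ny, nx, 2)
    gx, gy = np.meshgrid(np.arange(nx), np.arange(ny))
    grid = np.stack((gx, gy), axis=-1)[None, None]
    out = y.copy()
    out[..., 0:2] = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride                    # box center xy
    out[..., 2:4] = (y[..., 2:4] * 2.0) ** 2 * anchors[None, :, None, None, :]   # box wh
    return out.reshape(1, -1, raw.shape[-1])

# example: the 80x80 head at stride 8 (anchor pairs here are just placeholders)
anchors = np.array([[10, 13], [16, 30], [33, 23]], dtype=np.float32)
raw = np.zeros((1, 3, 80, 80, 9), dtype=np.float32)
pred = decode_head(raw, anchors, stride=8)
print(pred.shape)  # (1, 19200, 9)
```

With an all-zero input, every sigmoid evaluates to 0.5, so the objectness column is exactly 0.5 and the first cell's x-center is (0.5 * 2 - 0.5 + 0) * 8 = 4.0, which is a quick sanity check for the decode.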
@hghari hi, which model did you use to be able to convert to ONNX and eventually to the OpenVINO IR? I'm using OpenVINO 2020.1 and PyTorch 1.5, and it seems I'm stuck converting the ONNX model of yolov5s (I edited the export script to use opset 10) to OpenVINO.
I used the model provided on github
The same struggle here, please post any progress you might have!
Using the latest OpenVINO, I managed to convert to IR, although with the weird behavior mentioned in this response.
I decided not to use YOLOv5 and went with v4 instead, but I think you will have to play with the export script to make it functional.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Didn't try but this article seems to deal with the same problem: https://medium.com/analytics-vidhya/the-battle-to-run-my-custom-network-on-a-movidius-myriad-compute-stick-c7c01fb64126
@hghari Hi, how do you convert YOLOv5 to OpenVINO? Could you share the method? Thanks.
I may be late for the party, but I managed to run a yolov5 network on the NCS2. These were the steps that worked for me:
- export model to ONNX with export script (arguments: --img-size 640 --batch-size 1)
- convert to openvino IR with mo.py --input_model my_model.onnx -s 255 --data_type FP16 --output_dir ir_dir
The generated IR should run on the NCS2 and return the same output as CPU inference
Brother, I can't get the correct result using the method you described. When using mo.py to convert to an IR model: if I don't add "-s 255", the resulting model detects correctly on the CPU, but not on the NCS2. But after adding "-s 255", I can't get correct detections on either the CPU or the NCS2.
The flag -s 255 sets the expected scale of the input image. I guess you perform a normalization of the image to the range 0-1 before inference (something like img /= 255). Make sure your input is in the range 0-255 by excluding this normalization when using a model converted with -s 255. Without -s 255, use the 0-1 range instead.
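In other words, the -s 255 scale is baked into the IR, so the division by 255 must happen exactly once. A small numpy sketch of the two equivalent pipelines (the "internal" division below simulates what the converted IR applies; the image is a synthetic placeholder):

```python
import numpy as np

# a deterministic fake image covering the full 0-255 pixel range
img = (np.arange(640 * 640 * 3) % 256).astype(np.uint8).reshape(640, 640, 3)

# Pipeline A: IR converted WITHOUT -s 255 -> normalize in your own code
input_a = img.astype(np.float32) / 255.0

# Pipeline B: IR converted WITH -s 255 -> feed raw 0-255 pixels;
# the division below is what the converted IR applies internally
input_b = img.astype(np.float32)
internal = input_b / 255.0

assert np.allclose(input_a, internal)  # both models end up seeing identical data
```

Dividing twice (normalizing in Python and converting with -s 255) squashes the input into the range 0-1/255, which matches the "wrong on both CPU and NCS2" symptom described above.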
@hghari For the conversion of YOLOv5 to OpenVINO you can refer to yolov5_demo. The results on the NCS may differ from the CPU because of the different optimization methods. If you want to get correct results on the NCS, please contact me.
@violet17 @Jacobsolawetz @yurikleb @hghari @usamahjundia good news 😃! Your original issue may now be fixed ✅ in PR #6057. This PR adds native YOLOv5 OpenVINO export:
python export.py --weights yolov5s.pt --include openvino # export to OpenVINO
To receive this update:
- Git: run git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
- PyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)
- Docker: sudo docker pull ultralytics/yolov5:latest to update your image

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
I am trying to get my own Yolov5L model running on the Raspberry Pi (B, v1.2) with the NCS2. But I get extremely bad results. Is it normal that the NCS2 performs so much worse than a CPU?
Compared to the output from the CPU inference, the decimal places from the NCS2 results are very inaccurate. Does this have something to do with the FP16 conversion?
Can anyone give me tips for the yolov5 inference workflow in Python on the NCS2? I already exported the model as FP16 and followed the structure of detect.py. But the results are so bad...
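One common stumbling block in such a workflow is reproducing detect.py's letterbox preprocessing before feeding the IR. Here is a minimal numpy sketch of letterboxing; the nearest-neighbour resize stands in for the cv2.resize used in practice, and the 640 target size and 114 padding color are the usual YOLOv5 defaults, assumed here for illustration:

```python
import numpy as np

def letterbox(img, new_shape=640, color=114):
    """Resize while keeping aspect ratio, then pad to a new_shape square."""
    h, w = img.shape[:2]
    r = min(new_shape / h, new_shape / w)          # scale ratio
    nh, nw = int(round(h * r)), int(round(w * r))  # resized dimensions
    # nearest-neighbour resize via index sampling (cv2.resize in real pipelines)
    ys = (np.arange(nh) / r).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / r).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    # paste onto a padded square canvas, centered
    canvas = np.full((new_shape, new_shape, img.shape[2]), color, dtype=img.dtype)
    top = (new_shape - nh) // 2
    left = (new_shape - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas, r, (left, top)

img = np.zeros((720, 1280, 3), dtype=np.uint8)
boxed, r, (left, top) = letterbox(img)
print(boxed.shape, r, left, top)  # (640, 640, 3) 0.5 0 140
```

The returned ratio and padding offsets are what you later need to map detections back to the original image coordinates, exactly as detect.py's scale_coords step does.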
@ca-schue Hi, I'm trying to run a YOLO model on a Raspberry Pi with the NCS2 as well. Did you manage to do it? If so, would you mind sharing your code for inference?
Yes, I have done it. This blog records the details and code. But if conditions permit, I strongly recommend against the Raspberry Pi plus NCS2 solution. The speed is really slow even with the NCS2 (about 2 fps). The Jetson Nano may be a better solution (about 15 fps without any acceleration).
@Rainbowman0 Hi, thanks for the reply. Do you have an English version of this document? I can't fully access this website and I do not speak Chinese. If possible, can you provide your email or contact details, as I have a few questions I would like to ask?
@Rainbowman0 when I try to convert from ONNX to IR I get the following error, do you know how to solve it?
C:\Program Files (x86)\Intel\openvino_2021.4.582\deployment_tools\model_optimizer>python mo.py --input_model=yolov5s.onnx --model_name yolov5OV --scale=255 --data_type=FP16
Model Optimizer arguments: Common parameters:
[ ERROR ] ---------------- END OF BUG REPORT --------------
Can you just tell us how to fix it?
The command is correct. The reason for my poor results was that I forgot the -s 255 parameter to normalize the color space. It is important that Python (or Anaconda) is run as administrator/root.
I think @violet17 is talking about non-max suppression (NMS). The NMS code from yolov5/general.py should work. Strangely, the inference behaves differently for images over about 1000 px. For example, if inference is run five times in a row on P6 models like yolov5s6 at 1280 px with the same image, only the result of every second inference is correct. I think there is an overflow or memory leak somewhere in OpenVINO.
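For completeness, the greedy NMS step being discussed can be sketched in plain numpy. This is a simplified single-class version, not the yolov5/general.py implementation; the xyxy box format and the 0.45 IoU threshold follow the usual YOLOv5 conventions:

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS on xyxy boxes; returns indices of the kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]   # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # IoU of the top-scoring box against the remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thres]  # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]
```

The second box overlaps the first with IoU 0.81 and is suppressed, while the disjoint third box survives, which is the behavior the raw head outputs from the exported model require before drawing detections.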
I have successfully converted the ONNX model to an OpenVINO model. Running detect.py with the yolov5_openvino_model weights works great; however, when I used the xml/bin in an OpenVINO environment with the object-detection code provided by Intel, it just returns a black screen in cv2.imshow(). Any idea on this?
@glennford49 if you have problems running Intel code you should probably raise that with Intel
@glennford49 which OpenVINO environment are you using? The current export.py --include openvino converts to the OpenVINO 2022 version. Are you running OpenVINO on Windows or another system? If you're using OpenVINO 2021 or a previous version, you need to convert to ONNX and use the Model Optimizer from OpenVINO to convert to the IR format. You may refer to this thread if you're interested in how I managed to solve my problem: https://github.com/openvinotoolkit/openvino/issues/11458
Hi @glennford49 @Averen19 @Humni @ca-schue @Sanoronas @violet17 @hghari
sorry to resurrect an old thread. Did anyone ever get inference running with an NCS2?
I've got an NCS2, but the documentation from Intel is absolutely dreadful (in my opinion). I've never been able to put it to use, and I've spent a fair amount of time trying.
Maybe I should just give up and use my DepthAI/Luxonis device or Jetson?
Andrew
I had the same problem, and it troubled me for two days. I was using yolov5 tag v4. Finally, this thread helped a lot: https://github.com/openvinotoolkit/openvino/issues/11458 Update your yolov5 to tag v6.1 and follow the commands below.
wget https://github.com/ultralytics/yolov5/releases/download/v6.1/yolov5s.pt
git clone https://github.com/ultralytics/yolov5
pip install -r yolov5\requirements.txt
pip install onnx
python yolov5\export.py --weights yolov5s.pt --include onnx
python "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\mo.py" --input_model yolov5s.onnx --scale 255 --reverse_input_channels --output Conv_198,Conv_217,Conv_236 --data_type FP16
@bt5-coder OpenVINO models should work with NCS2 by setting L370 here to MYRIAD: https://github.com/ultralytics/yolov5/blob/27d831b6e4ae4b0286ba0159f5c8542e052cd3c9/models/common.py#L361-L374
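That is, the device string passed when compiling the model selects the accelerator. A guarded sketch of the OpenVINO 2022-era Python API (the try/except lets the snippet degrade gracefully when OpenVINO or the IR file is not present; the file path is a placeholder):

```python
try:
    from openvino.runtime import Core  # OpenVINO 2022+ Python API

    core = Core()
    # placeholder path: point this at your exported IR .xml
    model = core.read_model("yolov5s_openvino_model/yolov5s.xml")
    # "MYRIAD" targets the NCS2; use "CPU" to compare results between devices
    compiled = core.compile_model(model, device_name="MYRIAD")
except Exception:
    compiled = None  # OpenVINO not installed or IR missing: sketch only
```

Running the same compiled-model code with device_name="CPU" and "MYRIAD" on one image is the quickest way to confirm whether a wrong-results problem is device-specific, as several reports in this thread suggest.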
❔Question
Hello, I have successfully converted my trained YOLOv5 model to an Intermediate Representation to use with the NCS2. However, when I load the model on the NCS2, it gives wrong results: all negative values. Loading the same model on the CPU runs without any problem and gives correct values. The question is: can YOLOv5 be used on the NCS2, and if yes, what are the right steps to make it work correctly? Thanks in advance