Closed: dhirajpatnaik16297 closed this issue 3 years ago.
Hi,
You need to edit the yololayer.h and yolov5.cpp files in tensorrtx/yolov5 according to you custom model before generate engine. Copy the edited yololayer.h file from tensorrtx/yolov5 and replace the nvdsinfer_custom_impl_Yolo/yololayer.h file before compile.
Hi, thanks for replying. I need to do this for the ONNX -> engine path, right? Could you also guide me through the particular changes I need to make? And please let me know how to convert a custom model to .wts. I am very new to this, so your help is appreciated. Thanks
Yes, I am following it. There are two separate approaches I am trying.
For (2) I will surely follow what you said. Could you please suggest some ways to address the issue in (1).
Send me your output log from wts conversion
Hi, I am not getting any log during the conversion.
Sorry, I don't know about this error you are getting.
Ok, let me explain. I followed the steps provided: I copied gen_wts.py to my folder and used it on my custom one-class model best.pt to convert it to best.wts, but best.wts is never created, even though the program runs without errors. In contrast, when I convert the pretrained yolov5s.pt with the same script, yolov5s.wts is created. That is the issue. Once I get the .wts file, I will make the changes to generate the TensorRT engine file. So how do I get the .wts file for a custom model?
Send your model file to my e-mail (available in my GitHub profile) and I will test it.
You can access it from here. I have sent a mail too. https://drive.google.com/file/d/11-SZBr4rgXV3oZsB3f3FEu3Q8Xm4h0cM/view?usp=sharing
I sent the converted wts file to your email, please check.
Thanks a lot, I will check and let you know. Please send the code so I can use it for other models, or let me know what changes I should make.
Rename best.pt to the base model name (yolov5s.pt, for example) and run the command.
Hi Marcos, apologies for getting back so late. I was able to run it after renaming, thanks a lot. Now I am trying to add an OCR model (also a YOLOv5 model) on the output of the detector model. I followed the same conversion process and it succeeded, then I added the secondary gie, but I am not able to run it. Could you please help out? Also, I am trying to deploy the OCR model on a Triton server on a g4 instance; if you could throw some light on that, that would be great. Thanks
Hi Marcos, I have tried what you mentioned. I am getting the following error:
```
deepstream-app -c deepstream_app_config.txt
Using winsys: x11
0:00:04.539955749 12050 0x1efdad90 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:
1 OUTPUT kFLOAT prob 6001x1x1
0:00:04.540151848 12050 0x1efdad90 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:
1 OUTPUT kFLOAT prob 6001x1x1
0:00:05.565355958 12050 0x1efdad90 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:

Runtime commands:
  h: Print this help
  q: Quit
  p: Pause
  r: Resume

NOTE: To expand a source in the 2D tiled display and view object details,
      left-click on the source. To go back to the tiled display,
      right-click anywhere on the window.

PERF: FPS 0 (Avg)
PERF: 0.00 (0.00)
** INFO:
Opening in BLOCKING MODE
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO:
PERF: 13.09 (11.73)
/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:3494: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
0:00:06.785237835 12050 0x1eade400 WARN nvinfer gstnvinfer.cpp:1277:convert_batch_and_push_to_input_thread:
```
Kindly let me know what is wrong. Thanks.
Add these lines to config_infer_secondary1.txt in the [property] section:

```
input-object-min-width=40
input-object-min-height=40
```
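For context, these keys sit under the [property] section of the secondary model's config file; DeepStream skips detector crops smaller than these minimums, which is why an OCR secondary gie can silently produce nothing. A hedged sketch of the surrounding section (the process-mode and operate-on-gie-id values are typical secondary-gie settings, shown here as placeholders you must adapt to your own pipeline):

```ini
[property]
# crops smaller than this are never sent to the secondary model
input-object-min-width=40
input-object-min-height=40
# run on crops from the primary detector, not on full frames
# (illustrative values; match them to your pipeline's gie ids)
process-mode=2
operate-on-gie-id=1
```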
I added these and got it running, but I cannot see the output of the second model (OCR) in the output video; it only detects the number plate. I tried reducing and enlarging the text size but could not find anything. Also, if I add batch-size to the individual config_infer files I get an error, so I removed it. Please let me know where else I am going wrong. Thanks
Try lower values for the parameters I sent.
No, still no luck, even after lowering or raising the values. Lowering them makes the program get stuck after a few seconds.
Please let me know where the problem lies. Thanks
Make the change below in the TensorRTX yololayer.cu file before conversion (engine generation), in your OCR model only.

Change lines 164 to:

```cpp
const char* YoloLayerPlugin::getPluginVersion() const
{
    return "2";
}
```

Change line 279 to:

```cpp
const char* YoloPluginCreator::getPluginVersion() const
{
    return "2";
}
```
I changed these in the sgie1 folder for DeepStream, but it did not help. I then made the changes in tensorrtx and rebuilt it. After the changes, running ./yolov5 -s yolov5s.wts yolov5s.engine s gives:

```
Loading weights: yolov5s.wts
[06/18/2021-23:15:48] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin YoloLayer_TRT version 1
Segmentation fault (core dumped)
```

Should it return the number of classes the model has?
Did you recompile the tensorrtx/yolov5?
Yes, I did. I cloned a fresh tensorrtx repo, went to yolov5, made a build directory, then ran "cmake .." and "make" after the changes in yololayer.h and yololayer.cu. I have a doubt: in the changes you mentioned, what is the "2" in return "2"? Is it the number of classes?
It's the plugin version (id). If you have two plugins (two models) with the same version, it will break in DeepStream (both models will end up using the same compiled lib). I will check on it.
Ok sure. Please let me know. Thanks
Did you change the two lines (164 and 279)?
I found the problem. Please change tensorrtx/yolov5/common.hpp line 272 to:

```cpp
auto creator = getPluginRegistry()->getPluginCreator("YoloLayer_TRT", "2");
```

Recompile and try again.
> Did you change the two lines (164 and 279)?

Yes, I did, both in the tensorrtx files and in the DeepStream files.

> I found the problem, please change tensorrtx/yolov5/common.hpp line 272 to:
> `auto creator = getPluginRegistry()->getPluginCreator("YoloLayer_TRT", "2");`
> Recompile and try again.

Thanks. The engine file is now created, but still no output from the OCR model is shown in the video.
Hi Marcos, I am still not getting the OCR output in the output video even after making the changes. I must have closed the issue by mistake.
The converted YOLOv5 model has lower accuracy than the PyTorch model; maybe that's the problem.
These are small models. Would training a medium or large model help?
You need to train and test.
Ok, will try for sure. Is there any other way, like .pt -> .onnx -> .engine? I ask because I have tested the ONNX models for YOLOv5 small and medium and deployed them on a Triton server in AWS. They work great, but I could not get them working for nano.
If you convert to ONNX, it probably won't run with the files from this repo; the model output will change.
Yes, that is correct, but could you please help out with that, or provide some material from which I can take it forward?
You need to look at the NVIDIA forums for the output format of the ONNX model.
I trained two custom models, YOLOv5s and YOLOv5m, for number plate detection, then used the ONNX models of both to check the detections in DeepStream 5.1. I followed the steps provided but could not get the detections right: it always detects the top-left corner as a number plate. A custom bbox parser function for this would be really helpful.
I have also tried the conversion to .wts, but the custom model does not convert, whereas the pretrained yolov5s.pt converts easily. Please let me know where I am going wrong. Thanks in advance