Closed buiduchanh closed 4 years ago
Hi, once you have generated the TensorRT file in C++ you can also use it from Python. Although I have never tested it in Python and I don't know how to import the plugins.
@ceccocats @buiduchanh I have completed the interface for calling yolo4 from Python. I might submit the code next week.
@ioir123ju Thanks for your work. Can you share this code and a tutorial?
@ceccocats @buiduchanh I have submitted the PR.
@ioir123ju Thanks for your work. Let me try it and report back soon :D
@ioir123ju @ceccocats
Sorry, I have a question. I exported the darknet weights to layers and ran the debug step, but I cannot find the *rt file. How do I generate it?
Thanks
Please run these commands first:
export TKDNN_MODE=FP16
cd build
./test_yolo4
@ioir123ju Thanks. I have two questions. First, I think ./test_yolo4 is generated from ./tests/darknet/yolo4.cpp, so if I want to export my yolov4 (trained on custom data with different classes), should I change the cfg_path, name path, and bin path in yolov4.cpp? Second, I used the command to generate the rt file successfully, but when I run darknetTR.py I cannot find libdarknetTR.so in the build folder. How do I generate this file? Thanks
I think it is better to put your yolo4 layers in build/yolo4/.
No need to modify yolov4.cpp.
To get the layers:
git clone https://git.hipert.unimore.it/fgatti/darknet.git
cd darknet
make
mkdir layers debug
./darknet export <path-to-cfg-file> <path-to-weights> layers
If you recompile with my CMakeLists.txt, you will have libdarknetTR.so.
@ioir123ju Sorry, I cloned this git from your repo.
git clone https://github.com/ioir123ju/tkDNN
mkdir build
cd build
cmake ..
make
This is the result in the build folder. :( But I can't find libdarknetTR.so. Can you help me?
@ioir123ju can you check this ?
Check your CMakeLists.txt for the following lines:
add_library(darknetTR SHARED demo/demo/darknetTR.cpp)
target_compile_definitions(darknetTR PRIVATE LIB_EXPORTS=1)
target_compile_definitions(darknetTR PRIVATE -DDEMO_EXPORTS)
target_link_libraries(darknetTR tkDNN)
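Once that target builds, the resulting shared library is typically loaded from Python with ctypes, which is what darknetTR.py does. A minimal sketch of the loading step (the path and the helper name here are assumptions for illustration, not code from the repo):

```python
import ctypes
import os

def load_darknet_tr(lib_path="build/libdarknetTR.so"):
    """Load the tkDNN Python binding if it has been built, else return None."""
    if not os.path.exists(lib_path):
        return None  # run `cmake .. && make` in build/ first
    # RTLD_GLOBAL so that symbols (e.g. TensorRT plugin hooks) are
    # visible process-wide, matching how darknet-style bindings load
    return ctypes.CDLL(lib_path, mode=ctypes.RTLD_GLOBAL)
```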
@ioir123ju Sorry, I added these lines after line 56 in CMakeLists.txt, but when I make I get an error. Can you provide your CMakeLists.txt or a solution for this? Thanks
Sorry, I was missing a file. I have submitted again. I can't see your image. Please try again with the new CMakeLists.txt.
@ioir123ju I saw the new config and still get an error.
Yes, it's my fault. I submitted again. o(╥﹏╥)o Check out utils.h.
@ioir123ju :D I built this successfully, but when I run darknetTR.py I get an issue: OSError: ./build/libdarknetTR.so: undefined symbol: _Z7xcallocmm
Can you check the whole flow? Thanks
utils.cpp is updated. Check out utils.cpp.
@ioir123ju Hello. I ran this code successfully. Thanks for your help.
@ioir123ju @ceccocats Hello, I ran this code successfully but I have a small question. When I turn on TKDNN_MODE=FP16 and run ./test_yolo4, I see this log. What is the meaning of the red "Wrongs" lines?
By the way, when I run the export with FP32, everything is normal.
Is there any bug? Please explain it to me. Thanks
@ceccocats @mive93 @ioir123ju Hello, can you explain the red "Wrongs" lines error which I mentioned above? I think it is a problem of different outputs between the darknet model and tensorrt+tkDNN. How do I resolve this issue? Thanks
Inference at FP16 uses half the precision, so it is normal to have a bit of error; if you look, the numbers are not so different.
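The FP16 rounding described here is easy to see with numpy (a generic illustration of half-precision loss, not tkDNN code):

```python
import numpy as np

# an FP32 value carries ~7 significant decimal digits
x32 = np.float32(0.1234567)
# casting to FP16 keeps only ~3 significant decimal digits
x16 = np.float16(x32)
# the absolute error is small but nonzero, which is exactly
# what the "Wrongs" check in the FP16 test log reports
err = abs(float(x16) - float(x32))
```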
@ceccocats Thanks, I understand this. I have a small problem: when comparing the darknet model and the tensorrt+tkDNN (FP16) model on my custom dataset, I saw the precision of large-object classes changes slightly, but the precision of small-object classes drops by ~5-15%. Is that normal? Thanks
Hi @buiduchanh, how did you compute the precisions?
@mive93 Sorry for the late reply. In all cases, I prepare the data and calculate AP following this git: https://github.com/Cartucho/mAP For data preparation:
In both cases, I set the same threshold of 0.3, but the result for small objects decreases by ~5-15% AP. Can you help me resolve this problem? Thanks
If you use FP16, I think the accuracy drop is inevitable
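For context on the AP numbers being compared: mAP tools like the Cartucho/mAP repo linked above match detections to ground truth by IoU (typically at a 0.5 threshold). A minimal sketch of that IoU computation for [xmin, ymin, xmax, ymax] boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two [xmin, ymin, xmax, ymax] boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter) if inter else 0.0
```

Small boxes are more sensitive here: a few pixels of FP16-induced shift change the IoU of a small box much more than that of a large one, which is consistent with the larger AP drop on small objects.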
@ioir123ju @mive93 @ceccocats Hello. I found a bug in this Python inference code. The result of "./demo" with an image path is different from the Python inference code. The main difference is reading the image from the path versus reading it from data. Please see this function. This is the original code from https://github.com/ioir123ju/tkDNN
This is the code that reads the image from the image path.
When reading the image from the image path, the result is more accurate and the same as running "./demo", but when using this code from @ioir123ju, some boxes are missing. Please see the results below.
This is the result from "./demo" or reading the image from the image path.
This is the result from the Python code of @ioir123ju.
Can you check this? @ioir123ju @mive93
Can you compare the difference between the two cv::Mat? And you can check whether their data types are the same.
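On the Python side a cv::Mat surfaces as a numpy array, so the comparison suggested here can be done with a small helper like this (a generic sketch, not code from the repo):

```python
import numpy as np

def compare_mats(a, b):
    """Report whether two image buffers match in dtype, shape, and content."""
    report = {
        "same_dtype": a.dtype == b.dtype,
        "same_shape": a.shape == b.shape,
    }
    if report["same_dtype"] and report["same_shape"]:
        # widen to int32 so the subtraction cannot wrap around for uint8 data
        diff = np.abs(a.astype(np.int32) - b.astype(np.int32))
        report["max_abs_diff"] = int(diff.max())
    return report
```

A nonzero `max_abs_diff` (or a dtype/shape mismatch) between the path-loaded image and the data-loaded image would explain the missing boxes.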
Our demo uses a threshold of 0.3; what is the threshold of the Python version?
@ceccocats I changed the threshold of the Python version to 0.3 for comparison.
@ioir123ju
I found the problem. In your case you use the function self.darknet_image = make_image(input_size, input_size, 3)
with the parameter being the input size of the network, but when I change the parameters to make_image(image_width, image_height, 3)
the result is the same as the demo. I see you have the call memcpy(im.data, pdata, h * w * c),
so if I change it as mentioned above, the speed is reduced by ~15-20%.
Because I'm not familiar with C++, it may be hard for me to debug. Can you check it and share your solution? Thanks
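The mismatch described above is between two buffers: the frame is h×w×c but the network buffer from make_image is input_size×input_size×c, and memcpy copies raw bytes with no resizing, so the frame must already have the network's shape before the copy. A pure-numpy sketch of the idea (the nearest-neighbour resize here is a stand-in for whatever resize the binding actually performs; make_image and memcpy are from the thread, to_network_input is hypothetical):

```python
import numpy as np

def to_network_input(frame, input_size):
    """Resize frame to the square network size so a raw byte copy lines up."""
    h, w = frame.shape[:2]
    rows = np.arange(input_size) * h // input_size  # source row per output row
    cols = np.arange(input_size) * w // input_size  # source col per output col
    resized = frame[rows][:, cols]          # nearest-neighbour resize
    return np.ascontiguousarray(resized)    # contiguous buffer, safe to memcpy
```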
@buiduchanh I solved this bug. Please try again.
@ioir123ju Hi, I hit an issue: if I use image_height and image_width for the image size, detection cannot detect any object. When I use the same value (width = height), it can detect. Can you explain this to me? Thank you.
@trungtv94 This is yolo's requirement. (608x608 / 512x512 / 416x416)
@ioir123ju @ceccocats Is there anything else needed to get the PR merged into the main repo? I'd love to use the Python binding with the latest yolov4-tiny updates.
Thank you
Yes, thank you, I understood.
@ioir123ju
When running the Python API with yolov4-tiny, it does not work. Is there something I should do differently?
@marvision-ai Sorry, I can't see your picture. Show me the log.
@ioir123ju No problem. The code works for yolov4 but not for yolov4-tiny. See below (running on a Xavier):
nvidia@nvidia:~/tkDNN$ python3 darknetTR.py build/yolo4tiny_fp16.rt --video=demo/yolo_test.mp4
build/yolo4tiny_fp16.rt
New NetworkRT (TensorRT v6.01)
Float16 support: 1
Int8 support: 1
DLAs: 2
Cant deserialize Plugin
/home/nvidia/tkDNN/src/NetworkRT.cpp:821
Aborting...
@marvision-ai
The problem is "Cant deserialize Plugin".
The C++ API and Python API call the same code:
detNN->init(net, n_classes, n_batch);
I think the C++ API would have the same problem. Can you try it? Maybe ceccocats can solve this problem.
@ioir123ju Interesting... I know the C++ API for yolov4-tiny works fine. This is a weird error.
@ceccocats @mive93 Do either of you know if this could be caused by the C++ API? I would appreciate your help greatly. Thank you!
@ioir123ju Hi ioir123ju. Thanks for the contribution of this Python version of tkDNN. However, I have trouble converting the 1-batch Python inference code (darknetTR.py) into n-batch Python inference code. Could you give me any clues? I am a greenhand at C++.
Thank you~
@ioir123ju Thanks for your contribution. I am using your example to draw the boxes. I found your box is a little bit different from the standard darknet yolo box. Is that right?
I use the following code to convert the box:
x, y, w, h = bbox
# darknetTR.py position
xmin = int(round(x))
xmax = int(round(x + w))
ymin = int(round(y))
ymax = int(round(y + h))
return xmin, ymin, xmax, ymax
# standard DarkNet box
# xmin = int(round(x - (w / 2)))
# xmax = int(round(x + (w / 2)))
# ymin = int(round(y - (h / 2)))
# ymax = int(round(y + (h / 2)))
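The snippet above suggests the two conventions differ only in where (x, y) sits: darknetTR appears to return the top-left corner, while standard darknet uses the box centre. Both conversions side by side as runnable functions (a sketch based on the snippet, not code from the repo):

```python
def tr_to_corners(bbox):
    """darknetTR convention: (x, y) is the top-left corner of the box."""
    x, y, w, h = bbox
    return int(round(x)), int(round(y)), int(round(x + w)), int(round(y + h))

def darknet_to_corners(bbox):
    """Standard darknet convention: (x, y) is the centre of the box."""
    x, y, w, h = bbox
    return (int(round(x - w / 2)), int(round(y - h / 2)),
            int(round(x + w / 2)), int(round(y + h / 2)))
```

So yes: the same (x, y, w, h) tuple yields boxes offset by half the width and height between the two conventions.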
First, thanks for your hard work. Your repo is very impressive, but I have a question. I'm not familiar with C++, so can you provide code that wraps tkDNN + TensorRT for Python inference? Thanks