jerryhitit opened this issue 6 years ago
Hi Jerry! Thank you for your interest in our work. I don't remember making any modifications to the function get_region_boxes(). Our framework is based on YOLOv2, not the latest YOLOv3. If you replace the corresponding C and header files in the YOLOv2 repository with the ones we provide, you should be able to make and test successfully using the commands in the .sh files. Let me know if you still have any questions.
Inspired by your instructions, I found the older version of the darknet repository and replaced the corresponding C header file and other source files, and it compiles successfully. But in the inference step, the given weight file yolo-voc_final.weights cannot even predict on frm.png, the real image from your challenge in the _CAM_CALIPL\data folder:
![frm](https://user-images.githubusercontent.com/26237827/43766615-80058c18-9a65-11e8-958a-37e5edfa160e.png)
I used this command to test the slightly modified network structure yolo-voc.cfg and its trained weight file:
./darknet detect cfg/yolo-voc.cfg ./yolo-voc_final.weights /home/eini/frm.png
In contrast, using the original YOLOv2 structure and the corresponding weights actually produces some results.
Furthermore, the following command, which is supposed to produce vehicle detection results, still gives no useful output:
./darknet procimgflr cfg/aicity.data cfg/yolo-voc.cfg yolo-voc_final.weights /home/eini/img1/ /home/eini/detimg1/ /home/eini/img1/det.txt .1 .5 0 1673
Only the prediction time is printed, without any detections.
In this situation, how can I fix this problem? Do I need to retrain the model?
Thanks a lot! Jerry
Hello Jerry! Sorry about the delay in response. We tested the model again, and the results should be fine. You can view a demo here. If you still have any issues, I suggest you switch to the latest YOLOv3. The precision of the default model is pretty good. You can use our code to transform the output into the required format of MOTChallenge. You can also train your own model depending on the data you are using. Good luck with your research project.
Hi,
Can you tell me where I can find the version of the YOLOv2 code that your code is based on? I see there are only three branches of darknet: master, tjluyao-master, and yolov3. If it is based on a specific commit, can you tell me the commit ID?
I chose the Dec 27, 2017 commit as the base repository to test YOLO_VEH_IPL. The repository link is here: https://github.com/pjreddie/darknet/tree/6e7914530939aeaa4e6cc1a692b15e23d5173ae0. You can download this repository and try it. After replacing the corresponding header file and source files, the compilation should work. However, I did not end up using the YOLOv2 version for the detection jobs; I switched to YOLOv3, only adding the output function and retraining it on our own labeled image set. It works quite well.
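In case it is useful, the output-function idea is roughly the sketch below. The calls follow the public darknet.h API of the YOLOv3-era repository, but the dump format and the variable names are only illustrative, not my exact code:

```c
#include "darknet.h"
#include <stdio.h>

/* Sketch: run a YOLOv3 model on one image and dump every box above a
 * confidence threshold as "class prob x y w h" (boxes are relative to
 * the image size).  Illustrative only; the real output format depends
 * on the tracker that consumes it. */
void detect_and_dump(char *cfgfile, char *weightfile, char *imgfile,
                     float thresh, FILE *out)
{
    network *net = load_network(cfgfile, weightfile, 0);
    set_batch_network(net, 1);

    image im = load_image_color(imgfile, 0, 0);
    image sized = letterbox_image(im, net->w, net->h);
    network_predict(net, sized.data);

    layer l = net->layers[net->n - 1];
    int nboxes = 0;
    detection *dets = get_network_boxes(net, im.w, im.h, thresh, 0.5, 0, 1, &nboxes);
    do_nms_sort(dets, nboxes, l.classes, 0.45);

    for (int i = 0; i < nboxes; ++i) {
        for (int c = 0; c < l.classes; ++c) {
            if (dets[i].prob[c] > thresh) {
                box b = dets[i].bbox;
                fprintf(out, "%d %.4f %.4f %.4f %.4f %.4f\n",
                        c, dets[i].prob[c], b.x, b.y, b.w, b.h);
            }
        }
    }

    free_detections(dets, nboxes);
    free_image(sized);
    free_image(im);
}
```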
@jerryhitit Thanks for the information.
@jerryhitit did you change the anchor points while training YOLOv3? Also, did you label the NVIDIA data yourself?
I did not change the anchor points of YOLOv3, and the dataset I used to train the detector is our own video surveillance images. I don't have the NVIDIA challenge data. @sfarkya
@jerryhitit Thanks for the info.
Dear all, sorry about all the confusion caused. Because YOLOv3 had not been released at the time of this challenge, we used the old version (YOLOv2). I just modified both Track1/YOLO_VEH_IPL and Track3/YOLO_LP_IPL/ to include all the necessary files for compilation. Now both directories should be self-contained, which means you don't need to download YOLOv2 and substitute the corresponding files with our versions. Hope our repository will be helpful for your research and work.
@zhengthomastang In which file is the function output_detections that you declare in darknet.h implemented? I did not find it; is it a system function?
@LionelLeee At Line 310 of this file: https://github.com/zhengthomastang/2018AICity_TeamUW/blob/master/Track1/YOLO_VEH_IPL/src/image.c
Could you please describe the steps for replacing YOLOv2 with YOLOv3?
@Ujang24 The easiest way is to run the pretrained models (ImageNet or COCO) and change the output format to match our definition. Or you can follow the YOLOv3 tutorial to train your own models.
Thank you for your interesting code. I have used "./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights
@Ali-Parsa You can save the detection results in MOTChallenge format similar to here: https://github.com/zhengthomastang/2018AICity_TeamUW/blob/master/Track1/3_YOLO_VEH/src/image.c#L381-L383
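For reference, each line in a MOTChallenge det.txt is frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z, where raw detections use -1 for the track id and the 3-D world coordinates. A minimal sketch of such a write (illustrative names only, not the exact code at the lines linked above):

```c
#include <stdio.h>

/* Append one detection to a det.txt file in MOTChallenge format:
 * frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z.
 * Raw detections use -1 for the track id and the world coordinates.
 * Names are illustrative, not taken from image.c. */
static void write_mot_detection(FILE *fp, int frame_idx, float left, float top,
                                float width, float height, float conf)
{
    fprintf(fp, "%d,-1,%.2f,%.2f,%.2f,%.2f,%.4f,-1,-1,-1\n",
            frame_idx, left, top, width, height, conf);
}
```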
In the second component of Track 1, YOLO_VEH_IPL, the function void get_region_boxes() is slightly different from the original definition in darknet, but its implementation cannot be found in this repository.
In the header file, the function get_region_boxes is declared as follows:
void get_region_boxes(layer l, int w, int h, int netw, int neth, float thresh, float **probs, box *boxes, float **masks, int only_objectness, int *map, float tree_thresh, int relative);
The original function get_region_boxes cannot be found in https://github.com/pjreddie/darknet either. However, I found an implementation in this repository: https://github.com/hgpvision/darknet, and its definition of get_region_boxes is slightly different from yours:
void get_region_boxes(layer l, int w, int h, int netw, int neth, float thresh, float **probs, box *boxes, int only_objectness, int *map, float tree_thresh, int relative);
The only difference is the float **masks parameter. Considering that your other two customised functions, draw_detections and output_detections, both use float **masks, I wonder whether the customised function get_region_boxes is missing from this repository? Thanks a lot!
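PS: for now I work around the mismatch with a thin shim that passes a NULL masks pointer (my own guess at the intended usage, not code from this repository), assuming the masks-aware implementation only fills masks when the pointer is non-NULL:

```c
#include "darknet.h"

/* Hypothetical shim (not code from this repository): call the
 * masks-aware get_region_boxes() from code written against the older
 * signature by passing a NULL masks pointer, assuming the
 * implementation skips mask output when the pointer is NULL. */
void get_region_boxes_no_masks(layer l, int w, int h, int netw, int neth,
                               float thresh, float **probs, box *boxes,
                               int only_objectness, int *map,
                               float tree_thresh, int relative)
{
    get_region_boxes(l, w, h, netw, neth, thresh, probs, boxes,
                     0 /* masks */, only_objectness, map,
                     tree_thresh, relative);
}
```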