pjreddie / darknet

Convolutional Neural Networks
http://pjreddie.com/darknet/

How to run YOLO on multiple images and save predictions to a txt file #903

Open FOX111 opened 6 years ago

FOX111 commented 6 years ago

Demonstration of YOLO is impressive! However, I'm wondering if there is a way to get predictions for a batch of images, say from a given directory, and save the names of the detected classes to a txt file? I think it should be possible, but I'm unfamiliar with darknet, so any advice will be much appreciated!

AlexeyAB commented 6 years ago

You can use this repo: https://github.com/AlexeyAB/darknet

To process a list of images data/train.txt and save results of detection to result.txt use:

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt
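
In case you need to generate such a list file first, here is a minimal sketch in Python; the directory path and extensions are placeholders, not anything darknet requires:

    # build_image_list.py -- minimal sketch: write absolute image paths, one per line
    from pathlib import Path

    image_dir = Path("data/images")          # placeholder: your image directory
    extensions = {".jpg", ".jpeg", ".png"}   # placeholder: extensions you care about

    with open("data/train.txt", "w") as f:
        for p in sorted(image_dir.rglob("*")):
            if p.suffix.lower() in extensions:
                f.write(str(p.resolve()) + "\n")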

EscVM commented 6 years ago

Hi @AlexeyAB, is there a way to create a separate txt file for each prediction? Parsing the single result.txt file is not so easy. I would like to use this project, mAP, which requires all predictions in separate files. Thx. P.S.: I'm using YOLOv3

AlexeyAB commented 6 years ago

@EscVM Hi, to get mAP you can use this repo: https://github.com/AlexeyAB/darknet and a command such as: ./darknet detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights


If you want to get mAP by this repo https://github.com/Cartucho/mAP then try to ask how to obtain predictions in separated files in the Issues: https://github.com/Cartucho/mAP/issues

EscVM commented 6 years ago

Ok, thank you. So, as far as you know, no one has already written code that parses that result.txt file?

EscVM commented 6 years ago

https://github.com/Cartucho/mAP/issues/32
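
For later readers who still want to parse result.txt directly: a rough sketch, assuming the -ext_output line format quoted further down in this thread ("class: 97% (left_x: ... top_y: ... width: ... height: ...)" detections under a "<path>: Predicted in ..." line). The parsing logic and regex are my own, not part of darknet:

    # parse_result.py -- rough sketch for parsing a darknet -ext_output log (format assumed, see note above)
    import re

    DET_RE = re.compile(
        r"(?P<label>[^:()]+):\s*(?P<conf>\d+)%\s*"
        r"\(left_x:\s*(?P<x>-?\d+)\s*top_y:\s*(?P<y>-?\d+)\s*"
        r"width:\s*(?P<w>-?\d+)\s*height:\s*(?P<h>-?\d+)\)"
    )

    detections = {}           # image path -> list of (label, confidence, (x, y, w, h))
    current = None
    with open("result.txt") as f:
        for line in f:
            if ": Predicted in" in line:
                # image header line, e.g. "Enter Image Path: /path/img.png: Predicted in 0.0168 seconds."
                current = line.split(": Predicted in")[0].replace("Enter Image Path:", "").strip()
                detections[current] = []
                continue
            m = DET_RE.search(line)
            if m and current is not None:
                box = tuple(int(m.group(k)) for k in ("x", "y", "w", "h"))
                detections[current].append((m.group("label").strip(), int(m.group("conf")), box))

    for path, dets in detections.items():
        print(path, dets)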

wangchu1 commented 5 years ago

Try this command: ./darknet detector valid cfg/voc.data yolo-voc.cfg yolo-voc.weights

If you take a quick look at the function "validate_detector" in darknet/src/detector.c, it actually saves detection results for every image in the validation list defined in your data cfg file. So what you can do is simply modify your data cfg file to point to your own batch of images. Its output is much cleaner compared to ./darknet detector test.

Here is an example output from the valid command:

    00082 0.999969 504.637390 651.370789 610.118347 736.534363
    00083 0.999979 524.560852 676.153137 664.041809 758.356995
    00084 0.999882 556.716858 706.351868 727.970886 782.629456
    00085 0.999651 588.336853 747.259827 803.692688 815.355164
    00086 0.999325 641.701050 805.085388 901.960693 843.564392
    00087 0.999820 730.968018 745.703369 953.834717 817.448608
    00088 0.999969 810.657593 706.231934 989.481201 785.879639

Whereas here is what the test command gives you:

    Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00082.png: Predicted in 0.016813 seconds.
    trigger: 100% (left_x: 504 top_y: 650 width: 105 height: 85)
    Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00083.png: Predicted in 0.016660 seconds.
    trigger: 100% (left_x: 524 top_y: 675 width: 139 height: 82)
    Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00084.png: Predicted in 0.016925 seconds.
    trigger: 100% (left_x: 556 top_y: 705 width: 171 height: 76)
    Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00085.png: Predicted in 0.017895 seconds.
    trigger: 100% (left_x: 587 top_y: 746 width: 215 height: 68)
    Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00086.png: Predicted in 0.017027 seconds.
    trigger: 100% (left_x: 641 top_y: 804 width: 260 height: 38)
    Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00087.png: Predicted in 0.016829 seconds.
    trigger: 100% (left_x: 730 top_y: 745 width: 223 height: 72)

Might be useful to someone who is new to yolo like me :)
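
If you go the valid route, here is a rough sketch for parsing that output, assuming the six columns are image id, confidence, and the box corners as in the sample above; the result file name is a placeholder:

    # parse_valid_results.py -- minimal sketch, assuming one detection per line:
    # <image_id> <confidence> <x_min> <y_min> <x_max> <y_max>
    rows = []
    with open("results/valid_output.txt") as f:   # placeholder file name
        for line in f:
            parts = line.split()
            if len(parts) != 6:
                continue
            image_id = parts[0]
            conf = float(parts[1])
            x1, y1, x2, y2 = map(float, parts[2:])
            rows.append((image_id, conf, (x1, y1, x2, y2)))

    # keep only confident detections, e.g. above 0.5
    confident = [r for r in rows if r[1] >= 0.5]
    print(f"{len(confident)} detections above threshold")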

buzdarbalooch commented 5 years ago

@fate3439 But how will it work in the case of multiple objects in a frame?

    00082 0.999969 504.637390 651.370789 610.118347 736.534363

I know 00082 is the frame number, 0.999969 is the confidence score, and the rest are bbox coordinates. Just consider the case below:

    Enter Image Path: /home/akhan/yolo_data/labels/001471.jpg: Predicted in 0.000000 milli-seconds.
    handicap: 87% car: 91% handicap: 84%
    0.411916 0.405841 0.209267 0.074270 0.375321 0.649909 0.324759 0.150611 0.442939 0.200636 0.348126 0.348058

buzdarbalooch commented 5 years ago

And when I execute ./darknet detector valid cfg/obj.data cfg/yolo.cfg backup/yolo_10000.weights result.txt

I get this: [image]

buzdarbalooch commented 5 years ago

@fate3439 Hi, now I understand: it saves results there, and it even saves results where the confidence score is less than a specific threshold. The only confusing aspect in my case is how to give the images the proper path, since it seems to be selecting images randomly. See the images: [image]

srhtyldz commented 5 years ago

(Quoting @buzdarbalooch's comment above.)

Where is the folder in which the results are saved?

    mask_scale: Using default '1.000000'
    Total BFLOPS 62.669
    Loading weights from yolov2_30000.weights...
    seen 64
    Done!
    Learning Rate: 0.001, Momentum: 0.9, Decay: 0.0005
    eval: Using default 'voc'
    4
    Segmentation fault (core dumped)

I got this error. Did anyone else get this error?

srhtyldz commented 5 years ago

(Quoting @wangchu1's ./darknet detector valid suggestion and example output above.)

I could not get these outputs.

I got a 'Segmentation fault (core dumped)' error. Can you please help me?

spencerkraisler commented 5 years ago

You can use this repo: https://github.com/AlexeyAB/darknet

To process a list of images data/train.txt and save results of detection to result.txt use:

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

I tried this and got "Cannot load image "-dont_show"" as an error.

yustiks commented 5 years ago

You can use this repo: https://github.com/AlexeyAB/darknet To process a list of images data/train.txt and save results of detection to result.txt use: ./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

I tried this and got "Cannot load image "-dont_show"" as an error.

I also had the same error. Try reinstalling darknet and running make again. That helped me.

prateekgupta891 commented 5 years ago

You can use this repo: https://github.com/AlexeyAB/darknet

To process a list of images data/train.txt and save results of detection to result.txt use:

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

I am running this command with my own network and weights, but I'm getting an error. This is the command I tried running, with its output: [image]

This is what the test.txt file contains: [image]

It would be great if you could help me out.

Vic-TheGreat commented 5 years ago

I am currently using another repo, forked from AlexeyAB's YOLOv3 darknet, that makes it much easier to store all your input images in one folder, get your output images in another folder, and get a text file with the confidence percentages of all predictions.

https://github.com/Vic-TheGreat/VG_AlexeyAB_darknet.git

it's quite easy to follow.

CHEERS!!

AnaRhisT94 commented 4 years ago

@Vic-TheGreat But it doesn't work with custom weights for saving the images with the predicted bboxes, right?

daddydrac commented 4 years ago

@EscVM Hi, To get mAP - you can use this repo: https://github.com/AlexeyAB/darknet and such command: ./darknet detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights

If you want to get mAP by this repo https://github.com/Cartucho/mAP then try to ask how to obtain predictions in separated files in the Issues: https://github.com/Cartucho/mAP/issues

I get this error:

calculation mAP (mean average precision)...
Couldn't open file: coco_testdev

daddydrac commented 4 years ago

How do I make it write a different image with the bounding boxes?

(So it saves as prediction-1.jpg, prediction-2.jpg, prediction-3.jpg... and so forth)

Leprechault commented 4 years ago

Ok, thank you. So, no one, that you know, has already made a code that parse that result.txt file?

I am trying to get the mAP in a txt file using ./darknet detector map obj.data obj.cfg backup/obj_100.weights -map | tee result_map.txt, but it doesn't work. Any ideas?

YARRS commented 4 years ago

My training on custom objects is ongoing with this repo. Can anyone help me figure out how to stop the training and where to find the trained weights? (P.S.: sorry if I posted this in the wrong place; I'm new to using this section.)

tylertroy commented 4 years ago

(Quoting @wangchu1's ./darknet detector valid suggestion and example output above.)

Just to expand on this answer with a more explicit description of usage. Note that I am using the most recent version of darknet as of git commit 0ff2343.

  1. Create a text file containing the absolute paths of the images to run inference on.

    find  /path/to/images -type f > /path/to/images_list.txt 
  2. Modify coco.data config file for your task which defines where to find the list of images for evaluation and where to store the results.

    # From darknet root directory
    sed -e 's-coco_testdev-/path/to/image_list.txt-' \
    -e 's-/home/pjreddie/backup/-/directory/to/save/results-' \
    cfg/coco.data > cfg/my_eval_config.data
  3. Modify src/detector.c to include the image names as the id. Change this line (at or near line 449):

    sprintf(buff, "{\"image_id\":%d, \"category_id\":%d, \"bbox\":[%f, %f, %f, %f], \"score\":%f},\n", image_id, coco_ids[j], bx, by, bw, bh, dets[i].prob[j]);

    to

    sprintf(buff, "{\"image_id\":\"%s\", \"category_id\":%d, \"bbox\":[%f, %f, %f, %f], \"score\":%f},\n", image_path, coco_ids[j], bx, by, bw, bh, dets[i].prob[j]);

  4. Remake.

    make
  5. Run inference on images using your config file.

    ./darknet detector valid cfg/my_eval_config.data cfg/yolov3.cfg yolov3.weights 
  6. Output will be saved as a JSON-like file with the name coco_results.json to the results path you specified in your cfg/my_eval_config.data file in step 2. In this case we called it /directory/to/save/results.

  7. The output JSON includes all detected objects regardless of their probability score. This can be used as is and parsed at your leisure with whatever method you please. Nevertheless, I include a post-processing step here to extract objects above a given threshold. You will need jq for this step.

jq  '[ .[] | select(.score >= 0.1) ]' results/coco_results.json > results/yolo_objects_0p1thresh.json
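
If you would rather skip jq, here is a roughly equivalent sketch in Python, with the same assumptions about coco_results.json as above; since the file is described as JSON-like, you may first need to fix a trailing comma or wrap the records in brackets before json.load accepts it:

    # filter_results.py -- minimal Python equivalent of the jq filter above
    import json

    with open("results/coco_results.json") as f:
        results = json.load(f)   # expected: list of {"image_id", "category_id", "bbox", "score"}

    kept = [r for r in results if r["score"] >= 0.1]

    with open("results/yolo_objects_0p1thresh.json", "w") as f:
        json.dump(kept, f, indent=2)

    print(f"kept {len(kept)} of {len(results)} detections")
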
SaGPaR commented 4 years ago

(Replying to @prateekgupta891's question above about the contents of test.txt.)

The text file should contain paths like /data/obj ... and so on; all the preceding junk should be removed.

YARRS commented 4 years ago

(Quoting @tylertroy's step-by-step instructions above.)

Thank you very much, sir. It helped a lot, and my training and testing completed successfully.

Can you help with how to crop the detected region while testing? I found a suggested fix in another thread, modifying image.c, here: https://github.com/pjreddie/darknet/issues/1673#issuecomment-531499894

But even after modifying it, it didn't work for me (I rebuilt the project after the changes).

Thanks in Advance.
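
On the cropping question, one low-effort alternative to patching image.c is to crop afterwards with OpenCV, using the left_x/top_y/width/height numbers that -ext_output prints. A minimal sketch; the paths are placeholders and the box values are just the example numbers from earlier in this thread:

    # crop_detection.py -- minimal sketch: crop a detected box from an image with OpenCV
    import os
    import cv2

    img = cv2.imread("data/obj/frame_000001.jpg")        # placeholder image path
    left_x, top_y, width, height = 504, 650, 105, 85     # values taken from an -ext_output line

    h, w = img.shape[:2]
    x1, y1 = max(0, left_x), max(0, top_y)
    x2, y2 = min(w, left_x + width), min(h, top_y + height)

    os.makedirs("crops", exist_ok=True)
    crop = img[y1:y2, x1:x2]
    cv2.imwrite("crops/frame_000001_crop.jpg", crop)      # placeholder output path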

flydragon2018 commented 4 years ago

(Quoting @wangchu1's ./darknet detector valid suggestion and example output above.)

How do I do this with custom data?

What should eval=coco be replaced with?

thanks

peijason commented 4 years ago

You can use this repo: https://github.com/AlexeyAB/darknet

To process a list of images data/train.txt and save results of detection to result.txt use:

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

Hi @AlexeyAB ... result.txt is successfully generated, but how do I visualise the results? I mean, is there any existing script to read this result.txt? Lazy me.... ^_^

Thank you

b4da commented 4 years ago

@AlexeyAB I have tried running the command you suggested:

"./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt"

I made modifications to get it to run with the system I'm using: "./darknet detector test cfg/voc.data cfg/yolo-voc.cfg ../../YOLO-weights/yolov4.weights -dont_show -ext_output < data/train.txt > result.txt"

The result: https://drive.google.com/drive/folders/1O1JtsnLQrh2MNKPGz8Nn76GJ2ej4oXyK?usp=sharing - 'see result.png'

My build: https://drive.google.com/drive/folders/1O1JtsnLQrh2MNKPGz8Nn76GJ2ej4oXyK?usp=sharing -- see 'build.png'

After I run "cat result.txt", the file output is empty.

Thank you for your help, and please let me know what i can do to resolve this error.

uzziiqureshi commented 3 years ago

You can use this repo: https://github.com/AlexeyAB/darknet

To process a list of images data/train.txt and save results of detection to result.txt use:

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

uzziiqureshi commented 3 years ago

Hi, I would like to ask whether I can get the total number of detections of a specific class after running all the images at once. How do I get the total number of detections in those images? Please help, I need it for a project.
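
One way to get per-class totals is to count the class labels while reading result.txt. A rough sketch, assuming the -ext_output line format quoted earlier in this thread; the label/line pattern is my own assumption:

    # count_detections.py -- minimal sketch: total detections per class from result.txt
    import re
    from collections import Counter

    DET_RE = re.compile(r"^\s*(?P<label>[^:]+):\s*\d+%\s*\(left_x:", re.MULTILINE)

    with open("result.txt") as f:
        text = f.read()

    totals = Counter(m.group("label").strip() for m in DET_RE.finditer(text))
    for label, n in totals.most_common():
        print(f"{label}: {n}")

    # append the totals to the bottom of the same file, as asked above
    with open("result.txt", "a") as f:
        f.write("\ntotals per class\n")
        for label, n in totals.most_common():
            f.write(f"{label}: {n}\n")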

joanna28-web commented 3 years ago

I created a simple command that saves all predictions on images from the folder test_set:

    for i in test_set/*.jpg; do ./darknet detector test obj.data yolov3.cfg yolov3_10000.weights "$i" -dont_show; mv predictions.jpg "${i%.jpg}"_det.jpg; done

When it has finished executing, the folder test_set contains the images with predictions, whose names start with the same name as the original image and end with "_det.jpg". You can then move these images to a folder "predictions", for example with mv test_set/*_det.jpg predictions. This code was run on Ubuntu.
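
If that bash loop is awkward on your platform (Windows is asked about just below, and Colab later in the thread), here is a rough cross-platform sketch in Python. It only wraps the same per-image darknet call described above and assumes darknet still writes its rendered output to predictions.jpg in the working directory; the cfg/weights names are the ones from the loop above:

    # batch_predict.py -- minimal sketch: run darknet per image and keep each predictions.jpg
    import shutil
    import subprocess
    from pathlib import Path

    darknet = "./darknet"                      # or "darknet.exe" on Windows
    test_dir = Path("test_set")                # placeholder folder of .jpg images

    for img in sorted(test_dir.glob("*.jpg")):
        subprocess.run(
            [darknet, "detector", "test", "obj.data", "yolov3.cfg",
             "yolov3_10000.weights", str(img), "-dont_show"],
            check=True,
        )
        out = img.with_name(img.stem + "_det.jpg")
        shutil.move("predictions.jpg", str(out))   # same rename the bash loop does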

uzziiqureshi commented 3 years ago

(Quoting @joanna28-web's loop above.)

uzziiqureshi commented 3 years ago

Hi, can you please give me the command for Windows, since I'm running darknet on Windows? What I want: I'm detecting a class called cans. Once I run the command you mentioned above on a bunch of images, it creates the txt file, but I would like to go through that txt file and output the total number of cans at the bottom of it. Can you please help me with that?

SobiaSaud commented 3 years ago

If we are using a TensorFlow Python implementation of YOLOv4, do we still need the txt file?

Alaa-Ibrahim173 commented 3 years ago

How do I write YOLOv4 detection results for images in .txt format?

aishaalowais commented 3 years ago

(Quoting @joanna28-web's loop above.)

This is great! Many thanks. How can I add an if statement? Meaning that I only want the code to save the image if it detected an object of class A.
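
A rough way to get that if-statement behaviour, building on the loop above: run darknet per image with -ext_output, check its printed detections for your class, and only then keep the rendered image. The class name, cfg/weights and file names are placeholders, and it assumes the detection lines go to stdout (which is what the > result.txt redirect earlier in this thread relies on):

    # save_if_class.py -- minimal sketch: only keep the rendered image when class A is detected
    import shutil
    import subprocess
    from pathlib import Path

    target_class = "classA"                     # placeholder class name
    for img in sorted(Path("test_set").glob("*.jpg")):
        proc = subprocess.run(
            ["./darknet", "detector", "test", "obj.data", "yolov3.cfg",
             "yolov3_10000.weights", str(img), "-dont_show", "-ext_output"],
            capture_output=True, text=True,
        )
        if f"{target_class}:" in proc.stdout:   # a detection line looks like "classA: 97% (..."
            shutil.move("predictions.jpg", str(img.with_name(img.stem + "_det.jpg")))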

mkarzhaubayeva commented 3 years ago

Hello @joehoeller! This might not be the best timing, but I'm struggling with this now. Did you figure out how to save the predictions as separate images? Question for all: I tried the above-mentioned ways to run darknet on multiple images, but it either detects only the first image in the .txt file that I provide, or saves to results.txt only the terminal messages like 'Loading images ...'. How do I make it work properly?

daddydrac commented 3 years ago

See this link here: https://stackabuse.com/writing-files-using-python/


tomayoteebroo commented 2 years ago

After you get output.txt, you can process your own bboxes. Pseudocode (Python + OpenCV):

  1. Clean the output text file, then format/dump/write it into a json file (personal preference) so that you can read the json file in another script or whatever application. Replace 'license_plate' with your own objects; if you are dealing with multiple objects, you need to write your own logic to filter until the next image.

    import cv2
    import time
    import json

    f = open('output.txt', 'r')
    lines = f.readlines()
    imgdict = {}
    imgnames = []
    paths = []
    for index, line in enumerate(lines):
        if '/' in line:
            path = line
            path = path.split(':')
            imgdict[path[0]] = []
            names = path[0].split('/')
            for n in names:
                if 'jpg' in n:
                    imgdict[path[0]] = {'name': n}
            if 'license_plate' in lines[index+1]:
                for i in range(1, 5):
                    if 'license_plate' in lines[index+i]:
                        lp = lines[index+i].split()
                        score = lp[1]
                        left_x = lp[3]
                        top_y = lp[5]
                        width = lp[7]
                        height = lp[9]
                        height = height.strip(')')
                        license_plate = [score, left_x, top_y, width, height]
                        imgdict[path[0]]['license_plate'] = license_plate
                    else:
                        print(f'no more: {lines[index+i]}')
                        break
            else:
                print('this image has no detected lp')
                imgdict[path[0]]['license_plate'] = [0, 0, 0, 0, 0]

    with open('output.json', 'w') as f:
        json.dump(imgdict, f, ensure_ascii=False, indent=4)

  2. Draw the bounding boxes after reading from the exported json file:

    import cv2
    import time
    import json

    f = open('output.json')
    data = json.load(f)
    print(f'{type(data)}')

    FN = 0
    TP = 0
    for d in data:
        img = cv2.imread(f'{d}')
        imgname = data[d]['name']
        lp = data[d]['license_plate']
        score = lp[0]
        box = [int(lp[1]), int(lp[2]), int(lp[3]), int(lp[4])]
        print(f'{imgname} {lp} {score} {box}')
        if box[2] == 0:
            FN += 1
            cv2.imwrite(f'drawbboxes/{imgname}', img)
            cv2.imwrite(f'drawbboxes/FN/{imgname}', img)
        else:
            TP += 1
            cv2.rectangle(img, (box[0], box[1]), (box[0]+box[2], box[1]+box[3]), color=(0, 255, 0), thickness=2)
            text = f'license_plate: {score}'
            cv2.putText(img, text, (box[0], box[1]-5), cv2.FONT_HERSHEY_SIMPLEX, 1, color=(0, 255, 0), thickness=2)
            cv2.imwrite(f'drawbboxes/{imgname}', img)
            cv2.imwrite(f'drawbboxes/TP/{imgname}', img)

        # cv2.imshow('img', img)
        # cv2.waitKey(0)
        # cv2.destroyAllWindows()

    print(f'TP: {TP}, FN: {FN}')

bertinma commented 2 years ago

@AlexeyAB The following command ./darknet detector test cfg/obj.data cfg/yolov3-tiny.cfg backup/yolov3-tiny_best.weights -dont_show -ext_output < data/train.txt > result.txt gives me predictions for only one image instead of the 20 images listed in the file valid.txt.

What am I doing wrong?

I'm using yolov3 tiny on Google Colab.

Thanks for your help ;)

renuka-alai1210 commented 2 years ago

./darknet detector test cfg/obj.data cfg/yolov3-tiny.cfg backup/yolov3-tiny_best.weights -dont_show -ext_output < data/train.txt > result.txt
I tried the above-mentioned ways to run darknet on multiple images, but how do I crop all the detections of a specific class_name?

Thanks for your help

tianlanlanlan commented 2 years ago

I tried to execute ./darknet detector test data/train-second/obj.data cfg/yolov4-custom-3.cfg backup/yolov4-custom-3_best.weights -dont_show -ext_output /home/tianlan/Downloads/darknet/data/train-second/test.txt result.txt and got an error: Cannot load image /home/tianlan/Downloads/darknet/data/train-second/test.txt

If you have a test.txt full of the test-set images' full paths, you can try this way; it worked for me: https://github.com/Cartucho/mAP/issues/32#issuecomment-1099291250

kyokeen commented 2 years ago

If you want to save the labels as individual .txt files, you can use the -save_labels flag on the commands above (./darknet detector test ... etc)
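
For what it's worth, those per-image .txt files appear to use the YOLO annotation format (one line per box: class id plus normalized center x, center y, width, height); treat that as an assumption and inspect one file first. A small sketch for converting such a file back to pixel boxes:

    # read_saved_labels.py -- minimal sketch, assuming YOLO-format label files:
    # each line: <class_id> <x_center> <y_center> <width> <height>, all relative to image size
    import cv2

    img = cv2.imread("data/obj/frame_000001.jpg")       # placeholder image
    img_h, img_w = img.shape[:2]

    boxes = []
    with open("data/obj/frame_000001.txt") as f:        # the matching label file
        for line in f:
            if not line.strip():
                continue
            cls, xc, yc, w, h = line.split()
            w_px, h_px = float(w) * img_w, float(h) * img_h
            x1 = int(float(xc) * img_w - w_px / 2)
            y1 = int(float(yc) * img_h - h_px / 2)
            boxes.append((int(cls), x1, y1, int(w_px), int(h_px)))

    print(boxes)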

aishaalowais commented 2 years ago

Hi all, I want to test on a full directory of images, meaning that I want the output to be all the images with bounding boxes drawn on each object in each image (NOT a text file).

The command below is great, but it doesn't seem to work on Google Colab. Any help?? @joanna28-web

for i in test_set/*.jpg; do ./darknet detector test obj.data yolov3.cfg yolov3_10000.weights "$i" -dont_show; mv predictions.jpg "${i%.jpg}"_det.jpg; done <- doesn't work for me on Google Colab

DikshitV commented 2 years ago

You can use this repo: https://github.com/AlexeyAB/darknet To process a list of images data/train.txt and save results of detection to result.txt use: ./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

I tried this and got "Cannot load image "-dont_show"" as an error.

I also had the same error. Try to reinstall darknet one more time and run make file again. It helped me.

Hi, even after reinstalling darknet I am getting "Cannot load image": !./darknet detector test data/obj.data cfg/yolov4-tiny-custom.cfg /mydrive/yolov4-tiny/training/yolov4-tiny-custom_best.weights -dont_show -ext_output < data/train.txt > testing2.txt

Result: Error: Cannot load image data/obj/frame_001369.PNG

But if I run the same on a single image, I get the output: !./darknet detector test data/obj.data cfg/yolov4-tiny-custom.cfg /mydrive/yolov4-tiny/training/yolov4-tiny-custom_best.weights ./data/obj/frame_001369.PNG -thresh 0.3

Result: ./data/obj/frame_001369.PNG: Predicted in 4.960000 milli-seconds. streck: 79%

Same path, but it works for a single image and not for a series of images. Requesting assistance.

stefaniftime commented 2 years ago

(Quoting @aishaalowais's question above about running on a full directory in Google Colab.)

Hello, I don't know if this is of any help to you anymore, but it may be for others working in Colab: I wrote some code that runs detection on all the images in the test set and saves them to your Drive in a detection folder with the appropriate filenames. Good luck!

https://github.com/stefaniftime/yolo/blob/main/detections.py

NpLiquid commented 2 years ago

Hi everyone,

I am using YOLOv4, and to run inference over multiple images I followed this answer from @AlexeyAB:

You can use this repo: https://github.com/AlexeyAB/darknet

To process a list of images data/train.txt and save results of detection to result.txt use:

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

However, I found that the output file contains multiple negative values for the bounding boxes. Below is an example of the output file. When trying to display detections with negative values in OpenCV, the bounding box does not appear. However, when using the command darknet.exe detector test /build/darknet/x64/data/obj.data /cfg/yolo-obj-test_conf2.cfg /build/darknet/x64/backup/yolo-obj_final.weights C:/tests/instances_val/123456.jpg the bounding boxes are displayed.

Can you help me understand where my mistake is?

net.optimized_memory = 0 
mini_batch = 1, batch = 1, time_steps = 1, train = 0 
Create CUDA-stream - 0 
 Create cudnn-handle 0 
nms_kind: greedynms (1), beta = 0.600000 
nms_kind: greedynms (1), beta = 0.600000 
nms_kind: greedynms (1), beta = 0.600000 

 seen 64, trained: 352 K-images (5 Kilo-batches_64) 
Enter Image Path:  Detection layer: 139 - type = 28 
 Detection layer: 150 - type = 28 
 Detection layer: 161 - type = 28 
C:/tests/instances_val/12354.jpg: Predicted in 278.776000 milli-seconds.
Dog: 89%    (left_x: 1717   top_y:  267   width: 3236   height:  841)
Enter Image Path:  Detection layer: 139 - type = 28 
 Detection layer: 150 - type = 28 
 Detection layer: 161 - type = 28 
C:/tests/instances_val/123455.jpg: Predicted in 51.761000 milli-seconds.
Enter Image Path:  Detection layer: 139 - type = 28 
 Detection layer: 150 - type = 28 
 Detection layer: 161 - type = 28 
C:/tests/instances_val/123456.jpg: Predicted in 52.247000 milli-seconds.
Cat: 91%    (left_x: -105   top_y: -139   width: 1347   height: 1149)
Enter Image Path:
HarveyLijh commented 1 year ago

Hi @NpLiquid, I'm encountering the exact same issue. Did you manage to resolve it? If so, would you please share how?
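
For later readers hitting the same thing: the negative left_x/top_y values in the log above just mean the predicted box extends past the image border, so if drawing them in OpenCV misbehaves, clamping the box to the image first is a simple workaround. A minimal sketch, with the path and values taken from the log above:

    # clamp_boxes.py -- minimal sketch: clamp a darknet box to the image before drawing
    import cv2

    def clamp_box(left_x, top_y, width, height, img_w, img_h):
        """Clip a (left_x, top_y, width, height) box to the image rectangle."""
        x1 = max(0, left_x)
        y1 = max(0, top_y)
        x2 = min(img_w - 1, left_x + width)
        y2 = min(img_h - 1, top_y + height)
        return x1, y1, x2, y2

    img = cv2.imread("C:/tests/instances_val/123456.jpg")        # path from the log above
    x1, y1, x2, y2 = clamp_box(-105, -139, 1347, 1149, img.shape[1], img.shape[0])
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imwrite("123456_clamped.jpg", img)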