pjreddie / darknet

Convolutional Neural Networks
http://pjreddie.com/darknet/

Bounding box coordinates #183

Open MansourTrabelsi opened 7 years ago

MansourTrabelsi commented 7 years ago

After downloading YOLO and running it by typing

./darknet detect cfg/yolo.cfg yolo.weights data/dog.jpg

I get a new picture which contains a bounding box.

So how can I know the coordinates of that bounding box?

stanfordism commented 7 years ago

https://stackoverflow.com/questions/44544471/how-to-get-the-coordinates-of-the-bounding-box-in-yolo-object-detection/44592380#44592380

DoctorVoid commented 7 years ago

@stanfordism Hey, the link you provided is broken for me. Does it refer to a deleted comment on this page?

MansourTrabelsi commented 7 years ago

    if(left < 0) left = 0;
    if(right > im.w-1) right = im.w-1;
    if(top < 0) top = 0;
    if(bot > im.h-1) bot = im.h-1;
    printf("Bounding Box: Left=%d, Top=%d, Right=%d, Bottom=%d\n", left, top, right, bot);

Add the printf line at that spot in image.c (around line 232), right after those if statements, then build darknet again with make.
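For context, a sketch of where that lands inside draw_detections() in src/image.c of pjreddie/darknet (exact line numbers drift between revisions, so search for the clamping if statements rather than counting to line 232); only the printf is new, everything else is already there:

    box b = dets[i].bbox;
    int left  = (b.x - b.w/2.) * im.w;
    int right = (b.x + b.w/2.) * im.w;
    int top   = (b.y - b.h/2.) * im.h;
    int bot   = (b.y + b.h/2.) * im.h;

    if(left < 0) left = 0;
    if(right > im.w-1) right = im.w-1;
    if(top < 0) top = 0;
    if(bot > im.h-1) bot = im.h-1;

    /* added line: print the pixel coordinates of each detection */
    printf("Bounding Box: Left=%d, Top=%d, Right=%d, Bottom=%d\n", left, top, right, bot);

    draw_box_width(im, left, top, right, bot, width, red, green, blue);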

kilometer0101 commented 6 years ago

I would like to export the bounding box info to a txt file. When I use the fprintf function, how do I define the output file name? Without it, I now get a "Segmentation fault (core dumped)" error message.

TheMikeyR commented 6 years ago

@Kilometer0101 https://stackoverflow.com/questions/11573974/write-to-txt-file
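Expanding on that for darknet specifically: the segfault usually means fprintf was given a FILE pointer that was never opened. A minimal hedged sketch that could sit next to the coordinate clamping in draw_detections() in src/image.c ("bbox.txt" is just an example name; image.c already includes stdio.h):

    /* open the file in append mode, write one line per box, close it again */
    FILE *f = fopen("bbox.txt", "a");
    if(f){
        fprintf(f, "Left=%d, Top=%d, Right=%d, Bottom=%d\n", left, top, right, bot);
        fclose(f);
    }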

praz2202 commented 6 years ago

@MansourTrabelsi How can I crop the image using these bounding box coordinates? Someone proposed getting the ROI and cropping it. Can you please help me out with this?

saurabhhssaurabh commented 6 years ago

@MansourTrabelsi Can you tell me how you got the new picture containing bounding boxes? I ran the same command but did not get any output picture.

Isha8 commented 6 years ago

How to crop and save the resulting bounding boxes as separate image files from a video feed? Thanks

AlexeyAB commented 6 years ago

@Isha8

Use this repository: https://github.com/AlexeyAB/darknet And do this: https://github.com/AlexeyAB/darknet/issues/954#issuecomment-393609188

Isha8 commented 6 years ago

@AlexeyAB Already using that repository. It works. Thank you :)

pyradd commented 6 years ago

@MansourTrabelsi Thank you for your help. I would like to write the bounding box coordinates along with the detected object name and the confidence value to a text file and output it along with predictions.png. Can anyone help me with this?

PeterQuinn925 commented 6 years ago

Have you looked at AlexeyAB's readme:

To process a list of images data/train.txt and save results of detection to result.txt use: darknet.exe detector test data/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

If this isn't what you're after, you can write a python program to call darknet, get the output, and do whatever else you want with it.
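If you would rather stay in C than Python, a rough sketch of the same idea using popen to run darknet and scan its stdout. The command and the line format here are assumptions: the format string matches the AlexeyAB -ext_output style quoted later in this thread (e.g. "dog: 92% (left_x: 26 top_y: 829 width: 552 height: 93)"), so adjust both to whatever your build actually prints:

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical command; point it at your own cfg/weights/image */
        FILE *p = popen("./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg -ext_output", "r");
        if(!p) return 1;

        char line[1024], name[256];
        int prob, left, top, w, h;
        while(fgets(line, sizeof(line), p)){
            /* keep only lines that look like "<name>: <prob>% (left_x: .. top_y: .. width: .. height: ..)" */
            if(sscanf(line, "%255[^:]: %d%% (left_x: %d top_y: %d width: %d height: %d)",
                      name, &prob, &left, &top, &w, &h) == 6){
                printf("%s,%d,%d,%d,%d,%d\n", name, prob, left, top, w, h);
            }
        }
        pclose(p);
        return 0;
    }

From there you can write the fields to a file, move the source image, or whatever else you need.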

pyradd commented 6 years ago

I am currently working with YOLO9000. I will try that, but I don't think it will work.

thriskel commented 6 years ago

@PeterQuinn925 did you find a way to write the object's name to a text file?

PeterQuinn925 commented 6 years ago

@thriskel I'm reading the output and sorting photos based on the results, not writing the text to a file, but you could repurpose my code to do it. https://github.com/PeterQuinn925/Squirrel/blob/master/sort_photos.py

Google for 'python write to a text file' and you'll find a lot of good sources like this one: https://www.pythonforbeginners.com/files/reading-and-writing-files-in-python

thriskel commented 6 years ago

@PeterQuinn925 I edited the image.c file so it writes the bounding box coords to a txt file, but I can't find a way to do the same for the object's name. It seems you have a cleaner way to do it in your code.
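In pjreddie's draw_detections() the class index and the names array are both in scope, so the name and confidence can be written next to the coordinates. A hedged sketch, again with an arbitrary bbox.txt, placed inside the if(class >= 0) block after left/top/right/bot are clamped:

    FILE *f = fopen("bbox.txt", "a");
    if(f){
        fprintf(f, "%s %.2f %d %d %d %d\n",
                names[class],           /* detected object's name */
                dets[i].prob[class],    /* confidence, 0..1       */
                left, top, right, bot); /* pixel coordinates      */
        fclose(f);
    }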

prdas31 commented 6 years ago

Hi,

I've trained my own objects (9 classes, 608x608) for 3k iterations using the tiny-yolov3 model, with an applicable set of anchors:

anchors = 62.7566,107.5366, 103.4983,170.8904, 151.9631,228.4383, 196.9127,327.1863, 266.8911,253.1499, 349.3929,351.8809

Now if I run the following command with threshold 0.25, it doesn't detect anything:

darknet.exe detector demo data/obj.data yolov3-tiny-obj.cfg backup/yolov3-tiny-obj_3000.weights -i 0 -thresh 0.25 -ext_output VID-20180822-WA0002.mp4

But if I change the threshold to 0.005 or 0.001, then it shows a lot of quickly passing boxes, but the bounding boxes are not bound to the target objects.

What could be the reason for this? I had around 150 images per class for the 9 classes while training, and after 3k iterations the avg loss came down to 0.57.

Kindly advise.

Thanks.

nwthilina commented 5 years ago

How to crop and save the resulting bounding boxes as separate image files from an image feed? Thanks

nwthilina commented 5 years ago

How to crop and save the resulting bounding boxes as separate image files from an image feed? Thanks

Found the solution: add these lines after line 448:

    image c1 = crop_image(im, left, top, right-left, bot-top);
    save_image(c1, "99");
daylanc commented 5 years ago

@thriskel How did you edit the image.c file to output the bounding box coordinates? Did you also figure out a way to output the object's name?

thriskel commented 5 years ago

@daylanc Yes, I'll post the file when I can, but if I remember correctly, my solution for the object name was reading the output of the darknet command and filtering it with conditionals.

devpranoy commented 5 years ago

If anyone wants to extract all detected objects as images, modify the image.c file and add

    char str[10];
    image c1 = crop_image(im, left, top, right-left, bot-top);
    sprintf(str, "%d", i);
    save_image(c1, str);

before the draw_box_width() call. If you get unwanted bounding boxes in the extracted images, comment out the draw_box_width() call.
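A slightly expanded sketch of the same idea, assuming you are inside pjreddie's draw_detections() where i, im, names, class and the clamped left/top/right/bot already exist; it names each crop after the detection index and class, and frees the temporary image so long runs don't leak memory:

    char str[256];
    image c1 = crop_image(im, left, top, right - left, bot - top);
    snprintf(str, sizeof(str), "crop_%d_%s", i, names[class]);   /* e.g. crop_0_dog */
    save_image(c1, str);   /* darknet appends the image extension itself */
    free_image(c1);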

Montasir007 commented 5 years ago

@devpranoy and @nwthilina those modifications don't help me extract the objects from an image. Can you please help me out with further details? Thanks.

devpranoy commented 5 years ago

@Montasir007 Go to the image.c file in the src folder, go to line 239 where the function draw_detections is defined, and scroll down until you see a draw_box_width call. Paste my code sample above that call, then comment out the draw_box_width call entirely. Save image.c, open a terminal, run make again, and then try running darknet; the detected objects will be extracted as separate image files.

Montasir007 commented 5 years ago

@devpranoy I have done everything you said, but the detected objects aren't extracted and saved as separate image files. Can you please help me out? Btw, where will those images be saved?

devpranoy commented 5 years ago

@Montasir007 the images will appear in the darknet folder itself, just like predictions.jpg. Make sure that you're using the darknet repo at https://github.com/pjreddie/darknet and not one of its forks. If nothing works, copy my image.c file into the src folder and run make again.

Montasir007 commented 5 years ago

@devpranoy now it works. Thanks for the help man.

navan0 commented 5 years ago

@devpranoy Hey man, I'm getting predictions.png with bounding boxes that don't have labels. I've tried the same thing in a Jupyter notebook and it executes without errors.

sakshipandita02 commented 5 years ago

@Isha8

Use this repository: https://github.com/AlexeyAB/darknet And do this: AlexeyAB#954 (comment)

@AlexeyAB Already using that repository. It works. Thank you :)

Can you please help me with the code?

ABHINAV20 commented 5 years ago

@AlexeyAB How can I save the bounding box as a separate image? I am not detecting on a video, just on a single image.

mehio commented 5 years ago

Hey guys, I tried changing image.c and then ran make on darknet again, but it's not outputting the coordinates. It's only outputting the seconds it took to detect, the accuracy, and the image with a bounding box. Please help.

htngo23 commented 4 years ago

@mehio I am having the same issue, did you ever get this to work?

mehio commented 4 years ago

@htngo23 I was using the AlexeyAB version, not pjreddie's. That's why it didn't work. If you are using AlexeyAB's version then you do not need to modify any source files. As for the pjreddie version, I'm not sure. I'm also not sure if there are any differences between the two versions, but that's how it worked out for me.

htngo23 commented 4 years ago

@mehio spot on. It took me a bit to figure out the syntax to achieve what I wanted but here it is:

!./darknet detect cfg/yolov3.cfg yolov3.weights -ext_output pathToYourImage.image.jpg result.txt

pandykad commented 4 years ago

@htngo23 I was using the AlexeyAB version, not pjreddie's. That's why it didn't work. If you are using AlexeyAB's version then you do not need to modify any source files. As for the pjreddie version, I'm not sure. I'm also not sure if there are any differences between the two versions, but that's how it worked out for me.

@mehio Can you please tell me how you got it to work, i.e. how you output the coordinates with the AlexeyAB version? Thanks for the help!

hbiserinska commented 4 years ago

@htngo23 @pandykad I hope you can help me out with a similar issue. I successfully trained my model using the AlexeyAB repository and pre-trained weights from the pjreddie website (darknet53.conv.74). Now I am trying to apply the model on test data and get the results of the bbox and the class. I am using:

!./darknet detector test cfg/obj.data cfg/yolo-obj.cfg yolo-obj_final.weights -dont_show -ext_output <data/test.txt> result.txt

It gives me a result.txt file that contains only the following text for each image (Enter Image Path: data/obj/image.jpg : Predicted in 37.225000 milli-seconds.) and not the desired output. Do you have any idea what I can try to solve this?

balajib363 commented 4 years ago

@hbiserinska Try executing the command below to check inference on images with the trained model:

./darknet detector test data/obj.data cfg/yolov4-custom-obj.cfg backup/yolov4-custom-obj_best.weights

larrywal-express commented 4 years ago

Can someone help with an image.c that has been edited to draw a circular box instead of a rectangular box?

dibyasom commented 4 years ago

I edited the image.c file and compiled darknet again, but the changes didn't show up. I was actually trying to print the bounding-box length and width along with the class name of the object. Thank you in advance; I'm very new to object detection.
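For the width/height part, a minimal hedged addition inside the same if(class >= 0) block of draw_detections(), after the clamping if statements:

    /* class name plus box width and height in pixels */
    printf("%s: width=%d height=%d\n", names[class], right - left, bot - top);

If an edit to image.c doesn't show up at all, it is usually a build issue: run make clean and make again, and double-check that the ./darknet binary you run is the one you just rebuilt.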

cyrus303 commented 4 years ago

[Quoting prdas31's earlier comment in full: the tiny-yolov3 model trained for 3k iterations detects nothing at -thresh 0.25 but shows many stray boxes at 0.005/0.001.]

I had the same problem and my annotations file was the culprit. Are you still facing this issue?

ShimJuan commented 4 years ago

Hi, I'm studying YOLOv3 in darknet and I want to change the detection box color, e.g. assign each class its own color. Can you teach me how? Thanks :)
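There's nothing built in for that as far as I know, but as a sketch: in pjreddie's draw_detections() the box color comes from get_color() seeded by the class index, so one hedged option is to replace those get_color() lines with your own table (components are in the 0..1 range darknet uses for image data; the three-entry table below is only an illustration):

    static const float class_colors[][3] = {
        {1.0f, 0.0f, 0.0f},   /* class 0: red   */
        {0.0f, 1.0f, 0.0f},   /* class 1: green */
        {0.0f, 0.0f, 1.0f},   /* class 2: blue  */
    };
    float red   = class_colors[class % 3][0];   /* % 3 just wraps the example table */
    float green = class_colors[class % 3][1];
    float blue  = class_colors[class % 3][2];

The same red/green/blue values also feed the label drawing further down, so the label color follows automatically.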

kishore-jd commented 3 years ago

After downloading YOLO and running it by typing

./darknet detect cfg/yolo.cfg yolo.weights data/dog.jpg

I get a new picture which contains a bounding box.

So how can I know the coordinates of that bounding box?

I also have the same problem. Can anyone help me, please?

kishore-jd commented 3 years ago

@stanfordism Hey, the link you provided is broken for me. Does it refer to a deleted comment on this page?

yes

kishore-jd commented 3 years ago

@MansourTrabelsi Thank you for your help. I would like to write the bounding box coordinates along with the detected object name and the confidence value to a text file and output it along with predictions.png. Can anyone help me in this matter?

Bro, I want the same.

dheeraj755148 commented 3 years ago

[Quoting MansourTrabelsi's suggestion above to add a printf of Left/Top/Right/Bottom around line 232 of image.c and rebuild darknet.]

This is not working. Can you guide me?

akoustic commented 3 years ago

@mehio spot on. It took me a bit to figure out the syntax to achieve what I wanted but here it is:

!./darknet detect cfg/yolov3.cfg yolov3.weights -ext_output pathToYourImage.image.jpg result.txt

There is a small change in the command:

!./darknet detect cfg/custom-yolov4-detector.cfg backup/custom-yolov4-detector_best.weights {img_path} -ext_output > resultbbox.txt

where predictions.jpg saves the image and resultbbox.txt (created in the darknet folder) saves the time taken to infer, the coordinates, and the confidence, like this:

tax: 92% (left_x: 26 top_y: 829 width: 552 height: 93)

This works on any darknet implementation of YOLOv4.

AlexVizgard commented 1 year ago

We're trying to train a YOLOv7 model but we get the error below. I think many of our images have a large bounding box that partially goes out of frame. Q1: Is the whole image ignored when this happens, or just the large bounding box? Q2: Is there a way to correct/force the bounding box back into the bounds of the image? I'd imagine our inference performance will be pretty bad for large objects if these annotations are just removed.

val: WARNING: Ignoring corrupted image and/or label /home/images/valid/N127.jpg: non-normalized or out of bounds coordinate labels
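The label format here is one "class x_center y_center width height" line per object, all normalized to 0..1, so one way to keep those annotations instead of losing them is to clamp them back into range before training. A hedged, standalone sketch (file names are placeholders, and whether YOLOv7 skips the whole image or only the offending box is something worth confirming in its dataloader):

    /* clamp_labels.c - read one YOLO-format label file, clamp each box to the
       image bounds, print the corrected lines; redirect stdout to save them. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if(argc < 2){
            fprintf(stderr, "usage: %s label.txt > fixed.txt\n", argv[0]);
            return 1;
        }
        FILE *in = fopen(argv[1], "r");
        if(!in){ perror("fopen"); return 1; }

        int class_id;
        float x, y, w, h;
        while(fscanf(in, "%d %f %f %f %f", &class_id, &x, &y, &w, &h) == 5){
            /* convert center/size to edges, clamp to [0,1], convert back */
            float left = x - w/2, right = x + w/2, top = y - h/2, bot = y + h/2;
            if(left  < 0) left  = 0;
            if(top   < 0) top   = 0;
            if(right > 1) right = 1;
            if(bot   > 1) bot   = 1;
            if(right <= left || bot <= top) continue;   /* box entirely outside: drop it */
            printf("%d %.6f %.6f %.6f %.6f\n", class_id,
                   (left + right) / 2, (top + bot) / 2, right - left, bot - top);
        }
        fclose(in);
        return 0;
    }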