qqwweee / keras-yolo3

A Keras implementation of YOLOv3 (Tensorflow backend)
MIT License

Getting Negative Bounding Box Values #747

Open robertokcanale opened 3 years ago

robertokcanale commented 3 years ago

Hi, I am training my own Keras YOLOv3 model for inference, but I get negative and weird bounding boxes, if any at all. I am trying to segment a hand-pressure image into PALM, THUMB, and FINGERS (classes 0, 1, and 2 respectively). I followed all the steps for custom training (with a small dataset of 200 images), but I get the following output: Found 1 boxes for img [ -49.221752 -130.95093 -45.08555 188.86354 ] 0.31854975

In other words, I get negative positions for my bounding boxes; why is that?

I attach a photo of my hand, its annotation in .txt, the whole training script, my classes file, and the image I get out of it. Some of the code was taken from https://pylessons.com/YOLOv3-custom-training/.

Attachments: 1, 1.txt, my_classes.txt, train.txt, 155 png_screenshot_12 03 2021
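For reference, keras-yolo3 reads train.txt with one row per image, and each box written as x_min,y_min,x_max,y_max,class_id in absolute pixels, separated by spaces. A row for this dataset would look roughly like the following (the pixel values here are made up purely for illustration):

```
hands/1.png 120,40,180,110,0 60,90,140,200,1 30,10,90,70,2
```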

Anyman552 commented 3 years ago

How do you convert from the darknet format to the keras-yolo3 format (x_min,y_min,x_max,y_max,class_id)? Have you checked that this conversion doesn't mess anything up? I had similar problems at the beginning for exactly that reason.
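For reference, converting a darknet label line (class x_center y_center width height, all normalized to 0–1) into a keras-yolo3 box is just a corner shift plus a rescale to pixels. A minimal sketch, with made-up label values and the 362x247 image size used later in this thread:

```python
# Hypothetical darknet-format label values, purely for illustration
img_w, img_h = 362, 247                          # image size in pixels
cls, xc, yc, w, h = 0, 0.45, 0.50, 0.30, 0.40    # class x_center y_center w h (normalized)

x_min = int((xc - w / 2) * img_w)   # int(0.30 * 362) = 108
y_min = int((yc - h / 2) * img_h)   # int(0.30 * 247) = 74
x_max = int((xc + w / 2) * img_w)   # int(0.60 * 362) = 217
y_max = int((yc + h / 2) * img_h)   # int(0.70 * 247) = 172
print("%d,%d,%d,%d,%d" % (x_min, y_min, x_max, y_max, cls))   # -> 108,74,217,172,0
```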

robertokcanale commented 3 years ago

This is what I get for image 11 in labelImg and after my conversion. I use the following script for converting:

![Screenshot from 2021-03-15 11-07-02](https://user-images.githubusercontent.com/58591956/111137107-b68d0200-857e-11eb-83f2-b5d5d60b22ee.png)

robertokcanale commented 3 years ago

```python
import os
import cv2

# Convert LabelImg/darknet-style annotations (class x_center y_center w h, normalized)
# into keras-yolo3 rows: image_path x_min,y_min,x_max,y_max,class_id ...
# changed for multiple classes
def prep_train_txt():
    width = 362    # image size is hardcoded here
    height = 247
    hands = os.path.join(os.getcwd(), "hands")
    file_object = open('train.txt', 'w')
    for img in os.listdir(hands):
        if img[-3:] == 'txt':
            nameee = img[:-3] + "png"
            img_name = os.path.join("hands", nameee)
            img_name = cv2.imread(img_name)   # image is read but never used; width/height are hardcoded above
            file_object.write("hands/" + nameee + " ")
            print(img)

            # initialize text to write
            text = []
            with open(os.path.join(hands, img)) as f:
                lines = f.readlines()
                for i in range(len(lines)):
                    line = lines[i].split(" ")
                    # Getting the x and y position of the bounding boxes, assuming these
                    # computations are correct (I'm not so sure): shift from the box center
                    # to the top-left corner, then scale to pixels
                    x1, y1, w, h = float(line[1]), float(line[2]), float(line[3]), float(line[4][:-1])
                    x1, y1, w, h = x1 - w / 2, y1 - h / 2, w, h
                    x1, y1, w, h = int(x1 * width), int(y1 * height), int(w * width), int(h * height)
                    text.append(str(x1) + "," + str(y1) + "," + str(x1 + w) + "," + str(y1 + h) + "," + str(int(line[0])) + " ")
                    file_object.write(text[i])

            file_object.write("\n")

    file_object.close()

prep_train_txt()
```
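One way to catch a bad conversion before training is to sanity-check the generated train.txt against the actual image sizes. A small debugging sketch along those lines (not part of keras-yolo3; it assumes the relative paths written by the script above resolve from the current directory):

```python
import cv2

# Flag any box that is negative, inverted, or falls outside its image.
with open("train.txt") as f:
    for row in f:
        parts = row.split()
        if not parts:
            continue
        image = cv2.imread(parts[0])
        if image is None:
            print("missing image:", parts[0])
            continue
        img_h, img_w = image.shape[:2]
        for box in parts[1:]:
            x_min, y_min, x_max, y_max, cls = map(int, box.split(","))
            if not (0 <= x_min < x_max <= img_w and 0 <= y_min < y_max <= img_h):
                print("suspicious box", box, "in", parts[0])
```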

robertokcanale commented 3 years ago

I also got my bounding box code from https://github.com/pythonlessons/YOLOv3-object-detection-tutorial/tree/master/YOLOv3-custom-training

Anyman552 commented 3 years ago

The coordinates look good. Do you provide the right path for annotation_path (https://github.com/qqwweee/keras-yolo3/blob/e6598d13c703029b2686bc2eb8d5c09badf42992/train.py#L17)?
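For reference, those settings sit near the top of train.py; for a custom run they should point at the generated train.txt and the custom classes file, roughly like this (the my_classes.txt location is an assumption based on the attachments above):

```python
annotation_path = 'train.txt'                 # the converted annotation file
classes_path = 'model_data/my_classes.txt'    # PALM, THUMB, FINGERS (assumed location)
anchors_path = 'model_data/yolo_anchors.txt'  # default anchors shipped with the repo
```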

Have you edited yolov3.cfg to fit your needs? See: https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
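Per the linked AlexeyAB instructions, the cfg change for three classes amounts to setting classes=3 in each of the three [yolo] layers and filters=(classes+5)*3=24 in the [convolutional] layer directly before each of them. A trimmed sketch of one such pair (values assumed for the 3-class setup in this thread; other cfg options are left at their defaults):

```
[convolutional]
size=1
stride=1
pad=1
# filters = (classes + 5) * 3 = (3 + 5) * 3
filters=24
activation=linear

[yolo]
mask = 6,7,8
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
# 3 custom classes: PALM, THUMB, FINGERS
classes=3
num=9
```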

In any case, the code is working for me. I use a bigger dataset, but I don't think that should matter.

Wsine commented 2 years ago

@robertokcanale I ran into this problem today. Have you figured out where the problem is?

robertokcanale commented 2 years ago

Hi, no, I just switched to YOLOv5. I highly suggest switching.
