experiencor / keras-yolo2

Easy training on custom datasets. Various backends (MobileNet and SqueezeNet) supported. A YOLO demo that detects raccoons, running entirely in the browser, is accessible at https://git.io/vF7vI (not on Windows).
MIT License

Anchor box #364

Open SteveIb opened 5 years ago

SteveIb commented 5 years ago

Hi, thanks for providing such a helpful project. I have images of 2 classes. I extracted the images of the objects, then I created the XML annotations. The images are 68*68 and the object covers around 2/3 of the image; each image contains one object only. I tried to generate the anchor boxes as written in the README (`python gen_anchors.py -c config.json`) and got the following:

    iteration 1: dists = 29477.4793388
    iteration 2: dists = 224918.064881
    average IOU for 5 anchors: 0.81
    anchors: [0.00,0.00, 0.00,0.00, 0.00,0.00, 0.00,0.00, 1.27,1.27]
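For what it's worth, this is exactly what naive k-means does when every training box has (almost) the same size. A toy sketch of the collapse, using Euclidean distance and mean updates for brevity (gen_anchors.py uses a 1 - IoU distance; this is my simplification, not the repo's code):

```python
import numpy as np

# Toy reproduction: every annotation has the same size (one object covering
# ~2/3 of a 68x68 image), so each (w, h) sample is essentially the same point.
boxes = np.full((100, 2), 1.27)

k = 5
centroids = boxes[:k].copy()  # the seeds are all identical too
for _ in range(10):
    # Assignment: every distance ties, so argmin sends all boxes to cluster 0.
    dists = np.linalg.norm(boxes[:, None] - centroids[None], axis=2)
    assign = dists.argmin(axis=1)
    # Update: clusters 1..4 have no members; a sum/count-style update
    # drives their centroids to zero.
    for j in range(k):
        members = boxes[assign == j]
        centroids[j] = members.mean(axis=0) if len(members) else 0.0

print(np.round(centroids, 2))
# one centroid stays near [1.27, 1.27]; the other four collapse to [0., 0.]
```

That matches the reported output: one usable 1.27,1.27 anchor and four 0.00,0.00 anchors.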

When I run the training, the loss is `nan`.
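I can't point at the exact line that NaNs first, but degenerate 0x0 anchors are a plausible trigger: a width/height IoU between two zero-area boxes is 0/0, and a single `nan` poisons the whole loss. A toy sketch (not the repo's exact loss code):

```python
import numpy as np

def iou_wh(w1, h1, w2, h2):
    """IoU of two boxes compared by width/height only, as YOLO-style
    anchor matching does (boxes assumed to share a center)."""
    w1, h1, w2, h2 = map(np.float64, (w1, h1, w2, h2))
    inter = np.minimum(w1, w2) * np.minimum(h1, h2)
    union = w1 * h1 + w2 * h2 - inter
    return inter / union

print(iou_wh(1.27, 1.27, 1.27, 1.27))  # 1.0: the only usable anchor
print(iou_wh(1.27, 1.27, 0.0, 0.0))    # 0.0: zero anchors never match
print(iou_wh(0.0, 0.0, 0.0, 0.0))      # nan: 0/0 between two zero anchors
```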

My config file is:

    {
        "model" : {
            "backend":              "Full Yolo",
            "input_size":           70,
            "anchors":              [0.00,0.00, 0.00,0.00, 0.00,0.00, 0.00,0.00, 1.27,1.27],
            "max_box_per_image":    10,
            "labels":               ["positive", "negative"]
        },

        "train": {
            "train_image_folder":   "/home/*****",
            "train_annot_folder":   "/home/*****",

            "train_times":          8,
            "pretrained_weights":   "",
            "batch_size":           16,
            "learning_rate":        1e-4,
            "nb_epochs":            1,
            "warmup_epochs":        3,

            "object_scale":         5.0,
            "no_object_scale":      1.0,
            "coord_scale":          1.0,
            "class_scale":          1.0,

            "saved_weights_name":   "new70.h5",
            "debug":                false
        },

        "valid": {
            "valid_image_folder":   "",
            "valid_annot_folder":   "",

            "valid_times":          1
        }
    }

When I run training with the default settings (like the raccoon dataset and its anchors):

    "model" : {
        "architecture":         "Full Yolo",    # "Tiny Yolo" or "Full Yolo" or "MobileNet" or "SqueezeNet" or "Inception3"
        "input_size":           416,
        "anchors":              [0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
        "max_box_per_image":    10,
        "labels":               ["positive", "negative"]
    },

the training loss starts at 30 and goes down to 0.06774:

    Epoch 00004: val_loss improved from 10.03461 to 0.06774, saving model to new68.h5
    24080/24080 [==============================] - 3454s 143ms/step - loss: 0.1259 - val_loss: 0.0677
    /home/**/Yolo/keras-yolo2/utils.py:198: RuntimeWarning: overflow encountered in exp
      return 1. / (1. + np.exp(-x))
    (u'positive', '0.9393')
    (u'negative', '0.9911')
    mAP: 0.9652
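As an aside, the RuntimeWarning in that log comes from `np.exp(-x)` overflowing for large negative logits. It is usually harmless (the result saturates to 0), but it can be avoided with a numerically stable sigmoid. A sketch, not the repo's code:

```python
import numpy as np

def stable_sigmoid(x):
    """Sigmoid that never overflows: keeps the exponent non-positive
    on both branches. Expects array-like input."""
    x = np.asarray(x, dtype=np.float64)
    out = np.empty_like(x)
    pos = x >= 0
    # For x >= 0, exp(-x) <= 1, so the textbook form is safe.
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    # For x < 0, rewrite as exp(x) / (1 + exp(x)); here exp(x) <= 1.
    ex = np.exp(x[~pos])
    out[~pos] = ex / (1.0 + ex)
    return out
```

This computes the same finite values as `1. / (1. + np.exp(-x))`, just without triggering the overflow warning.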

But when I then try an image of, say, 1000*1000 size, it gives me wrong results: it detects just two objects while the image contains a lot of objects.

Any hint?

rodrigo2019 commented 5 years ago

Try generating the anchors using this file; you should get more accurate anchors, and you will not get anchors with 0.00 values.

SteveIb commented 5 years ago

I did use this file, as written in the README.

SteveIb commented 5 years ago

Sorry, I closed it by mistake.

rodrigo2019 commented 5 years ago

No, this file is from my fork and it is a bit different

On Sep 23, 2018, 2:00 PM, SteveIb notifications@github.com wrote:

I did using this file as written in readme file


SteveIb commented 5 years ago

I ran it using your code:

    totalMemory: 11.91GiB freeMemory: 11.59GiB
    2018-09-24 09:52:32.242241: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:01:00.0, compute capability: 6.1)
    iteration 1: dists = 29477.4793388
    iteration 2: dists = 224918.064881
    average IOU for 5 anchors: 0.81
    anchors: [0.00000,0.00000, 0.00000,0.00000, 0.00000,0.00000, 0.00000,0.00000, 21.55931,21.55931]

SteveIb commented 5 years ago

My config file:

    {
        "model" : {
            "backend":              "Full Yolo",
            "input_size_w":         68,
            "input_size_h":         68,
            "gray_mode":            false,
            "anchors":              [0.8223, 0.8223, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
            "max_box_per_image":    10,
            "labels":               ["positive", "negative"]
        },

        "parser_annotation_type": "xml",

        "train": {
            "train_image_folder":   "****/ALL/",
            "train_annot_folder":   "****/annotations/",

            "train_times":          8,
            "pretrained_weights":   "",
            "batch_size":           4,
            "learning_rate":        1e-4,
            "nb_epochs":            1,
            "warmup_epochs":        3,

            "object_scale":         5.0,
            "no_object_scale":      1.0,
            "coord_scale":          1.0,
            "class_scale":          1.0,

            "saved_weights_name":   "new200.h5",
            "debug":                false
        },

        "valid": {
            "valid_image_folder":   "",
            "valid_annot_folder":   "",

            "valid_times":          1
        }
    }

robertlugg commented 5 years ago

Try modifying your anchors to include only those last two numbers (i.e. remove all the zeros).
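For example, a sketch of the `anchors` line with only the nonzero pair kept (the 1.27,1.27 values come from the gen_anchors output earlier in this thread; whether a single anchor works depends on the repo handling a non-default anchor count):

```json
"anchors": [1.27, 1.27],
```

Note that each anchor is a width/height pair, so the list length must stay even.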

tanakataiki commented 5 years ago

@rodrigo2019 I tried to use your repo, but the anchor sizes are much bigger than what darknet YOLO uses. For example, with input w and h = 320 on VOC 2007+2012, changing

    grid_w = config['model']['input_size_w'] / (feature extractor shape)
    anchors: [2.53669,4.32856, 6.06793,11.47580, 10.58923,22.02669, 18.80631,12.86006, 24.54766,26.52503]

to

    grid_w = config['model']['input_size_w'] / 32
    anchors: [0.79098,1.34649, 1.88846,3.57693, 3.30405,6.87123, 5.87341,4.01524, 7.66691,8.28823]

makes the anchor sizes relatively acceptable.

Is there any reason for the anchors to be that big?
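For context (hedged, since I haven't traced the fork's code): YOLOv2-style anchors are expressed in grid-cell units, so the denominator used for `grid_w` directly rescales every anchor by the same constant factor. A sketch of the conversion, where the 160 px box width is a made-up example:

```python
# YOLOv2-style anchors are measured in grid cells:
#   anchor = box_size_px / stride, with stride = input_size / grid_size.
input_size = 320
stride = 32                   # darknet YOLOv2 downsamples by 32
grid = input_size // stride   # -> a 10x10 grid

box_w_px = 160                # hypothetical ground-truth box width
anchor_w = box_w_px / stride  # -> 5.0 grid cells

# Dividing by anything other than the true stride multiplies every anchor
# by the same constant, which matches the uniform blow-up seen above.
print(grid, anchor_w)
```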

rodrigo2019 commented 5 years ago

@tanakataiki I will take a look in the code.