Open tailongnguyen opened 6 years ago
I got the same problem as you. If you change the penalty factors, your predictions can improve:
I changed this:
"no_object_scale": 1.0, # determines how much to penalize wrong confidence predictions of non-object predictors
to this:
"no_object_scale": 3.0, # determines how much to penalize wrong confidence predictions of non-object predictors
in order to cut down the false positives. My model also improved with wider layers (more filters per layer).
I also got a little more out of training on grayscale while keeping the same aspect ratio (width to height) as the original image shape. You can try it too; you can check my fork here.
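The grayscale plus fixed-aspect-ratio idea can be sketched as a letterbox resize: scale the image so it fits the network input without distortion, then pad the leftover area. This is only an illustrative, dependency-free sketch (nearest-neighbour resampling with NumPy); the fork mentioned above may implement it differently.

```python
import numpy as np

def letterbox_gray(img, net_h=416, net_w=416, pad_value=127):
    """Resize a grayscale image to (net_h, net_w) while preserving the
    original width/height proportion, padding the remainder with a
    constant gray value. Nearest-neighbour keeps this NumPy-only."""
    h, w = img.shape[:2]
    scale = min(net_w / w, net_h / h)
    new_w = max(1, int(w * scale))
    new_h = max(1, int(h * scale))
    # nearest-neighbour index maps back into the source image
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows[:, None], cols[None, :]]
    # center the resized image on a constant-value canvas
    canvas = np.full((net_h, net_w), pad_value, dtype=img.dtype)
    top = (net_h - new_h) // 2
    left = (net_w - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas
```

Because the aspect ratio is preserved, objects keep their true proportions, which usually helps the box regressors compared with a naive stretch to the network size.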
@rodrigo2019 thanks for replying to me. What do you think about my side question?
Hi, thanks for your great work. I'm using this to train on my own dataset, just one class for now. My dataset contains more than 3000 images with about 10000 objects. However, I see that the false positive rate is pretty high and want to reduce it. One possible solution I came up with is to add more negative data, including images containing no objects (a code modification is needed to make the model learn on these images). What do you think about it? Does it really help? Side question: if I intentionally do not label some objects in some of the images being trained on, will the model be hurt?
@tailongnguyen I think it's a good way. Does it work?
@jzx-gooner I added more negative data (sampled from the set on which the model performed poorly and gave high false positive rate) and it did help reducing the false positive rate.
@tailongnguyen cool! I will try this, thank you very much!
I think it won't work, but I'm not sure. In this line the object is included only if its label is in the label list set in the JSON file, and in this line the image is only included if it has any valid label on it.
@tailongnguyen As @rodrigo2019 said, I could not find a useful way to use the negative data. What did you do to feed in the negative dataset? Thank you!
@jzx-gooner You can feed the negative data by changing these lines: https://github.com/experiencor/keras-yolo2/blob/4e8c85ce02435f136d4f4cfe930b4ccb759fbaf8/preprocessing.py#L55-L57

keras-yolo2/preprocessing.py, lines 55 to 57 in 4e8c85c:

if len(img['object']) > 0:
    all_imgs += [img]

The simplest way is to replace 0 with -1.
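The effect of that `> 0` to `> -1` change can be shown with stand-in dicts shaped like the ones `parse_annotation` builds (the filenames here are hypothetical):

```python
# Minimal demo of the suggested filter change in preprocessing.py.
images = [
    {'filename': 'car.jpg',   'object': [{'name': 'car'}]},
    {'filename': 'empty.jpg', 'object': []},  # background-only image
]

# original filter: drops every image without a labelled object
with_objects = [img for img in images if len(img['object']) > 0]

# "-1" variant: len() is never negative, so the check is always true
# and background-only images survive as negative training samples
all_images = [img for img in images if len(img['object']) > -1]
```

With the original check, `empty.jpg` never reaches the training set; with the `-1` variant it does, and its empty object list contributes only to the no-object confidence loss.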
Thank you for your reply.
I got this error when I made the change:
Traceback (most recent call last):
  File "train.py", line 100, in <module>
    _main_(args)
  File "train.py", line 96, in _main_
    debug = config['train']['debug'])
  File "/home/jzx/keras-yolo2/frontend.py", line 341, in train
    average_precisions = self.evaluate(valid_generator)
  File "/home/jzx/keras-yolo2/frontend.py", line 400, in evaluate
    all_annotations[i][label] = annotations[annotations[:, 4] == label, :4].copy()
IndexError: index 4 is out of bounds for axis 1 with size 0
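The traceback points at indexing column 4 of an annotation array that has no columns: when an image has no objects, the annotations can end up as an array of shape `(0,)` instead of `(0, 5)`, so `annotations[:, 4]` fails. A hedged sketch of a workaround (the helper names are my own, not part of the repo) is to normalise empty arrays to an explicit `(0, 5)` shape before indexing:

```python
import numpy as np

def as_annotation_array(raw):
    """Coerce raw annotations to a float array; give empty inputs an
    explicit (0, 5) shape: x1, y1, x2, y2, label."""
    arr = np.asarray(raw, dtype=float)
    if arr.size == 0:
        return np.zeros((0, 5))
    return arr

def boxes_for_label(raw_annotations, label):
    """Select the (x1, y1, x2, y2) boxes for one class label, safely
    handling images that have no annotated objects at all."""
    annotations = as_annotation_array(raw_annotations)
    return annotations[annotations[:, 4] == label, :4].copy()
```

With this normalisation, negative images simply contribute zero ground-truth boxes to the evaluation instead of crashing it.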