balancap / SSD-Tensorflow

Single Shot MultiBox Detector in TensorFlow

The model is trained normally, but when tested in the notebook, there are no results. #303

Open Gloria1217 opened 5 years ago

Gloria1217 commented 5 years ago

The model is trained on my own dataset, which has two categories. The loss is about 2. But when the model is tested in the notebook, there are no errors and no results: no detection boxes appear on the test image. Has anyone run into a similar situation and eventually solved it?

speculaas commented 5 years ago

Hi Gloria, you can try lowering the confidence threshold.

For example, the default is 0.5 (you can see it in the function definition):

    rclasses, rscores, rbboxes = process_image(img)

I changed it to 0.2:

    rclasses, rscores, rbboxes = process_image(img, select_threshold=0.2)

and my custom-trained model's detections became visible.
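
If you still get nothing, it helps to check whether the network produces any scores at all. A minimal debugging sketch, assuming the process_image helper and the visualization module from the repo's ssd_notebook.ipynb are in scope (the 0.1 threshold is just an illustrative value):

    # Drop the threshold very low and inspect the raw detections.
    rclasses, rscores, rbboxes = process_image(img, select_threshold=0.1)

    # If rscores is empty even now, the network is not predicting your
    # classes at all; if it is non-empty, the problem was only the threshold.
    print('classes:', rclasses)
    print('scores: ', rscores)
    visualization.plt_bboxes(img, rclasses, rscores, rbboxes)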

BR, JimmyYS

Gloria1217 commented 5 years ago

@speculaas Thanks a lot for your reply. I changed the select_threshold following your suggestion, but I still can't get any bounding boxes.

speculaas commented 5 years ago

Hi Gloria, you could also try fine-tuning? But I actually got stuck at "no detected boxes" too when I tried to train SSD to detect only faces (one class).

There is some discussion here, but there seems to be no conclusion or solution yet about why training with "one class" does not converge.

I gave up after training for several days on a 1080Ti:

    INFO:tensorflow:global step 792030: loss = 1.3387 (0.315 sec/step)

I guess maybe there is something wrong with the loss function when there is only one class. But I have some other urgent business, and I don't know whether I can get back to exploring balancap's SSD.

By "maybe something wrong with the loss function", I mean: if, for example, you choose to train the VGG-300 model, the loss is built here:

SSD-Tensorflow\nets\ssd_vgg_300.py:

    def ssd_losses(logits, localisations, ...):
        ...
        # Final negative mask.
        nmask = tf.logical_and(nmask, nvalues < max_hard_pred)
        fnmask = tf.cast(nmask, dtype)

        # Add cross-entropy loss.
        with tf.name_scope('cross_entropy_pos'):
            loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
                                                                  labels=gclasses)
            loss = tf.div(tf.reduce_sum(loss * fpmask), batch_size, name='value')
            tf.losses.add_loss(loss)
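
To make the hard-negative-mining step inspectable outside the graph, here is a standalone NumPy sketch of the same selection logic (illustrative only, not the repo's exact code; ssd_vgg_300.py does this with tf.where and tf.nn.top_k over batched tensors, and I'm assuming the usual defaults of match_threshold=0.5 and negative_ratio=3):

    import numpy as np

    # Toy setup: one object class plus background (num_classes = 2).
    logits = np.array([[ 4.0, -2.0],     # anchor 0: confidently background
                       [ 0.5,  0.3],     # anchor 1: uncertain
                       [-1.0,  3.0]])    # anchor 2: confidently the object
    gscores = np.array([0.1, 0.2, 0.8])  # overlap of each anchor with ground truth

    # Positives: anchors matched to a ground-truth box (match_threshold = 0.5).
    pmask = gscores > 0.5
    n_positives = int(pmask.sum())

    # Softmax over classes; column 0 is the background probability.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    predictions = e / e.sum(axis=1, keepdims=True)

    # Rank negatives by background probability; positives are masked out with 1.
    nvalues = np.where(~pmask, predictions[:, 0], 1.0)

    # Keep at most negative_ratio * n_positives "hard" negatives, i.e. the
    # negatives the model is least sure are background.
    negative_ratio = 3
    n_neg = min(negative_ratio * n_positives, int((~pmask).sum()))
    hard_negatives = np.argsort(nvalues)[:n_neg]

    print('positives:     ', np.where(pmask)[0])  # -> [2]
    print('hard negatives:', hard_negatives)      # -> [1 0]

Only the selected positives and mined hard negatives contribute to the cross-entropy terms shown above (via fpmask and fnmask), so a bug in the matching or masking for a single foreground class would directly distort the loss.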

BTW, why does the author write "And concat the crap!"? Is deep learning's numpy wrangling driving people mad?

BR, Jimmy