Open wysxhssy opened 6 years ago
I'm still trying to figure out why it's done this way, but it looks like the last layer of the model actually computes the loss function. You can see this in create_model(): model_loss is the output of the model.
So it looks like the fit is trying to make the output (i.e. the loss) zero.
I suspect the training target of np.zeros(...) is used entirely as a dummy placeholder argument so that model.fit(...) can be called. The target (y_true) is not used to compute the loss, since y_pred itself is the loss. The ground truth (the real target) is instead packaged into the input list. This is a hack the author mentioned in a code comment, and I wonder whether it can be done without it.
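To make the trick concrete, here is a minimal NumPy sketch (not the actual YAD2K/Keras code; all names are illustrative). The "model" computes its own loss and returns it as the output, the real targets ride along as an extra input, the fit target is a dummy zeros array, and the training objective is just the identity on y_pred, so driving the output toward zero minimizes the real loss:

```python
import numpy as np

# Illustrative "model": takes the input AND the ground truth, and its
# output IS the loss (like model_loss being the output of create_model()).
def model_forward(w, x, y_true_packed):
    y_pred = x * w                                   # toy linear prediction
    loss = np.mean((y_pred - y_true_packed) ** 2)    # loss computed inside the model
    return loss                                      # model output = loss value

# Keras-style objective used with this trick: ignore the dummy y_true
# and simply return y_pred, which is already the loss.
def identity_objective(y_true_dummy, y_pred):
    return y_pred

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])     # real targets, "packaged into the input list"
dummy_target = np.zeros(1)        # the np.zeros(...) placeholder passed to fit()

# Plain gradient descent on the objective: since the objective is the
# identity, pushing the output toward the zero dummy target just
# minimizes the internally computed loss.
w = 0.0
for _ in range(200):
    grad = (model_forward(w + 1e-5, x, y) - model_forward(w - 1e-5, x, y)) / 2e-5
    w -= 0.05 * grad

final_loss = identity_objective(dummy_target, model_forward(w, x, y))
print(round(w, 3), final_loss)   # w converges to 2.0, loss to ~0
```

The zeros never influence the gradient; they only satisfy fit()'s requirement that a target array be supplied with the correct shape.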
In retrain_yolo.py, I noticed that the training label passed to model.fit() is an array of zeros. I would appreciate it if anyone could explain what the zeros mean here and why the model is trained this way.