sparrow0629 closed this issue 8 years ago
Hi, you can view it as a form of overfitting, but I think the main reason is that the 12-net is too small/shallow to be discriminative on its own; that is why the cascade is needed. To achieve adequate results at test time with a single net, you would need to adopt a larger/deeper model.
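To make the cascade idea concrete, here is a minimal sketch (names and toy score functions are mine, not from this repo) of how the weak 12-net can still be useful: each stage only has to reject what the previous, cheaper stage could not, so permissive early thresholds are fine.

```python
import numpy as np

def run_cascade(windows, stages):
    """Run detection windows through a cascade of (score_fn, threshold) stages.

    Each stage scores the surviving windows and keeps only those whose
    confidence exceeds the stage threshold, so the later (larger) nets
    only ever see candidates the cheap nets could not reject.
    """
    surviving = np.asarray(windows)
    for score_fn, threshold in stages:
        if surviving.size == 0:
            break
        scores = score_fn(surviving)
        surviving = surviving[scores > threshold]
    return surviving

# Toy stand-ins for the 12/24/48-nets: the "score" is just the window value.
stages = [
    (lambda w: w, 0.05),  # 12-net: permissive threshold, keeps recall high
    (lambda w: w, 0.05),  # 24-net: same idea
    (lambda w: w, 0.3),   # 48-net: stricter threshold restores precision
]

if __name__ == "__main__":
    candidates = np.array([0.01, 0.06, 0.2, 0.5, 0.9])
    print(run_cascade(candidates, stages))  # only high-scoring windows survive
```

The point is that no single stage needs to be discriminative by itself; the product of the stage-wise rejections is what gives the cascade its precision.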
Thanks. By the way, I noticed you set the thresholds for the 12-net and 24-net to 0.05, and for the 48-net to 0.3. Is it necessary to set the thresholds that low?
Quoting the original paper: "We then apply a 2-stage cascade consists of the 12-net and 12-calibration-net on a subset of the AFLW images to choose a threshold T1 at 99% recall rate. Then we densely scan all background images with the 2-stage cascade. All detection windows with confidence score larger than T1 become the negative training samples for the 24-net."
Basically, if you want higher recall and do not care about precision, then the lower the threshold, the better!
Hi, I'm training the 6 nets using Caffe, and I find that my nets converge very fast. The 12-net, for example, starts at loss 0.69 and accuracy 0.68, but after 1000 iterations the loss drops to 0.01 and the accuracy reaches 0.994, after which they only change within a very small range. Does this mean the net is overfitting? When I test the net, I find that it doesn't filter out non-face regions as much as expected. How does this happen? Thanks