foolwood / DCFNet

DCFNet: Discriminant Correlation Filters Network for Visual Tracking
MIT License

The exact configurations for retraining the released model #6

Closed gongbudaizhe closed 7 years ago

gongbudaizhe commented 7 years ago

Hi,

I noticed that the current train_DCFNet.m script trains with networkType = 12, while the released model (DCFNet-net-7-125-2.mat) uses networkType = 7. So my question is: how do I retrain this model?

For now, the metadata of the released model is:

  1. Can I just set these values in train_DCFNet.m and then be good to go?
  2. What does the loss type mean?

Thanks

foolwood commented 7 years ago

Just set networkType = 7 and you can reproduce the training result. The other parameters remain unchanged from train_DCFNet.m.
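For reference, a minimal sketch of the change in train_DCFNet.m (assuming networkType is exposed as a plain script variable, as the thread describes; the comment wording is mine):

```matlab
% train_DCFNet.m -- reproduce the released model DCFNet-net-7-125-2.mat.
% Change the default architecture variant from 12 to 7; leave every
% other hyperparameter at the script's defaults.
networkType = 7;
```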

I want to make the loss type a distinguishing feature of DCFNet, but for now I have no time for this part. You can find some inspiration in the development log.

I had intended to use a regularized hinge loss. Since the network can be trained end-to-end, I can use a more powerful loss function rather than a simple L2/L1 loss. (But I have no time and no GPU, so you can try it yourself.)
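To make the comparison concrete: the current objective is a simple element-wise L2 loss between the predicted correlation response and the desired (Gaussian) label map, while a regularized hinge-style alternative might look something like the sketch below. This is only an illustration of the idea; the margin $m$, the weight $\lambda$, and the choice of regularizing the network parameters $w$ are hypothetical, not values from the repository.

```latex
% Current objective: L2 loss between response g and label map y
L_{\ell_2}(g, y) = \lVert g - y \rVert_2^2

% Hypothetical regularized hinge-style alternative, summing a margin
% penalty over response locations i plus a weight-decay term:
L_{\mathrm{hinge}}(g, y) = \sum_i \max\bigl(0,\; m - y_i\, g_i\bigr) + \lambda \lVert w \rVert_2^2
```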

gongbudaizhe commented 7 years ago

Thanks for your detailed explanation.

Two more questions about the training dynamics:

  1. How many iterations does it take for the network to converge?
  2. How does the tracking performance change across epochs?
foolwood commented 7 years ago

The training loss converges after 15-20 epochs (about 3-4 hours). The validation objective fluctuates by about ±0.3.

I didn't try the model from every epoch (since GPU resources are limited). I guess the performance gap will be within an acceptable range (±0.5% or so).

I do think such a shallow + dense (dense in resolution, not DenseNet) network design is very suitable for fast visual tracking, with no overfitting problem.