amoudgl / pygoturn

PyTorch implementation of GOTURN object tracker: Learning to Track at 100 FPS with Deep Regression Networks (ECCV 2016)
MIT License

Unstable results when evaluating the model #14

Closed — LiangXu123 closed this issue 5 years ago

LiangXu123 commented 6 years ago

Since we have an nn.Dropout() layer in the model, we should set the model to evaluation mode with self.model.eval() during testing. But in your code, test.py has no such call, so the Dropout layer is still active at test time and we get unstable output even from the same model and the same input.

But the worst part is: when we do not set self.model.eval(), we get unstable but relatively correct output, yet after setting self.model.eval() the results are even worse. Most of the time the output is completely wrong after a dozen frames, even though the first frame is initialized with the ground-truth box.

I don't get it. When I checked the original GOTURN in Caffe, I found the same setting: the train and test phases use the same .prototxt file, which is equivalent to calling self.model.train() even at test time. Any ideas? Thanks in advance.
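For reference, a minimal, self-contained sketch (a toy model, not the repo's actual test.py) of why two forward passes on the same input differ when dropout is left in train mode and become deterministic after model.eval():

```python
# Toy example: dropout makes inference stochastic unless the model is in eval mode.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5), nn.Linear(8, 4))
x = torch.randn(1, 8)

# Train mode (the default): dropout is active, so repeated passes on the same input differ.
model.train()
out1, out2 = model(x), model(x)
print(torch.allclose(out1, out2))  # usually False

# Eval mode: dropout becomes an identity op, so outputs are deterministic.
model.eval()
with torch.no_grad():
    out3, out4 = model(x), model(x)
print(torch.allclose(out3, out4))  # True
```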

amoudgl commented 6 years ago

Hi, I am still working on the inference part of the model, so test.py may need some revisions. As of now, train.py is ready; it just needs to be run for enough iterations to get a good working model.

In the original source code, they use the same model but set do_train to false in the test code here. So essentially, I believe, they are disabling the dropouts during testing, which is equivalent to PyTorch's model.eval().
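A hedged sketch of what the equivalent fix might look like in a PyTorch test script; the model class name, import path, and checkpoint path below are assumptions for illustration, not necessarily the repo's actual API:

```python
# Sketch only: GoNet, the import path, and the checkpoint file are placeholders.
import torch
from model import GoNet  # hypothetical import of the tracker network

model = GoNet()
state_dict = torch.load("checkpoints/pytorch_goturn.pth", map_location="cpu")
model.load_state_dict(state_dict)

model.eval()  # disables dropout, mirroring do_train = false in the Caffe test code

with torch.no_grad():  # also skip gradient bookkeeping during tracking
    ...  # run the frame-by-frame tracking loop here
```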

LiangXu123 commented 6 years ago

Sure, setting model.eval() is the standard procedure in most cases, but so far the result is completely wrong in the evaluation phase, which makes no sense, because Dropout handles the scaling problem itself.
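For context on the scaling point: PyTorch's nn.Dropout is inverted dropout, i.e. surviving activations are already scaled by 1/(1 - p) at train time, so no rescaling is needed in eval mode. A small sketch:

```python
# PyTorch uses inverted dropout: kept activations are scaled by 1/(1 - p)
# in train mode and passed through unchanged in eval mode.
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(10)

drop.train()
print(drop(x))  # mix of zeros and 2.0 (= 1.0 / (1 - 0.5)); expectation stays 1.0

drop.eval()
print(drop(x))  # identical to x: all ones
```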

amoudgl commented 5 years ago

Hi, I have fixed all these issues now. Please have a look at the updated README.