Closed wxywb closed 8 years ago
Hi, the 170000-iteration model was trained with the solver.prototxt in this repository. There are no training tricks in my experiment. In my opinion, more training samples will help your training. About 300000 training samples were used in my training.
Thanks for the reply. I found that the original patch-generation code crops 20288 patches from the 91-image dataset, and you used data augmentation to get 300000 patches from these 91 images, right?
Yes
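For anyone else reading this thread: the usual way to multiply a patch set like this is flip/rotation (and often scale) augmentation. The sketch below shows only the 8 dihedral variants (4 rotations × horizontal flip) of a patch; the exact augmentation used in this repository (e.g. whether scale augmentation is also applied to reach roughly 300000 from 20288 patches) is an assumption, not confirmed here.

```python
import numpy as np

def augment(patch):
    """Return the 8 dihedral variants of a 2-D patch:
    rotations by 0/90/180/270 degrees, each with and without
    a horizontal flip. A sketch of typical SR data augmentation;
    not necessarily the exact pipeline used in caffe-vdsr."""
    variants = []
    for k in range(4):                  # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(patch, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # horizontal flip of each rotation
    return variants

patch = np.arange(16, dtype=np.float32).reshape(4, 4)
variants = augment(patch)
print(len(variants))  # 8 variants per input patch
```

With 8 dihedral variants per patch, 20288 patches would give about 162000 samples, so reaching ~300000 presumably also involves extra scales or crop strides; that part is a guess.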
First, thanks for your work. I ran the code with Caffe, got an iter_200000 model, and tested it with your MATLAB code, but it doesn't match the performance of your 170000-iteration counterpart. Are there training tricks I need to apply during training (like decreasing the learning rate)? PS: the test loss decreases quickly and then gets stuck at about 0.185 on average after 30000 iterations.