Rookielike opened this issue 6 years ago
There is one video buffer per batch, so you can estimate the number of epochs from the total number of video buffers processed.
The loss curve looks reasonable. The result depends on the specific dataset, so I am not sure about your case; you can try different hyperparameters.
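For reference, a rough sketch of the iteration-to-epoch conversion implied above, assuming one video buffer per batch; the variable names and numbers below are placeholders for your own dataset, not values from the R-C3D code or paper:

```python
# Rough epoch estimate (sketch, not part of R-C3D):
# with one video buffer per batch, epochs ~= iterations / number of training buffers.
num_train_buffers = 1500   # total video buffers in your training set (placeholder)
max_iters = 30000          # solver iterations you trained for (placeholder)
batch_size = 1             # one video buffer per batch, as noted above

epochs = float(max_iters * batch_size) / num_train_buffers
print("approximate epochs: %.1f" % epochs)  # e.g. 20.0 with the placeholder numbers
```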
> On Sep 10, 2018, at 00:27, Rookielike wrote:
> Hi, thanks for providing your code. When I train the model on my own dataset, I only see the number of iterations and I don't know the number of epochs, so how can I find it? When training on my own dataset, my loss curve looks like the picture below. Is it right? I got 16.43% mAP on my own dataset, which includes seven gestures. I think that is too low; how can I improve the results? Thanks for reading.
> https://user-images.githubusercontent.com/28975635/45282585-f0906f80-b50d-11e8-8e8c-74af3470d5e6.PNG
@huijuan88 thanks for your reply