VisionLearningGroup / R-C3D

code for R-C3D

Total training time on THUMOS14 #39

Open ivyvideo opened 6 years ago

ivyvideo commented 6 years ago

Hi, I am running the code and find that it takes about two hours per 1000 iterations on a Tesla M40, and the default schedule is 60k iterations in total, so training would take a very long time to finish. What is the reason for that, and how long did your original training take? Thanks!

huijuan88 commented 6 years ago

If you resize the frames to be smaller, it will run faster.

It also depends on your machine; I get roughly 3 seconds per iteration.
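If helpful, here is a minimal pre-processing sketch (not part of the R-C3D codebase) for shrinking pre-extracted frames on disk before training; the folder layout and the 171x128 target size are assumptions, so adjust them to match your own setup and cfg.

```python
# Hypothetical frame-resizing sketch: shrink pre-extracted JPEG frames in place
# so each training iteration reads and convolves over smaller images.
import glob
import os

import cv2

FRAME_DIR = "frames/video_0001"   # assumed layout: one folder of JPEGs per video
TARGET_SIZE = (171, 128)          # (width, height) passed to cv2.resize; adjust as needed

for path in sorted(glob.glob(os.path.join(FRAME_DIR, "*.jpg"))):
    img = cv2.imread(path)
    if img is None:
        continue  # skip unreadable frames
    small = cv2.resize(img, TARGET_SIZE, interpolation=cv2.INTER_LINEAR)
    cv2.imwrite(path, small)  # overwrites in place; keep a backup if unsure
```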


ivyvideo commented 6 years ago

Is it related to the setting cfg.TRAIN.VIDEO_BATCH = 1? Should I set this parameter higher, e.g. 32 or 64?

ivyvideo commented 6 years ago

Sorry, but I have another question. As shown in this screenshot of part of minibatch.py (https://user-images.githubusercontent.com/42105211/44774273-4592c400-aba5-11e8-8898-ea82b849af8a.png), the parameter fg_rois_per_video is never used after it is assigned a value, so does the network process the 512 frames of only one video segment during the forward pass? And batch_size is not used either. Am I right?

huijuan88 commented 6 years ago

Currently there is only one video per batch.
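For context, fg_rois_per_video reads like the usual Faster R-CNN-style foreground quota, i.e. the cap on how many positive proposals are kept per sampled minibatch. Below is a minimal illustrative sketch of that sampling idea; the function name, thresholds, and defaults are assumptions for illustration, not the actual minibatch.py code.

```python
# Illustrative proposal-sampling sketch (assumed names, not the repo's code).
import numpy as np

def sample_rois(overlaps, batch_size=128, fg_fraction=0.25, fg_thresh=0.5):
    """Pick foreground/background proposal indices for one video segment.

    overlaps: 1-D array of each proposal's max temporal IoU with the ground truth.
    """
    fg_rois_per_video = int(round(fg_fraction * batch_size))  # foreground quota

    fg_inds = np.where(overlaps >= fg_thresh)[0]
    bg_inds = np.where(overlaps < fg_thresh)[0]

    # Keep at most the quota of foregrounds, fill the rest with backgrounds.
    fg_count = min(fg_rois_per_video, fg_inds.size)
    fg_inds = np.random.choice(fg_inds, size=fg_count, replace=False)

    bg_count = min(batch_size - fg_count, bg_inds.size)
    bg_inds = np.random.choice(bg_inds, size=bg_count, replace=False)

    return np.concatenate([fg_inds, bg_inds])
```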


ivyvideo commented 6 years ago

Thank you so much for the kind reply.

ivyvideo commented 6 years ago

@huijuan88 I have one more question: have you tried setting video_batch to more than 1?

huijuan88 commented 6 years ago

No. That part is not implemented.
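For what it would take, a VIDEO_BATCH > 1 data layer would at minimum have to stack several fixed-length clips along a new batch axis (and the downstream proposal/loss layers would have to track which video each proposal came from). A hedged shape-bookkeeping sketch, with all shapes and names assumed for illustration:

```python
# Hypothetical sketch of a multi-video input blob; not implemented in this repo.
import numpy as np

def stack_video_blobs(clips):
    """clips: list of arrays shaped (3, L, H, W) with identical L, H, W.

    Returns a blob shaped (N, 3, L, H, W) for a batched forward pass.
    """
    if len({c.shape for c in clips}) != 1:
        raise ValueError("all clips must share the same (C, L, H, W) shape")
    return np.stack(clips, axis=0)

# e.g. two 512-frame clips at 112x112 -> blob of shape (2, 3, 512, 112, 112)
blob = stack_video_blobs(
    [np.zeros((3, 512, 112, 112), dtype=np.float32) for _ in range(2)]
)
print(blob.shape)
```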


ivyvideo commented 6 years ago

OK, your help is greatly appreciated~~