Closed haoranD closed 4 years ago
Can you provide more details, like when does it crash?
Much appreciated for your reply.
Run Feature_Extractor.py ---> main() ---> train_loader = VideoIterTrain ---> class VideoIterTrain(data.Dataset) ---> self.video_clips = VideoClips() ---> VideoClips() initialization ---> _compute_frame_pts() ---> it crashes at `for batch in dl:`.
I didn't get any error messages; my PC just crashed after about 5 minutes. I monitored the GPU, CPU, and memory: this step barely uses the GPU, but it consumes a massive amount of memory (at least 48 GB, and it wanted more, or the PC crashes).
Thanks and have a nice day.
That probably has to do with torchvision's VideoClips class.
The quickest solution I can suggest is to rewrite data_loader so that it loads one video at a time, instead of loading the whole dataset at once, which is what I've done.
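The idea, roughly, in a torchvision-free sketch (the class and file names below are hypothetical placeholders; the real VideoClips takes the video path list plus clip-length arguments and does the expensive decoding on construction):

```python
# Hypothetical sketch: build the clip index per video inside a loop rather
# than passing every path to one big constructor call. FakeClips stands in
# for torchvision's VideoClips so the pattern can run anywhere.

class FakeClips:
    """Pretend index: one entry per video, two clips each."""
    def __init__(self, video_paths):
        # The real class scans every video here, which is the memory hog
        # when video_paths contains the whole dataset.
        self.video_paths = list(video_paths)
        self.clips = [[f"{p}#clip{i}" for i in range(2)] for p in video_paths]

all_videos = ["v0.mp4", "v1.mp4", "v2.mp4"]

# Eager (original) style: one call over everything.
eager = FakeClips(all_videos)

# Per-video style: one small call at a time, results merged.
merged_paths, merged_clips = [], []
for path in all_videos:
    single = FakeClips([path])          # only this video is touched
    merged_paths += single.video_paths
    merged_clips += single.clips

print(merged_paths == eager.video_paths)  # True
print(merged_clips == eager.clips)        # True
```

The merged index ends up identical to the eager one, but each iteration only ever holds one video's worth of intermediate state.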
Perfect! I will try it and come back. Thx
OK, so just in case you are slow like me (and my PC, it would seem): here is code to add to data_loader (around line 30) to get it to load one video at a time.
```python
with tqdm(total=len(self.video_list[1:]) + 1, desc=' total % of videos loaded') as pbar1:
    for video_list_used in self.video_list[1:]:  # length of load?
        pbar1.update(1)
        video_clips_out = VideoClips(
            video_paths=[video_list_used],
            clip_length_in_frames=self.total_clip_length_in_frames,
            frames_between_clips=self.total_clip_length_in_frames)
        self.video_clips.clips.append(video_clips_out.clips[0])
        self.video_clips.cumulative_sizes.append(video_clips_out.cumulative_sizes[0])
        self.video_clips.resampling_idxs.append(video_clips_out.resampling_idxs[0])
        self.video_clips.video_fps.append(video_clips_out.video_fps[0])
        self.video_clips.video_paths.append(video_clips_out.video_paths[0])
        self.video_clips.video_pts.append(video_clips_out.video_pts[0])
```
You don't have to add the `with tqdm(total=len(self.video_list[1:])+1, desc=' total % of videos loaded') as pbar1:` line; I was just playing around with tqdm to get a "total count". Without it you will still get a tqdm bar for every video loaded, unless you want to mess with the code in torchvision's VideoClips class.
Hope this helps someone.
Hey, sorry I screwed up.
The cumulative_sizes update I posted does not keep a running total, i.e. it just appends each video's own clip count.
You need to append the previous total plus the new size instead:

```python
self.video_clips.cumulative_sizes.append(
    self.video_clips.cumulative_sizes[-1] + video_clips_out.cumulative_sizes[0])
```

This should then keep the running total.
Hope that is now alright; please let me know if anyone spots any other problems.
Thanks
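To see why the running total matters, here is a torchvision-free toy demonstration (the clip counts are made up): a cumulative_sizes list in the torchvision style is used with bisect to map a global clip index back to a (video, clip) pair, which only works if the list is a true running total.

```python
import bisect

# Per-video clip counts (hypothetical): video 0 has 3 clips, video 1 has 5,
# video 2 has 2.
sizes = [3, 5, 2]

# Buggy version: just append each video's own size.
buggy = []
for s in sizes:
    buggy.append(s)            # -> [3, 5, 2], not monotonically increasing

# Fixed version: append the previous total plus this video's size.
fixed = []
for s in sizes:
    prev = fixed[-1] if fixed else 0
    fixed.append(prev + s)     # -> [3, 8, 10]

# cumulative_sizes is what maps a global clip index to (video, clip):
def locate(cumulative, idx):
    video = bisect.bisect_right(cumulative, idx)
    clip = idx - (cumulative[video - 1] if video > 0 else 0)
    return video, clip

print(locate(fixed, 4))   # global clip 4 -> video 1, clip 1
```

With the buggy list the lookup would be wrong for every video after the first, because bisect assumes the list is sorted and cumulative.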
Hi, thank you for your excellent work.
I am trying to extract the features myself, but I found that when I ran the feature extractor it used up all of my memory (32 GB) and my PC crashed.
Can I ask whether I should make any changes, or could you please give me some suggestions?