zhoubolei / TRN-pytorch

Temporal Relation Networks
http://relation.csail.mit.edu/

Error when testing #33

Open sparklingyueran opened 6 years ago

sparklingyueran commented 6 years ago

I tried to download the sample data. However, after downloading there is no file named bolei_juggling.mp4, just one file named juggling.mp4.

sparklingyueran commented 6 years ago

I checked the downloaded sample_data and found 'juggling.mp4' and a folder named 'juggling_frames'.

When I tried your test code directly on the mp4 file, it said 'Video must have at least 8 frames'.

When I tried the test code on the frame folder, it worked.

When I tried the test code on a new frame folder that I created, it said the size of the frames did not fit.

Why does the code only work on a folder of frames? How can I apply it directly to an mp4 file, or to frames of different sizes?

The following part is my output.

```
~/gitfile/TRN-pytorch]$ CUDA_VISIBLE_DEVICES=1 python test_video.py --arch InceptionV4 --dataset moments --weight pretrain/TRN_moments_RGB_InceptionV3_TRNmultiscale_segment8_best.pth.tar --frame_folder sample_data/juggling.mp4 --rendered_output sample_data/predicted_video.mp4
('Multi-Scale Temporal Relation Network Module in use', ['8-frame relation', '7-frame relation', '6-frame relation', '5-frame relation', '4-frame relation', '3-frame relation', '2-frame relation'])
Freezing BatchNorm2D except the first one.
Loading frames in sample_data/juggling.mp4
Traceback (most recent call last):
  File "test_video.py", line 130, in <module>
    frames = load_frames(frame_paths)
  File "test_video.py", line 57, in load_frames
    raise ValueError('Video must have at least {} frames'.format(num_frames))
ValueError: Video must have at least 8 frames
```
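For context, the guard that raises this error lives in `load_frames` in test_video.py. A minimal, stdlib-only sketch of equivalent logic (the function name and sampling scheme here are illustrative, not the repo's exact code):

```python
def load_frame_paths(frame_paths, num_frames=8):
    """Sketch of the guard in test_video.py's load_frames: the model
    needs at least num_frames frames, sampled at evenly spaced indices."""
    if len(frame_paths) < num_frames:
        raise ValueError('Video must have at least {} frames'.format(num_frames))
    # pick num_frames evenly spaced indices across the clip
    step = len(frame_paths) / float(num_frames)
    indices = [int(step * i) for i in range(num_frames)]
    return [frame_paths[i] for i in indices]
```

Passing juggling.mp4 via `--frame_folder` fails this check because the script lists image files inside the given path, and an .mp4 is not a folder, so the frame list comes back empty.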

```
~/gitfile/TRN-pytorch]$ CUDA_VISIBLE_DEVICES=1 python test_video.py --arch InceptionV4 --dataset moments --weight pretrain/TRN_moments_RGB_InceptionV3_TRNmultiscale_segment8_best.pth.tar --frame_folder sample_data/juggling_frames/ --rendered_output sample_data/predicted_video.mp4
('Multi-Scale Temporal Relation Network Module in use', ['8-frame relation', '7-frame relation', '6-frame relation', '5-frame relation', '4-frame relation', '3-frame relation', '2-frame relation'])
Freezing BatchNorm2D except the first one.
Loading frames in sample_data/juggling_frames/
RESULT ON sample_data/juggling_frames/
1.000 -> juggling
0.000 -> catching
0.000 -> balancing
0.000 -> performing
0.000 -> spinning
[MoviePy] >>>> Building video sample_data/predicted_video.mp4
[MoviePy] Writing video sample_data/predicted_video.mp4
 89%|███████████████████████████████████████ | 8/9 [00:00<00:00, 528.19it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: sample_data/predicted_video.mp4
```

```
~/gitfile/TRN-pytorch]$ CUDA_VISIBLE_DEVICES=1 python test_video.py --arch InceptionV4 --dataset moments --weight pretrain/TRN_moments_RGB_InceptionV3_TRNmultiscale_segment8_best.pth.tar --frame_folder sample_data/frame_test/ --rendered_output sample_data/predicted_video.mp4
('Multi-Scale Temporal Relation Network Module in use', ['8-frame relation', '7-frame relation', '6-frame relation', '5-frame relation', '4-frame relation', '3-frame relation', '2-frame relation'])
Freezing BatchNorm2D except the first one.
Loading frames in sample_data/frame_test/
Traceback (most recent call last):
  File "test_video.py", line 140, in <module>
    logits = net(input_var)
  File "/home/wangwq/anaconda3/envs/python2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pany/gitfile/TRN-pytorch/models.py", line 228, in forward
    base_out = base_out.view((-1, self.num_segments) + base_out.size()[1:])
RuntimeError: invalid argument 2: size '[-1 x 8 x 256]' is invalid for input with 12800 elements at /opt/conda/conda-bld/pytorch_1524577177097/work/aten/src/TH/THStorage.c:37
```
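The reshape failure above is arithmetic: 12800 elements at 256 features each means 50 frame-level outputs, and 50 is not a multiple of `num_segments = 8`, so `view((-1, 8, 256))` cannot infer a batch size. A stdlib-only sketch of the constraint (function name is illustrative, not the repo's code):

```python
def check_segment_view(total_elements, num_segments, feature_dim):
    """base_out.view((-1, num_segments, feature_dim)) only succeeds when
    the number of frame-level rows divides evenly into num_segments groups."""
    rows = total_elements // feature_dim          # frame-level outputs
    if rows % num_segments != 0:
        raise ValueError(
            "size '[-1 x %d x %d]' is invalid for input with %d elements"
            % (num_segments, feature_dim, total_elements))
    return rows // num_segments                   # inferred batch size
```

This is consistent with the custom frame folder failing: the number of frames loaded from it did not reduce to a multiple of 8 rows at the backbone output.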

wishinger-li commented 5 years ago

I met the same problem. Any advice?

holopekochan commented 5 years ago

It looks like the files are here: http://relation.csail.mit.edu/data/

As for loading a video directly: read the code in test_video.py under `# Obtain video frames` and you will understand it. Use the `video_file` argument.

AndyStrongBoy commented 5 years ago

hi, when I run 'test_models.py', I met a similar error: "RuntimeError: invalid argument 2: size '[-1 x 8 x 256]' is invalid for input with 768 elements at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/TH/THStorage.c:37". I have checked the 'num_segments' parameter in both the trained model and the test script; both are 8, but the error is still triggered. Do you have any idea why this happens? I would appreciate any answer!

AndyStrongBoy commented 5 years ago

hi, I solved the problem by changing line 143 of 'test_models.py' from `input_var = torch.autograd.Variable(data.view(-1, length, data.size(2), data.size(3)), volatile=True)` to `input_var = torch.autograd.Variable(data, volatile=True)`. Then it works fine, and the above error no longer occurs.

The reason I changed this line is that line 223 of 'main.py', `input_var = torch.autograd.Variable(input, volatile=True)`, does not change the input data's dimensions.

I don't know if this is a bug or something else, because some people run 'test_models.py' without problems. Do you know why this happens? Thanks!
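One way to reason about this fix is to trace the two reshapes symbolically: the one in test_models.py (`view(-1, length, H, W)`) and the later one in models.py (`view((-1, num_segments, feat))`). A stdlib-only sketch (the function name and feature size 256 are assumptions, not the repo's code), which also reproduces the "768 elements" message above when the data yields only 3 frame-level rows:

```python
def trace_segment_reshape(data_shape, length, num_segments, feat=256):
    """Symbolically follow a tensor of data_shape through
    view(-1, length, H, W) and then view((-1, num_segments, feat)).
    Returns the inferred batch size, or raises like PyTorch does."""
    n = 1
    for d in data_shape:
        n *= d
    h, w = data_shape[-2], data_shape[-1]
    rows = n // (length * h * w)        # frame-level inputs after the first view
    if rows % num_segments != 0:
        raise ValueError(
            "size '[-1 x %d x %d]' is invalid for input with %d elements"
            % (num_segments, feat, rows * feat))
    return rows // num_segments
```

For example, a well-formed batch of shape (2, 24, 224, 224) with length 3 gives 16 rows and an inferred batch size of 2, while a tensor that reduces to only 3 rows fails with exactly 3 * 256 = 768 elements, matching the error in this thread. Whether the extra `view` in test_models.py is the actual culprit on a given setup depends on what shape the dataloader delivers, which is presumably why some people never hit it.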

liuyanyu00 commented 5 years ago

It works, thanks!

jin03041209 commented 3 years ago

Hello, sorry to interrupt. When I run `python test_video.py --arch InceptionV3 --dataset moment --weight pretrain/TRN_moments_RGB_InceptionV3_TRNmultiscale_segment8_best.pth.tar --frame_folder sample_data/juggling.mp4 --rendered_datapredict sample_data/juggling.mp4`, I encounter `AttributeError: module 'model_zoo' has no attribute 'InceptionV3'`. I see that you changed "InceptionV3" to "InceptionV4". How did you do this, and would it be convenient to provide the source code? My email: 15227170973@163.com. Thanks!