ArrowLuo / CLIP4Clip

An official implementation for "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval"
https://arxiv.org/abs/2104.08860
MIT License

VIDIOC_REQBUFS: Inappropriate ioctl for device #77

Closed · wangyu0303 closed this issue 2 years ago

wangyu0303 commented 2 years ago

Hello, now that I have finished training, how do I test or evaluate the model? I get the error below. Can you help me?

```
VIDIOC_REQBUFS: Inappropriate ioctl for device
Traceback (most recent call last):
  File "main_task_retrieval.py", line 583, in <module>
    main()
  File "main_task_retrieval.py", line 580, in main
    eval_epoch(args, model, test_dataloader, device, n_gpu)
  File "main_task_retrieval.py", line 359, in eval_epoch
    for bid, batch in enumerate(test_dataloader):
  File "/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
    data = self._next_data()
  File "/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ubuntu/studentAssign/wangyu/CLIP4Clip-master/dataloaders/dataloader_msrvtt_retrieval.py", line 136, in __getitem__
    video, video_mask = self._get_rawvideo(choice_video_ids)
  File "/home/ubuntu/studentAssign/wangyu/CLIP4Clip-master/dataloaders/dataloader_msrvtt_retrieval.py", line 98, in _get_rawvideo
    raw_video_data = self.rawVideoExtractor.get_video_data(video_path)
  File "/home/ubuntu/studentAssign/wangyu/CLIP4Clip-master/dataloaders/rawvideo_util.py", line 76, in get_video_data
    image_input = self.video_to_tensor(video_path, self.transform, sample_fp=self.framerate, start_time=start_time, end_time=end_time)
  File "/home/ubuntu/studentAssign/wangyu/CLIP4Clip-master/dataloaders/rawvideo_util.py", line 36, in video_to_tensor
    total_duration = (frameCount + fps - 1) // fps
ZeroDivisionError: integer division or modulo by zero

Traceback (most recent call last):
  File "/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/lib/python3.6/site-packages/torch/distributed/launch.py", line 260, in <module>
    main()
  File "/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/lib/python3.6/site-packages/torch/distributed/launch.py", line 256, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ubuntu/studentAssign/wangyu/anaconda3/envs/ct/bin/python', '-u', 'main_task_retrieval.py', '--local_rank=0', '--do_eval', '--num_thread_reader=0', '--epochs=1', '--batch_size=1', '--n_display=50', '--train_csv', './MSRVTT/MSRVTT_train.9k.csv', '--val_csv', './MSRVTT/MSRVTT_JSFUSION_test.csv', '--data_path', './MSRVTT/MSRVTT_data.json', '--features_path', './all_videos', '--output_dir', './ckpts/ckpt_msrvtt_retrieval_looseType', '--lr', '1e-4', '--max_words', '32', '--max_frames', '12', '--batch_size_val', '16', '--datatype', 'msrvtt', '--expand_msrvtt_sentences', '--feature_framerate', '1', '--coef_lr', '1e-3', '--freeze_layer_num', '0', '--slice_framepos', '2', '--loose_type', '--linear_patch', '2d', '--sim_header', 'meanP', '--pretrained_clip_name', 'ViT-B/32']' returned non-zero exit status 1.
```

I would appreciate your help with this. Thank you in advance.

ArrowLuo commented 2 years ago

Hi @wangyu0303, I guess something is wrong with your videos. The ZeroDivisionError: integer division or modulo by zero is raised by total_duration = (frameCount + fps - 1) // fps, which can only happen when fps is read as 0, and that usually means OpenCV failed to open or decode the video file.
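
Since the crash only occurs when OpenCV reports fps as 0, one way to narrow it down is to scan the features_path folder and list the files OpenCV cannot decode. Below is a minimal, hypothetical sketch (not part of this repo): it assumes the videos sit directly under ./all_videos and that the cv2 build is the same one used by dataloaders/rawvideo_util.py.

```python
import glob
import os

import cv2


def find_unreadable_videos(features_path, extensions=(".mp4", ".avi", ".webm")):
    """Return video files that OpenCV cannot open or that report 0 fps / 0 frames.

    These are the files that would trigger the ZeroDivisionError in
    video_to_tensor, which divides by the fps reported by cv2.
    """
    bad = []
    for ext in extensions:
        for path in glob.glob(os.path.join(features_path, "*" + ext)):
            cap = cv2.VideoCapture(path)
            fps = cap.get(cv2.CAP_PROP_FPS)
            frame_count = cap.get(cv2.CAP_PROP_FRAME_COUNT)
            cap.release()
            if not cap.isOpened() or int(fps) == 0 or int(frame_count) == 0:
                bad.append(path)
    return bad


if __name__ == "__main__":
    # Hypothetical path matching the --features_path used in the command above.
    for path in find_unreadable_videos("./all_videos"):
        print("cannot decode:", path)
```

Re-encoding any flagged files (for example to H.264 mp4 with ffmpeg) or removing them from the test csv should let eval_epoch run through.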

wangyu0303 commented 2 years ago

Thank you for your reply. I owe you big time.