chaoyuaw / pytorch-coviar

Compressed Video Action Recognition
https://www.cs.utexas.edu/~cywu/projects/coviar/
GNU Lesser General Public License v2.1

raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width)) #20

Closed Tylerjoe closed 5 years ago

Tylerjoe commented 6 years ago

I ran into a problem in transforms.py. Could you please give me some advice? Thanks!

Traceback (most recent call last):
  File "train.py", line 275, in <module>
    main()
  File "train.py", line 104, in main
    train(train_loader, model, criterion, optimizer, epoch, cur_lr)
  File "train.py", line 134, in train
    for i, (input, target) in enumerate(train_loader):
  File "/root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 322, in __next__
    return self._process_next_batch(batch)
  File "/root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 357, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
ValueError: Traceback (most recent call last):
  File "/root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 106, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 106, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/data/code/project/pytorch-coviar/dataset.py", line 160, in __getitem__
    frames = self._transform(frames)
  File "/root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 49, in __call__
    img = t(img)
  File "/data/code/project/pytorch-coviar/transforms.py", line 124, in __call__
    crop_w, crop_h, offset_w, offset_h = self._sample_crop_size(im_size)
  File "/data/code/project/pytorch-coviar/transforms.py", line 153, in _sample_crop_size
    w_offset = random.randint(0, image_w - crop_pair[0])
  File "/root/anaconda3/envs/caffe-tf/lib/python3.6/random.py", line 221, in randint
    return self.randrange(a, b+1)
  File "/root/anaconda3/envs/caffe-tf/lib/python3.6/random.py", line 199, in randrange
    raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (0,-1, -1)

terminate called after throwing an instance of 'at::Error'
  what(): CUDA error (29): driver shutting down (check_status at /pytorch/aten/src/ATen/cuda/detail/CUDAHooks.cpp:36)
frame #0: at::detail::CUDAStream_free(CUDAStreamInternals&) + 0x50 (0x7fe59246aa50 in /root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
frame #1: THCStream_free + 0x13 (0x7fe56f4d0953 in /root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #2: std::_Rb_tree<std::shared_ptr, std::shared_ptr, std::_Identity<std::shared_ptr >, std::less<std::shared_ptr >, std::allocator<std::shared_ptr > >::_M_erase(std::_Rb_tree_node<std::shared_ptr >) + 0x8e (0x7fe56f4c1fbe in /root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #3: + 0xd1ca71 (0x7fe56f4c5a71 in /root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #4: + 0xd1caa0 (0x7fe56f4c5aa0 in /root/anaconda3/envs/caffe-tf/lib/python3.6/site-packages/torch/lib/libcaffe2_gpu.so)
frame #5: + 0x38e69 (0x7fe5b237ae69 in /lib64/libc.so.6)
frame #6: + 0x38eb5 (0x7fe5b237aeb5 in /lib64/libc.so.6)
frame #7: __libc_start_main + 0xfc (0x7fe5b2363b1c in /lib64/libc.so.6)

Tylerjoe commented 6 years ago

Is there something wrong in transforms.py? In the function _sample_crop_size, the crop size ends up larger than the image size. Should we change:

crop_sizes = [int(base_size * x) for x in self.scales]
crop_h = [self.input_size[1] if abs(x - self.input_size[1]) < 3 else x for x in crop_sizes]
crop_w = [self.input_size[0] if abs(x - self.input_size[0]) < 3 else x for x in crop_sizes]
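For what it's worth, the root cause is easy to reproduce outside the repo. The names below (image_w, crop_w) only mirror the traceback; the values are made up, chosen so the error message matches the one above:

```python
import random

image_w = 224   # width of the decoded frame (made-up value)
crop_w = 226    # sampled crop width, two pixels wider than the frame (made-up)

try:
    # transforms.py does the equivalent of this when it picks a horizontal offset.
    w_offset = random.randint(0, image_w - crop_w)
except ValueError as e:
    # randint(0, -2) asks for an empty range, which reproduces the
    # "empty range for randrange() (0,-1, -1)" seen in the traceback.
    print(e)
```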

yunfanLu commented 6 years ago

I met the same problem.

chaoyuaw commented 6 years ago

Yes, I think if your image is smaller than the crop size, then you might want to change it. Are you using UCF101 and HMDB prepared by "getting_started.md", or other datasets? If it's the former, there could be something wrong in the data preparation step, since reencode.sh should resize all videos to 340x256, and this shouldn't happen.
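For anyone stuck at this step, one quick sanity check on the data preparation is to query every re-encoded video's resolution with ffprobe and flag anything that is not 340x256. The glob path below is only an example layout; adjust it to wherever your reencode.sh output lives:

```python
import glob
import subprocess

# Example location of the re-encoded videos; adjust the path/extension to your setup.
VIDEO_GLOB = "data/hmdb51/mpeg4_videos/*/*.mp4"

for path in glob.glob(VIDEO_GLOB):
    # Ask ffprobe for the width and height of the first video stream, e.g. "340,256".
    out = subprocess.check_output(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "csv=p=0", path],
        universal_newlines=True).strip()
    values = out.split(",")
    width, height = int(values[0]), int(values[1])
    if (width, height) != (340, 256):
        print("unexpected size %dx%d: %s" % (width, height, path))
```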

lmnhsp commented 5 years ago

Have you solved this problem yet? I have the same problem as you. I've already resized all videos to 340x256 using reencode.sh. [screenshot]

lmnhsp commented 5 years ago

I'm looking forward to your reply.

wujunyi627 commented 5 years ago

Maybe you use transforms.Resize((a, b)) and transforms.RandomCrop((c, d)), but a < c or b < d, so the resized image is smaller than the requested crop.
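As a side note, that failure mode can be reproduced with plain torchvision, independent of this repo's custom transforms. This is only an illustrative sketch, and the exact error message depends on the torchvision version:

```python
from PIL import Image
from torchvision import transforms

img = Image.new("RGB", (200, 200))  # a dummy 200x200 image

ok = transforms.Compose([
    transforms.Resize((256, 256)),      # resized image >= crop size: works
    transforms.RandomCrop((224, 224)),
])
ok(img)

bad = transforms.Compose([
    transforms.Resize((200, 200)),      # resized image < crop size: RandomCrop fails
    transforms.RandomCrop((224, 224)),
])
try:
    bad(img)
except ValueError as e:
    print(e)  # message varies by torchvision version, but it is the same size mismatch

# RandomCrop(..., pad_if_needed=True) is one way out if the larger crop is really needed.
```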

Tylerjoe commented 5 years ago

> I'm looking forward to your reply.

This is a problem with the frame size. You should resize your images, or change the code like this:

crop_sizes = [int(base_size * x) for x in self.scales]
crop_h = [self.input_size[1] if abs(x - self.input_size[1]) < 3 else x for x in crop_sizes]
crop_w = [self.input_size[0] if abs(x - self.input_size[0]) < 3 else x for x in crop_sizes]
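Since the thread never spells out the actual code change, here is one possible interpretation, a simplified sketch rather than the repo's official fix: clamp each candidate crop side to the real frame size so that image_w - crop_pair[0] (and the height counterpart) can never go negative. The function name and the flat random.choice over crop pairs are simplifications of the original _sample_crop_size sampling logic, so treat this as illustration only:

```python
import random

def sample_crop_size(image_w, image_h, scales, input_size):
    """Hypothetical, clamped variant of _sample_crop_size (illustrative only)."""
    base_size = min(image_w, image_h)
    crop_sizes = [int(base_size * s) for s in scales]
    # Snap sizes that are within 3 px of the network input size, as the quoted
    # code does, but never let a crop side exceed the frame itself.
    crop_h = [min(image_h, input_size[1] if abs(x - input_size[1]) < 3 else x)
              for x in crop_sizes]
    crop_w = [min(image_w, input_size[0] if abs(x - input_size[0]) < 3 else x)
              for x in crop_sizes]
    crop_pair = random.choice(list(zip(crop_w, crop_h)))
    # With the clamping above, both ranges are always non-negative.
    w_offset = random.randint(0, image_w - crop_pair[0])
    h_offset = random.randint(0, image_h - crop_pair[1])
    return crop_pair[0], crop_pair[1], w_offset, h_offset

# Example: a frame smaller than 340x256 no longer crashes the offset sampling.
print(sample_crop_size(168, 128, scales=[1, .875, .75, .66], input_size=(224, 224)))
```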