Closed — nelaturuharsha closed this issue 4 years ago
I updated the code earlier but had not yet added the new pre-trained model, which has much better performance.
Could you try the new code with the new eval.py and the bicubic model? Before testing, you should first prepare the LR frames in your test set directory. Taking the Vid4 dataset as an example, you will have "data/Vid4/city/LR_bicubic", "data/Vid4/foliage/LR_bicubic", and so on.
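As a quick sanity check on that layout, a small sketch like the following can flag clips that are missing their LR directory (the helper name `check_lr_dirs` is hypothetical and not part of the repo; only the `LR_bicubic` layout comes from the comment above):

```python
import os

def check_lr_dirs(test_root, clips, lr_dirname="LR_bicubic"):
    """Return the clips under test_root that are missing their LR frame directory.

    Illustrative helper for the layout described above, e.g.
    data/Vid4/city/LR_bicubic, data/Vid4/foliage/LR_bicubic, ...
    """
    missing = []
    for clip in clips:
        # each clip is expected to contain an LR_bicubic subdirectory
        if not os.path.isdir(os.path.join(test_root, clip, lr_dirname)):
            missing.append(clip)
    return missing
```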
If you have any questions or issues when running the code, please let me know.
Same problem here. Following your kind advice, I updated the code to the newest version you just pushed; however, another problem occurs:
Traceback (most recent call last):
  File "eval.py", line 117, in <module>
    output, _ = model(lr)
  File "/data/software/anaconda3/envs/py36_torch031/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/software/anaconda3/envs/py36_torch031/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 123, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/data/software/anaconda3/envs/py36_torch031/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 133, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/data/software/anaconda3/envs/py36_torch031/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
    raise output
  File "/data/software/anaconda3/envs/py36_torch031/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 53, in _worker
    output = module(*input, **kwargs)
  File "/data/software/anaconda3/envs/py36_torch031/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/guotaian/vsr/TDAN-VSR-master/model.py", line 231, in forward
    lrs = self.align(out, x_center)  # motion alignments
  File "/data/guotaian/vsr/TDAN-VSR-master/model.py", line 199, in align
    fea = (self.dconv_1(fea, offset1))
  File "/data/software/anaconda3/envs/py36_torch031/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/guotaian/vsr/TDAN-VSR-master/modules/deform_conv.py", line 43, in forward
    self.num_deformable_groups)
  File "/data/guotaian/vsr/TDAN-VSR-master/functions/deform_conv.py", line 23, in conv_offset2d
    return f(input, offset, weight)
  File "/data/guotaian/vsr/TDAN-VSR-master/functions/deform_conv.py", line 54, in forward
    self.dilation[0], self.deformable_groups)
  File "/data/software/anaconda3/envs/py36_torch031/lib/python3.6/site-packages/torch/utils/ffi/__init__.py", line 202, in safe_call
    result = torch._C._safe_call(*args, **kwargs)
torch.FatalError: invalid argument 5: 4D weight tensor (nOutputPlane,nInputPlane,kH,kW) expected, but got: (null) at /mnt/ssd0/project/ytian21/videoSR/DCN/src/deform_conv_cuda.c:15
How can this problem be solved? Thank you so much!
So I got it to run! It super-resolved the input images by 4x. I believe documenting these steps will be useful for people going forward:
conda install pytorch==0.3.1 torchvision cuda90 -c pytorch
Verify that torch.version.cuda returns 9.0 (PyTorch 0.3.1 does not support CUDA 10 and later, and the device must support CUDA 9+).
After this, the directory layout to maintain is test_folder/input/LR_bicubic/, as you had mentioned.
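The version constraints in the steps above can be checked up front. This is a minimal sketch (the helper `check_env` is hypothetical); call it with `torch.__version__` and `torch.version.cuda`:

```python
def check_env(torch_version, cuda_version):
    """Return a list of problems with the detected PyTorch/CUDA versions.

    Encodes the constraints described above: the compiled deformable conv
    expects PyTorch 0.3.x, and PyTorch 0.3.1 binaries target CUDA 9.0.
    """
    problems = []
    if not torch_version.startswith("0.3"):
        problems.append("expected PyTorch 0.3.x, got " + torch_version)
    if cuda_version != "9.0":
        problems.append("expected CUDA 9.0 build, got " + str(cuda_version))
    return problems
```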
At line 111 in eval.py
frames_lr[idx, :, :, :] = io.imread(os.path.join(LR, ims[k]))
I got this error
ValueError: Could not find a format to read the specified file in single-image mode
I fixed this by manually typing out the full path in place of "LR". Just wanted to make a note of it here, since it seems to be a common skimage.io error.
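That skimage error message is often a misleading symptom of a bad path rather than a bad image format, so one way to fail early is to validate the joined path before handing it to io.imread. A small sketch (the helper `resolve_frame_path` is illustrative, not part of the repo):

```python
import os

def resolve_frame_path(lr_dir, name):
    """Resolve the path passed to io.imread, raising early if it is wrong.

    skimage's "Could not find a format" error frequently just means the
    joined path does not point at a real file, so check that explicitly.
    """
    path = os.path.abspath(os.path.join(lr_dir, name))
    if not os.path.isfile(path):
        raise FileNotFoundError("LR frame not found: " + path)
    return path
```

At eval.py line 111, the call would then become `io.imread(resolve_frame_path(LR, ims[k]))`.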
Thank you so much for the prompt response.
Regards, Sree Harsha
I am happy to learn that you can run the code successfully. TDAN uses much less memory than our Zooming Slo-Mo, but I have not run a test to find its exact upper bound. You can always split frames into smaller patches and run testing on those for large inputs. Please check the chop_forward() function in solver.py.
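The idea behind chop_forward() can be sketched as follows. This is a simplified 2D, NumPy-only illustration (the repo's version in solver.py works on batched tensors and recurses on patches that are still too large): split the frame into four overlapping quadrants, super-resolve each one, and paste back only the non-overlapping core of each result.

```python
import numpy as np

def chop_forward_2d(img, model, scale, overlap=8):
    """Chop-and-stitch inference sketch: SR four overlapping quadrants of a
    2D frame and keep only each quadrant's core region in the output."""
    h, w = img.shape
    h2, w2 = h // 2, w // 2
    out = np.zeros((h * scale, w * scale), dtype=img.dtype)
    # (patch bounds y0:y1, x0:x1) and (core bounds cy0:cy1, cx0:cx1)
    patches = [
        (0, h2 + overlap, 0, w2 + overlap, 0, h2, 0, w2),   # top-left
        (0, h2 + overlap, w2 - overlap, w, 0, h2, w2, w),   # top-right
        (h2 - overlap, h, 0, w2 + overlap, h2, h, 0, w2),   # bottom-left
        (h2 - overlap, h, w2 - overlap, w, h2, h, w2, w),   # bottom-right
    ]
    for y0, y1, x0, x1, cy0, cy1, cx0, cx1 in patches:
        sr = model(img[y0:y1, x0:x1])  # SR the padded patch
        # copy only the core (overlap trimmed) into the assembled output
        core = sr[(cy0 - y0) * scale:(cy1 - y0) * scale,
                  (cx0 - x0) * scale:(cx1 - x0) * scale]
        out[cy0 * scale:cy1 * scale, cx0 * scale:cx1 * scale] = core
    return out
```

With a linear "model" such as nearest-neighbor upsampling, the stitched output matches running the model on the whole frame; for a real network the overlap hides boundary artifacts at the seams.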
Thank you for the prompt replies. I am currently trying out chop_forward, and so far it has not run out of memory. I probably should bring this up in the Zooming Slo-Mo repo, but just in case: would it be possible to adapt the same strategy there as well?
Yes, it can be applied to Zooming Slo-Mo as well.
Hello, I'm facing this error while using PyTorch 0.3.1 installed via pip. I am on an RTX 2080 Ti with CUDA 10, Python 3.6, and Ubuntu 18.04.
Traceback (most recent call last):
  File "eval.py", line 41, in <module>
    model = torch.load(model_path)
  File "/home/user/anaconda3/envs/enlighten/lib/python3.6/site-packages/torch/serialization.py", line 267, in load
    return _load(f, map_location, pickle_module)
  File "/home/user/anaconda3/envs/enlighten/lib/python3.6/site-packages/torch/serialization.py", line 420, in _load
    result = unpickler.load()
AttributeError: Can't get attribute 'DSW' on <module 'model' from '/home/user/TDAN-VSR/model.py'>
I face this error when I run: python eval.py -t test_example/
Also, when I run bash make.sh, there is no output; it simply exits.
Would be grateful for the help! Thank you, Sree Harsha