TruongKhang / cds-mvsnet

[ICLR2022] Curvature-guided dynamic scale networks for Multi-view Stereo

error: RuntimeError: Caught RuntimeError in replica 0 on device 0. RuntimeError: The size of tensor a (5120) must match the size of tensor b (80) at non-singleton dimension 3 #12

Closed: divingwolf closed this issue 2 years ago

divingwolf commented 2 years ago

python train.py --config configs/config_dtu.json

........
    (res): Conv2d(8, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
  )
)
Epoch 1 temperature 1.0
Traceback (most recent call last):
  File "train.py", line 83, in <module>
    main(config)
  File "train.py", line 64, in main
    trainer.train()
  File "/home/tianle/cds-mvsnet/base/base_trainer.py", line 63, in train
    result = self._train_epoch(epoch)
  File "/home/tianle/cds-mvsnet/trainer/trainer.py", line 78, in _train_epoch
    outputs = self.model(imgs, cam_params, depth_values, gt_depths=depth_gt_ms, temperature=temperature)
  File "/data/tianle/anaconda3/envs/cdsmvs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/tianle/anaconda3/envs/cdsmvs/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 167, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/data/tianle/anaconda3/envs/cdsmvs/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 177, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/data/tianle/anaconda3/envs/cdsmvs/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
    output.reraise()
  File "/data/tianle/anaconda3/envs/cdsmvs/lib/python3.8/site-packages/torch/_utils.py", line 429, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
  File "/data/tianle/anaconda3/envs/cdsmvs/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
    output = module(*input, **kwargs)
  File "/data/tianle/anaconda3/envs/cdsmvs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/tianle/cds-mvsnet/models/model.py", line 194, in forward
    outputs_stage = self.stage_net(features_stage, proj_matrices_stage,
  File "/data/tianle/anaconda3/envs/cdsmvs/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/tianle/cds-mvsnet/models/model.py", line 64, in forward
    gt_warped_vol = homo_warping_3D(src_fea, src_proj_new, ref_proj_new, gt_depth)
  File "/home/tianle/cds-mvsnet/models/utils/warping.py", line 91, in homo_warping_3D
    rot_depth_xyz = rot_xyz.unsqueeze(2).repeat(1, 1, num_depth, 1) * depth_values.view(batch, 1, num_depth,
RuntimeError: The size of tensor a (5120) must match the size of tensor b (80) at non-singleton dimension 3
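
For reference, a minimal sketch of the operation that fails in homo_warping_3D; the shapes below are assumptions chosen only to reproduce the same size mismatch, not the actual values from this run:

```python
import torch

# homo_warping_3D multiplies rotated pixel coordinates, shaped
# [B, 3, num_depth, H*W], element-wise by depth values reshaped to
# [B, 1, num_depth, -1]. If the depth tensor does not hold H*W values
# per depth plane, the product fails at dimension 3, as in the error above.
B, num_depth = 1, 1
feat_h, feat_w = 64, 80                       # assumed feature resolution (64 * 80 = 5120)

rot_xyz = torch.rand(B, 3, feat_h * feat_w)   # rotated homogeneous coordinates
depth_bad = torch.rand(B, num_depth, 80)      # depth with a mismatched spatial size

try:
    rot_depth_xyz = rot_xyz.unsqueeze(2).repeat(1, 1, num_depth, 1) \
        * depth_bad.view(B, 1, num_depth, -1)
except RuntimeError as e:
    print(e)  # size of tensor a (5120) must match the size of tensor b (80) at dimension 3
```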

TruongKhang commented 2 years ago

Hello @jinshuitiaofan, did you modify the code or the config file? I have no problem running my code on my machine!

divingwolf commented 2 years ago

I changed the dataset path in config_dtu.json to /data/tianle/DTUdata, and I modified these two lines in dtu_yao.py:

mask_filename_hr = os.path.join(self.datapath, 'Depth/{}/depth_visual_{:0>4}.png'.format(scan+'_train', vid))
depth_filename_hr = os.path.join(self.datapath, 'Depth/{}/depth_map_{:0>4}.pfm'.format(scan+'_train', vid))

Before I changed them, I got a "no such file or directory" error.
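
A quick way to check that those paths actually resolve (a throwaway sketch; datapath, scan, and vid are example values for one sample, and the file names assume the standard depth_visual_/depth_map_ naming):

```python
import os

# Throwaway check that the modified dtu_yao.py paths point at real files.
datapath = "/data/tianle/DTUdata"
scan, vid = "scan1", 0

mask_filename_hr = os.path.join(datapath, 'Depth/{}/depth_visual_{:0>4}.png'.format(scan + '_train', vid))
depth_filename_hr = os.path.join(datapath, 'Depth/{}/depth_map_{:0>4}.pfm'.format(scan + '_train', vid))

for path in (mask_filename_hr, depth_filename_hr):
    print(path, '->', 'found' if os.path.exists(path) else 'missing')
```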

divingwolf commented 2 years ago

> Hello @jinshuitiaofan, did you modify the code or the config file? I have no problem running my code on my machine!

The dataset I downloaded is organized as Depth/scan1_train/..., not as Depths_raw/scan... like your directory structure.

TruongKhang commented 2 years ago

Oh, I think you should use Depths_raw with the scan folders. The scan_train folders inside Depth contain preprocessed depth maps that have already been resized, while my dtu_yao.py downsamples from the raw depth. That's why it raises an error when you use the scan_train folders.
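
In other words, the loader is expected to start from the raw high-resolution depth and downsample it itself, so already-resized maps from Depth/scan_train end up with shapes the warping step does not expect. A minimal sketch of that idea, not the actual dtu_yao.py code (function name, target size, and interpolation are assumptions):

```python
import cv2
import numpy as np

def downsample_raw_depth(depth_hr: np.ndarray, target_hw=(128, 160)) -> np.ndarray:
    """Downsample a raw high-resolution depth map to the training resolution."""
    h, w = target_hw
    # cv2.resize takes (width, height); nearest-neighbour keeps depth values intact
    return cv2.resize(depth_hr, (w, h), interpolation=cv2.INTER_NEAREST)

# Example with a fake 1200x1600 raw depth map (DTU raw depths are high resolution)
raw_depth = np.random.rand(1200, 1600).astype(np.float32)
print(downsample_raw_depth(raw_depth).shape)  # (128, 160)
```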

divingwolf commented 2 years ago

> Oh, I think you should use Depths_raw with the scan folders. The scan_train folders inside Depth contain preprocessed depth maps that have already been resized, while my dtu_yao.py downsamples from the raw depth. That's why it raises an error when you use the scan_train folders.

Oh, I downloaded the preprocessed training dataset from Yao Yao's GitHub page, and I couldn't find a Depths_raw folder in that rar file. Could you give me the download link for the raw data you used in this project?

TruongKhang commented 2 years ago

@jinshuitiaofan, please refer to CasMVSNet here