Open hcc0912 opened 2 months ago
I think there are two possible reasons for this issue: either the shape of pers_img does not match, or the generation of "pred-512_13.pth" was incorrect.
- Check the shape of pers_img
- Generate a new "pred-512_13.pth"
I am using the 'pred-512_13.pth' file you provided, so there shouldn't be any issue with it. I suspect the shape of 'pers_img'. I'd like to know whether you loaded the raw data directly with Matterport3d.py, or whether any other operations were involved, because when I loaded the dataset according to your code and then ran evaluate2.py, the shape was different from the comments you provided.
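For what it's worth, the indexing in P2E.py (`Ia = pers_img[:, :, y0, x0, z]`) requires a 5-D tensor, so a quick shape check before calling pers2equi can confirm the hypothesis. This is only a sketch; the function name and the 18-patch count are assumptions taken from the `#bs, 1, 512, 1024, 18` comment in Refine.py:

```python
import torch

def check_pers_img(pers_img, n_patches=18):
    """Hypothetical shape check before pers2equi. The indexing
    pers_img[:, :, y0, x0, z] needs 5 dims: (bs, C, H, W, n_patches).
    n_patches=18 is assumed from the comment in Refine.py."""
    if pers_img.dim() != 5:
        # A 4-D tensor here reproduces the reported
        # "IndexError: too many indices for tensor of dimension 4".
        raise ValueError(
            f"pers_img has {pers_img.dim()} dims {tuple(pers_img.shape)}; "
            f"expected 5: (bs, C, H, W, {n_patches})")
    if pers_img.shape[-1] != n_patches:
        raise ValueError(
            f"last dim is {pers_img.shape[-1]}, expected {n_patches}")
    return pers_img
```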
Please delete line 111 in evaluate2.py: `depth_path = torch.unsqueeze(depth_patch, dim=0)`
The code doesn't use the depth_path variable anywhere, so deleting that line doesn't change execution; the same error persists.
I suspect there might be an issue with the operations on depth_patch_map in Refine.py.
We don't have any additional operations.
These are the relevant parameters from our run.
We did not upload pred-512_13.pth. After the above modifications, an incorrect pred-512_13.pth may also cause this error. We will upload this file.
Could you please grant me access to pred_512_13? I get an access-denied error when trying to download it. Thank you.
We have uploaded pred-512_13.pth. Now you can download it
First of all, thank you very much for all your previous answers. Could I also trouble you to provide the complete code, including train.py, etc.? I'd like to fully reproduce your research contribution.
You can refer to the training methods in https://github.com/alibaba/UniFuse-Unidirectional-Fusion
Could you please provide the relevant code for the loss function?
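In case it helps: depth-estimation work in this line (including UniFuse, which the authors point to for training) commonly uses a reverse Huber (BerHu) objective. The following is only a sketch under that assumption, not the authors' actual loss code:

```python
import torch

def berhu_loss(pred, target):
    """Reverse Huber (BerHu) loss sketch for dense depth regression.
    This is an assumption based on common practice (e.g. UniFuse),
    not the loss actually used in this repository."""
    mask = target > 0                                 # ignore invalid depth pixels
    diff = (pred - target)[mask].abs()
    c = torch.clamp(0.2 * diff.max(), min=1e-6)       # adaptive threshold per batch
    # L1 below the threshold, scaled L2 above it
    l1 = torch.where(diff <= c, diff, torch.zeros_like(diff))
    l2 = torch.where(diff > c, (diff ** 2 + c ** 2) / (2 * c),
                     torch.zeros_like(diff))
    return (l1 + l2).mean()
```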
When I run evaluate2.py, the following error occurs. Is there a problem with the code?

```
Traceback (most recent call last):
  File "F:\Hcc_2\MODE-main\evaluate2.py", line 129, in <module>
    main()
  File "F:\Hcc_2\MODE-main\evaluate2.py", line 112, in main
    outputs = model(equi_inputs, depth_patch, roll_idx, flip)
  File "C:\ProgramData\Anaconda3\envs\hcc_envs\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\Hcc_2\MODE-main\networks\Refine.py", line 84, in forward
    depth_init = pers2equi(depth_patch, self.fov, self.nrows, (self.patch_size, self.patch_size), (self.equi_h//4, self.equi_w//4), "pred_512_13", roll_idx, flip)  # bs, 1, 512, 1024, 18
  File "F:\Hcc_2\MODE-main\networks\P2E.py", line 172, in pers2equi
    Ia = pers_img[:, :, y0, x0, z]
IndexError: too many indices for tensor of dimension 4
```
![QQ图片20240409172103](https://github.com/lkku1/MODE/assets/113153102/7c53f13b-93b2-4f78-b102-182f8b5e2659)
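For anyone hitting the same trace: a minimal reproduction, with hypothetical sizes (the 18 patches come from the comment in Refine.py), shows the error arises from indexing a 4-D tensor with five index expressions:

```python
import torch

# Five index expressions, as in Ia = pers_img[:, :, y0, x0, z] in P2E.py.
y0 = torch.zeros(3, dtype=torch.long)
x0 = torch.zeros(3, dtype=torch.long)
z = torch.zeros(3, dtype=torch.long)

pers_img = torch.zeros(1, 1, 8, 8)    # 4-D: the patch dimension is missing
try:
    Ia = pers_img[:, :, y0, x0, z]
except IndexError as e:
    print(e)                          # too many indices for tensor of dimension 4

# With the expected 5-D layout (bs, C, H, W, n_patch) the indexing works;
# the sizes here are illustrative, not the real ones.
pers_img = torch.zeros(1, 1, 8, 8, 18)
Ia = pers_img[:, :, y0, x0, z]        # advanced indexing -> shape (1, 1, 3)
```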