Closed qkgia closed 2 years ago
Could you share the input images to reproduce this? Thanks!
Closing due to inactivity
Sorry for the late reply. I tried images from this dataset: https://iccv2021-mmp.github.io/subpage/dataset.html
Thanks for the message!
1) Which pytorch and detectron2 versions are you using?
2) Could you please print out the sizes of emb_i and emb_j before the cdist operation at line 76 in planeformers/models/inference.py and share them?
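A minimal sketch of that debug print (the variable names emb_i and emb_j are taken from the traceback; the shapes below are illustrative stand-ins, not the real embeddings):

```python
import torch

# Hypothetical debug line placed just before the torch.cdist call in
# build_connectivity_graph; replace the stand-ins with the real tensors.
emb_i = torch.randn(5, 128)  # example: 5 plane embeddings of dim 128
emb_j = torch.randn(5, 128)

print("emb_i:", tuple(emb_i.shape), "emb_j:", tuple(emb_j.shape))
```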
3) Do you have both the PlaneFormer checkpoint and SparsePlanes backbone checkpoint stored under PlaneFormers/models?
@samiragarwala Here are the results of a successful run with 2 input images. However, the visualization of the results looks weird. https://1drv.ms/u/s!Arx_-0rtHxmn5FZKKI_9kFig0ah5?e=qpCcTL
Thanks for sharing all this information! To run more than two images, you don't need to modify anything; just provide more input image files when running the model with run.py.
Could you try running the system on images from Matterport3D which can be found at https://github.com/samiragarwala/PlaneFormers/tree/gh-pages/resources?
I have updated the visualization code and it should work now!
@samiragarwala I tried images from Matterport3D, and it runs OK. Do I need to re-train the model for other types of images? The visualized images look correct now, but the texture map for the .obj file still looks weird.
Could you share your output? In what sense does it not look right? I can look into it!
I also hit this error when trying to reproduce your code on several images from scannet_v2. Could you please advise on how to handle it? My emb_i and emb_j shapes print as follows:

```
0 torch.Size([5, 128]) torch.Size([5, 128])
1 torch.Size([5, 128]) torch.Size([128])
```
Thanks for sharing your code. I got the following error when running inference code.
```
File "PlaneFormers/planeformers/models/inference.py", line 76, in build_connectivity_graph
    dist_mat = torch.cdist(emb_i, emb_j)
File "lib/python3.7/site-packages/torch/functional.py", line 1153, in cdist
    return _VF.cdist(x1, x2, p, None)  # type: ignore[attr-defined]
RuntimeError: cdist only supports at least 2D tensors, X2 got: 1D
```
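The shapes reported above suggest what goes wrong: when an image contributes only a single plane, its embedding tensor collapses to 1D (torch.Size([128])), and torch.cdist requires both inputs to be at least 2D. A sketch of a possible guard, assuming that diagnosis; this is not necessarily the repo's own fix:

```python
import torch

emb_i = torch.randn(5, 128)  # plane embeddings from image i (5 planes)
emb_j = torch.randn(128)     # 1D: what emb_j looks like with a single plane

# torch.cdist(emb_i, emb_j) would raise:
# RuntimeError: cdist only supports at least 2D tensors, X2 got: 1D

# Possible workaround: restore the plane dimension before computing
# pairwise distances, so a single plane is treated as a batch of one.
if emb_j.dim() == 1:
    emb_j = emb_j.unsqueeze(0)  # shape becomes (1, 128)

dist_mat = torch.cdist(emb_i, emb_j)
print(dist_mat.shape)  # torch.Size([5, 1])
```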