Open · DavidRecasens opened this issue 2 years ago
Hi @DavidRecasens, have you found some solutions?
Sadly, I haven't. The fact that I don't get any answer from the authors makes me suspect that this number is determined by simple trial and error.
I tried several offset values (0.5, 1.0, 1.5) on the demo sequence and found that the output meshes differed from each other, as in the following figure. The mesh quality depends on this offset, so I'm also hoping someone can share a solution or some suggestions for capturing custom data.
Exactly, the output quality depends heavily on that hyperparameter. Not knowing how to determine this offset is a problem if you want to try the method on any other dataset. You have to do trial and error, and it may happen that you cannot find the magic number (as in my case).
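For anyone repeating this sweep by hand, here is a minimal harness for it. This is only a sketch, under the assumption that the poses are stored as one 4x4 camera-to-world matrix per `.txt` file; adapt the loading to the ARKit demo's actual pose format.

```python
import numpy as np
from pathlib import Path

def apply_z_offset(src_dir, dst_dir, offset):
    """Copy every pose file, lifting the camera center by `offset` meters."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for f in sorted(Path(src_dir).glob("*.txt")):
        T = np.loadtxt(f).reshape(4, 4)  # camera-to-world pose
        T[2, 3] += offset                # translate the camera center along world z
        np.savetxt(dst / f.name, T)

# Hypothetical sweep: write one pose folder per candidate offset,
# then run the NeuralRecon demo on each variant and compare the meshes.
for offset in (0.5, 1.0, 1.5):
    apply_z_offset("poses", f"poses_z{offset}", offset)
```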
I think NeuralRecon cannot generate a mesh in the region z < 0. From the comment, I assume this is because ScanNet only has mesh data in the region z >= 0 and NeuralRecon was trained on that.
I resolved this problem in two steps; see the sketch below.
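A minimal sketch of one such two-step fix (my assumptions: poses are 4x4 camera-to-world numpy arrays, the output mesh loads with trimesh, and the 1.5 m margin is just a guess): lift the camera trajectory into z >= 0 before reconstruction, then translate the mesh back by the same offset.

```python
import numpy as np
import trimesh

def lift_poses(poses, margin=1.5):
    """Step 1: shift camera-to-world poses so every camera center sits at z >= margin."""
    poses = [np.asarray(p).copy() for p in poses]
    offset = margin - min(p[2, 3] for p in poses)
    for p in poses:
        p[2, 3] += offset
    return poses, offset

def undo_offset(mesh_path, offset, out_path):
    """Step 2: translate the reconstructed mesh back into the original world frame."""
    mesh = trimesh.load(mesh_path)
    mesh.apply_translation([0.0, 0.0, -offset])
    mesh.export(out_path)

# usage: lifted, off = lift_poses(poses); run NeuralRecon with `lifted`,
# then undo_offset("mesh.ply", off, "mesh_aligned.ply").
```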
Hope this helps!
I tried it on 7-Scenes, but the results were not very good. I processed the data like the ARKit demo, so I don't know where it goes wrong. By the way, on 7-Scenes, adding 1.5 to z gives much worse results. Here is the result for one scene in 7-Scenes, without adding 1.5 to z, and the GT for comparison. Maybe changing the camera pose format would make it work, but how should that be done, or where did I make a mistake?
Some other data results are here: https://github.com/zju3dv/NeuralRecon/issues/42#issuecomment-1059918298
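Regarding the pose-format guess above: 7-Scenes ships one `frame-XXXXXX.pose.txt` per frame holding a 4x4 camera-to-world matrix, but whether its world frame is gravity-aligned and which axis is up is exactly the open question in this thread. Here is a conversion sketch to experiment with; the y-up to z-up rotation and the 1.5 m lift are assumptions, not a confirmed recipe.

```python
import numpy as np
from pathlib import Path

# Rotate the whole world so a y-up frame becomes z-up: x' = x, y' = -z, z' = y.
Y_UP_TO_Z_UP = np.array([
    [1, 0,  0, 0],
    [0, 0, -1, 0],
    [0, 1,  0, 0],
    [0, 0,  0, 1],
], dtype=np.float64)

def convert_pose(path, z_offset=1.5):
    T = np.loadtxt(path).reshape(4, 4)  # 7-Scenes camera-to-world matrix
    T = Y_UP_TO_Z_UP @ T                # re-express the pose in a z-up world
    T[2, 3] += z_offset                 # optional lift, as discussed above
    return T

poses = [convert_pose(p) for p in sorted(Path("seq-01").glob("*.pose.txt"))]
```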
Hello, I'd like to know how you ended up converting the coordinates of the 7-Scenes dataset to get a good-looking result, or what different transformations you applied for each scene.
Hi, how did you get your 7-Scenes GT mesh? I only saw the TSDF volume on the official website.
It's been a while since then. I remember there were several sets of pictures in the GT, and this is one of them.
Thanks for the quick reply. We visualized the mesh extracted from the TSDF as well as the camera positions, normalized the extracted coordinates to [-1500, 1500], and tried +/- 1500 on the z-axis, but it still doesn't match the camera positions, as in the following picture. Any suggestions, please?
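For reference, a sketch of that check: run marching cubes on the TSDF volume and overlay the camera centers in the same frame. The file name, voxel size, origin, and millimeter scaling below are assumptions; adapt the loading to the dataset's actual raw format, and mind that 7-Scenes poses are in meters.

```python
import numpy as np
import trimesh
from pathlib import Path
from skimage import measure

tsdf = np.load("tsdf_volume.npy")        # assumed: TSDF as a 3D float array
voxel_size, origin = 10.0, np.zeros(3)   # assumed units: mm, matching the [-1500, 1500] range

verts, faces, _, _ = measure.marching_cubes(tsdf, level=0.0)
verts = verts * voxel_size + origin      # voxel indices -> world coordinates

# Camera centers are the translation column of the camera-to-world poses.
# Note: 7-Scenes poses are in meters; rescale if the TSDF is in mm.
cams = np.stack([np.loadtxt(p).reshape(4, 4)[:3, 3]
                 for p in sorted(Path("seq-01").glob("*.pose.txt"))])

scene = trimesh.Scene([trimesh.Trimesh(verts, faces),
                       trimesh.points.PointCloud(cams)])
scene.show()  # the cameras should hover just inside the reconstructed room
```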
Hi!
First, congrats on your amazing work! I'm struggling with the problem that occ has all negative values when I run it on my custom data.
I'm pretty sure the coordinate system and format of the camera poses are aligned with the ARKit demo. I think the problem is related to the offset you apply to the x-y plane of the camera poses. I tried using an offset that puts the x-y plane 1.5 meters above the ground, but it didn't work. With some specific numbers (which I discovered randomly), NeuralRecon is able to reconstruct a couple of frames, but then the same problem appears.
I've seen in other issues that we have to configure the offset to simulate the conditions of the ScanNet data used during training, but you never specify what that criterion is. So I wanted to ask: what criterion do you follow to determine that the +1.5 meter offset is needed?
Thanks! :)
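A quick diagnostic sketch for this situation (my addition, not from the authors): print the z-range of the camera centers. Since ScanNet scenes live in z >= 0 with cameras roughly at walking height, a trajectory near or below z = 0 is a likely cause of the all-negative occupancy. Pose files are assumed to be 4x4 camera-to-world matrices.

```python
import numpy as np
from pathlib import Path

centers = np.stack([np.loadtxt(p).reshape(4, 4)[:3, 3]
                    for p in sorted(Path("poses").glob("*.txt"))])
z = centers[:, 2]
print(f"camera z range: [{z.min():.2f}, {z.max():.2f}] m")
# If the whole range is near or below zero, lift the trajectory, e.g.:
print(f"offset to put the lowest camera at 1.5 m: {1.5 - z.min():.2f} m")
```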