Youngju-Na opened 1 year ago
@tau-yihouxiang
Hi, first of all, thanks for sharing this great work!
I have a question about Point-NeRF comparison results in the paper.
To my knowledge, Point-NeRF requires input images and builds its initial neural point cloud with MVSNet, so each point carries not only a 3D position but also an image-feature embedding of dimension F, giving a tensor of shape (N, 3+F).
However, your method takes point clouds of shape (N, 3). So my question is: how did you build the neural point clouds for Point-NeRF? Did you additionally use 2D images as input?
Thanks in advance.
Point-NeRF can be trained using only points (xyz), but both training and inference performance are much worse than with "xyz + F".
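To make the two input layouts being compared concrete, here is a minimal sketch of the "xyz only" versus "xyz + F" neural point cloud tensors. The values of `N` and `F` are arbitrary placeholders, and the random features stand in for the per-point image embeddings MVSNet would produce; this is an illustration of the shapes, not the actual Point-NeRF pipeline.

```python
import numpy as np

# Hypothetical sizes: N points, F-dimensional feature embedding (F=32 is just an example)
N, F = 1000, 32

xyz = np.random.rand(N, 3).astype(np.float32)    # 3D positions only
feats = np.random.rand(N, F).astype(np.float32)  # stand-in for per-point image features

# "xyz only" variant: positions alone, shape (N, 3)
points_xyz = xyz

# "xyz + F" variant: positions concatenated with feature embeddings, shape (N, 3 + F)
points_xyz_f = np.concatenate([xyz, feats], axis=1)

print(points_xyz.shape)    # (1000, 3)
print(points_xyz_f.shape)  # (1000, 35)
```

With only the (N, 3) variant, the network must infer all appearance information from positions alone, which is consistent with the weaker results noted above.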