dvlab-research / TriVol

The official code of TriVol (CVPR 2023)

Question about Point-NeRF comparisons in the paper #4

Open Youngju-Na opened 1 year ago

Youngju-Na commented 1 year ago

Hi, first of all, thanks for sharing this great work!

I have a question about the Point-NeRF comparison results in the paper.

To my knowledge, Point-NeRF requires input images: it builds an initial neural point cloud with MVSNet, where each point carries not only a 3D position but also an image-feature embedding of dimension F, i.e., a tensor of shape (N, 3+F).

However, your method takes only point clouds of shape (N, 3). So my question is: how did you build the neural point clouds for Point-NeRF? Did you additionally use 2D images as input?
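
For concreteness, here is a minimal sketch of the two input formats as I understand them (the shapes and variable names are illustrative, not taken from either codebase):

```python
import torch

N, F = 100_000, 32           # number of points; F is the feature dimension (model-specific)

xyz = torch.randn(N, 3)      # raw point positions: the (N, 3) input TriVol consumes
feat = torch.randn(N, F)     # per-point image features, e.g. extracted by MVSNet

# Point-NeRF-style neural points: positions concatenated with features
neural_points = torch.cat([xyz, feat], dim=1)
print(neural_points.shape)   # torch.Size([100000, 35])
```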

Thanks in advance.

Youngju-Na commented 1 year ago

@tau-yihouxiang

forestsen commented 1 year ago


Point-NeRF can be trained using only points (xyz), but both training and inference performance are much worse than with "xyz + F".
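
For anyone wanting to reproduce that xyz-only setting, one plausible setup is to keep the point positions fixed and let the per-point features be learned from scratch instead of coming from MVSNet. A minimal sketch under that assumption (the class and names are hypothetical, not the authors' code):

```python
import torch
import torch.nn as nn

class XYZOnlyNeuralPoints(nn.Module):
    """Neural point cloud without image features: positions are fixed,
    and per-point features are optimized from scratch during training."""

    def __init__(self, xyz: torch.Tensor, feat_dim: int = 32):
        super().__init__()
        self.register_buffer("xyz", xyz)  # (N, 3), from the raw scan; not optimized
        # Learned features with no multi-view (MVSNet) prior at initialization
        self.feat = nn.Parameter(torch.zeros(xyz.shape[0], feat_dim))

    def forward(self) -> torch.Tensor:
        # Same (N, 3 + F) layout as the image-feature variant, but the
        # features carry no image information until training fills them in.
        return torch.cat([self.xyz, self.feat], dim=1)

points = XYZOnlyNeuralPoints(torch.randn(1000, 3))
print(points().shape)  # torch.Size([1000, 35])
```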