JOP-Lee / READ

AAAI 2023; implementation of "READ: Large-Scale Neural Scene Rendering for Autonomous Driving". The experimental results are significantly better than NeRF-based methods.
https://github.com/JOP-Lee/READ-Large-Scale-Neural-Scene-Rendering-for-Autonomous-Driving
GNU General Public License v2.0

about pointcloud input #46

Closed: smithrowe10 closed this issue 1 year ago

smithrowe10 commented 1 year ago

I want to ask: is there no point cloud "xyz" information input to the network, only its size?

zzxxtt commented 1 year ago

I also want to know about this. After reading the code, I think the input to the network is just each point's index in the point cloud; the depth is used to screen out occluded points. The OpenGL version of the renderer is more complex, but if you read the headless version of MyRenderer, it only outputs index_buffer and depth_buffer and does not use the other per-point information (RGB, normals, and so on)?
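
To make that concrete, here is a minimal NumPy sketch of what a headless point rasterizer of this kind conceptually returns; the function name, arguments, and camera conventions are illustrative assumptions, not the repo's actual MyRenderer API:

```python
import numpy as np

def rasterize_points(xyz, view_matrix, proj_matrix, H, W):
    """Per pixel, keep the index and depth of the nearest projected point.
    Hypothetical sketch: names and conventions do not match the real code."""
    n = xyz.shape[0]
    homo = np.concatenate([xyz, np.ones((n, 1))], axis=1)   # (N, 4) homogeneous coords
    cam = homo @ view_matrix.T                              # world -> camera space
    clip = cam @ proj_matrix.T
    ndc = clip[:, :3] / clip[:, 3:4]                        # perspective divide
    px = ((ndc[:, 0] + 1.0) * 0.5 * W).astype(int)
    py = ((1.0 - ndc[:, 1]) * 0.5 * H).astype(int)
    depth = -cam[:, 2]                                      # OpenGL-style: camera looks down -z

    index_buffer = np.full((H, W), -1, dtype=np.int64)      # -1 marks "no point here"
    depth_buffer = np.full((H, W), np.inf, dtype=np.float64)
    inside = (px >= 0) & (px < W) & (py >= 0) & (py < H) & (depth > 0)
    for i in np.nonzero(inside)[0]:
        # Depth test: a point survives only if it is closer than what is
        # already stored, which is how occluded points get screened out.
        if depth[i] < depth_buffer[py[i], px[i]]:
            depth_buffer[py[i], px[i]] = depth[i]
            index_buffer[py[i], px[i]] = i
    return index_buffer, depth_buffer
```

Note that RGB and normals never enter this stage: only the winning point index and its depth survive per pixel.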

Sylvia6 commented 1 year ago

I agree with your viewpoint @zzxxtt: in our READ method, the point "xyz" coordinates are indeed used only for rasterization. The point index, by the way, is the key for looking up descriptor values, which yields the viewpoint-dependent descriptor maps that are the input to our neural rendering network.
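
As a minimal sketch of that lookup step, assuming the index_buffer from the rasterizer sketch above (tensor names and sizes here are illustrative assumptions, not the actual READ code):

```python
import torch

C, num_points = 8, 100_000                                  # illustrative sizes
# Learned per-point neural descriptors, optimized jointly with the network.
descriptors = torch.nn.Parameter(torch.randn(num_points, C))

def build_descriptor_map(index_buffer: torch.Tensor) -> torch.Tensor:
    """Gather each visible point's descriptor into a (C, H, W) map.
    xyz is not needed here: the per-pixel point index alone selects
    which descriptor lands at which pixel."""
    H, W = index_buffer.shape
    flat = index_buffer.reshape(-1)                          # (H*W,) point indices, -1 = empty
    valid = flat >= 0
    desc_map = torch.zeros(H * W, C)
    desc_map[valid] = descriptors[flat[valid]]               # index-based descriptor lookup
    return desc_map.reshape(H, W, C).permute(2, 0, 1)        # (C, H, W) for the rendering network
```

Because the rasterized index_buffer changes with the camera pose, the gathered maps are viewpoint-dependent even though the descriptors themselves are per-point.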