Closed: Stephenfang51 closed this issue 1 year ago.
Thanks for your interest in our work. The inference speed on GPU is given in our paper, but we cannot report the inference speed on CPU because we did not implement the multi-view rendering code for the CPU.
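(For reference, a minimal sketch of how GPU inference latency is usually averaged in PyTorch, with explicit synchronization around the timed loop; the model and input below are placeholders, not the paper's benchmarking code.)

```python
import time
import torch


def time_gpu_inference(model, x, warmup=10, iters=100):
    """Average forward-pass latency (seconds) on GPU; `model` and `x` are placeholders."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):           # warm-up passes exclude CUDA init / autotuning cost
            model(x)
        torch.cuda.synchronize()          # make sure all queued kernels finished before timing
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()          # wait for the last pass before stopping the clock
    return (time.perf_counter() - start) / iters
```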
@ME495 Thanks for replying. Just to check my understanding: if we train a student confidence network, can we easily move the whole pipeline to another platform (e.g. running on CPU without CUDA support), since we would no longer need to render the input depth image into a point cloud, right?
No. Using the student confidence network still requires rendering the point cloud to other views. The difference between the two is that the teacher confidence network first renders a large number of views and then selects among them, whereas the student confidence network first selects a small number of views and then renders only the selected ones.
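(A minimal sketch of the two orderings described above, assuming a hypothetical `render_view`, candidate view poses, and confidence callables; none of these names are the repository's actual API.)

```python
import torch


def render_view(points: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
    """Toy stand-in for the CUDA multi-view renderer: transform the cloud by a 4x4 pose."""
    homo = torch.cat([points, torch.ones(points.shape[0], 1)], dim=1)  # (N, 4) homogeneous
    return (homo @ pose.T)[:, :3]                                      # (N, 3) in the view frame


def teacher_select(points, candidate_poses, teacher_conf, k=4):
    """Teacher: render ALL candidate views first, then keep the top-k by confidence."""
    rendered = [render_view(points, p) for p in candidate_poses]       # many renders (expensive)
    scores = teacher_conf(torch.stack(rendered))                       # one score per rendered view
    keep = torch.topk(scores, k).indices.tolist()
    return [rendered[i] for i in keep]


def student_select(points, candidate_poses, student_conf, k=4):
    """Student: score candidate views from the raw input first, then render only the chosen few."""
    scores = student_conf(points)                                      # predicted without rendering
    keep = torch.topk(scores, k).indices.tolist()
    return [render_view(points, candidate_poses[i]) for i in keep]     # only k renders (cheap)
```

The point of the student ordering is that the expensive rendering step runs only k times instead of once per candidate view, but the renderer itself is still needed, which is why the pipeline still depends on the CUDA rendering code.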
Nice work! Have you made a speed comparison? How fast is the student network? Thanks!