Hi, I have some questions about the inference speed of TensoRF.
Is this implementation of TensoRF faster than the original repo? The original repo is implemented purely in PyTorch, so its inference speed is slow. Since this repo is implemented with CUDA, should it be much faster?
According to the performance reference, the speed still doesn't seem very fast. Is that expected?
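For context, here is a minimal sketch of how I would measure inference speed myself, assuming a hypothetical `render_fn` callable and a pre-built batch of rays (the actual rendering API in this repo may differ):

```python
import time
import torch

def time_render(render_fn, rays, n_warmup=3, n_iters=10):
    """Time a GPU rendering callable, synchronizing so the measurement
    reflects actual CUDA kernel execution rather than async launch time."""
    # Warm-up iterations to exclude one-time CUDA/compilation setup cost
    for _ in range(n_warmup):
        render_fn(rays)
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(n_iters):
        render_fn(rays)
    torch.cuda.synchronize()

    elapsed = (time.perf_counter() - start) / n_iters
    print(f"avg render time: {elapsed * 1000:.1f} ms/frame "
          f"({1.0 / elapsed:.2f} FPS)")
    return elapsed
```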
Many thanks!