creiser / kilonerf

Code for KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

KiloNeRF CUDA Extension Documentation or Usage Quickstart #2

Open cameronosmith opened 3 years ago

cameronosmith commented 3 years ago

Hi! You've obviously produced some awesome work. For my application I need to render many MLPs with variable-sized inputs, as you did here. Do you have a quickstart or any documentation on how to use the kilonerf_cuda extension?

creiser commented 3 years ago

Hi! We will work on extending the documentation and making the implementation less entangled with NeRF. In principle, only a few steps are required to transform your MLP into a MultiMLP:

1. Replace the Linear layers with MultiLinear layers. You still have a lot of design flexibility and can use the usual layers for activation functions, etc. (compare MultiNetwork).
2. Give the forward pass of your network the `batch_size_per_network` argument, which encodes how many points are processed by each individual network (compare MultiNetwork).
3. Query your network as in `query_multi_network`: the inputs are reordered so that points processed by the same network are contiguous, and the outputs are reordered back to the original point order afterwards (see the sketch below this list).
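For illustration only, here is a minimal pure-PyTorch sketch of these three steps. It does not use the actual kilonerf_cuda kernels (the real MultiLinear/MultiNetwork classes fuse the per-network work into single CUDA kernels); all names here (`tiny_mlps`, `multi_forward`, `point_to_network`, ...) are hypothetical stand-ins:

```python
# Pure-PyTorch sketch of the MultiMLP idea (illustration only, not the
# kilonerf_cuda API). Each step is written out explicitly; the CUDA
# extension fuses the per-network work into single kernels.
import torch
import torch.nn as nn

num_networks, in_dim, hidden_dim, out_dim = 4, 3, 32, 4

# One tiny MLP per network (stand-in for a single fused MultiNetwork).
tiny_mlps = nn.ModuleList([
    nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                  nn.Linear(hidden_dim, out_dim))
    for _ in range(num_networks)
])

def multi_forward(points_sorted, batch_size_per_network):
    """Step (2): process contiguous chunks of the already-reordered input,
    one chunk per network, as encoded by batch_size_per_network."""
    chunks = torch.split(points_sorted, batch_size_per_network.tolist())
    return torch.cat([tiny_mlps[i](chunk) for i, chunk in enumerate(chunks)])

def query_multi_network(points, point_to_network):
    """Step (3): reorder points so that points handled by the same network
    are contiguous, run the multi-network forward pass, then restore the
    original point order ("back-ordering")."""
    order = torch.argsort(point_to_network)
    batch_size_per_network = torch.bincount(point_to_network,
                                            minlength=num_networks)
    out_sorted = multi_forward(points[order], batch_size_per_network)
    out = torch.empty_like(out_sorted)
    out[order] = out_sorted  # invert the permutation
    return out

# Example: 10 random points, each assigned to a network (e.g. by grid cell).
points = torch.randn(10, in_dim)
point_to_network = torch.randint(0, num_networks, (10,))
outputs = query_multi_network(points, point_to_network)
print(outputs.shape)  # torch.Size([10, 4])
```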

The above steps are what you need for an implementation that supports efficient training. I recommend starting with efficient training and initially using that implementation for rendering as well, to check that everything works correctly. The implementation for efficient rendering is more complex and quite domain-specific.
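One hedged suggestion for the "check that everything works correctly" part: compare the grouped forward pass against querying each point's assigned tiny MLP one by one. This reuses the hypothetical helpers from the sketch above:

```python
# Sanity check (reuses the hypothetical query_multi_network, tiny_mlps,
# points and point_to_network from the sketch above): the grouped forward
# pass should match evaluating each point with its assigned network.
with torch.no_grad():
    grouped = query_multi_network(points, point_to_network)
    naive = torch.stack([tiny_mlps[int(n)](p)
                         for p, n in zip(points, point_to_network)])
    assert torch.allclose(grouped, naive, atol=1e-6)
```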

Can you provide some more details? Are you having trouble with the installation? Is your use case still NeRF, or an entirely different context? If it is still quite close to NeRF, I'd recommend modifying this code here rather than starting from scratch.