creiser / kilonerf

Code for KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs

The number of layers and hidden dim are fixed in the CUDA implementation #19

Closed: licj15 closed this issue 2 years ago

licj15 commented 2 years ago

Hi Christian @creiser!

Thank you so much for the great work and for open-sourcing the code! It is very helpful to the NeRF community!

I noticed that the number of layers and the hidden dim seem to be fixed in the CUDA implementation, regardless of the values we set in the .yaml file.

If I understand it correctly, the CUDA kernels hard-code the architecture, so changing these values in the config has no effect on the fast inference path?

Thank you!

creiser commented 2 years ago

We fixed these values here because the compiler can only keep arrays in registers when their size is known at compile time. You can add more layers by adding more matrix multiplications in network_eval.cu. A more general solution would be to use meta-programming to generate code for a variety of hyperparameters. Additional hidden dims are supported by simply adding more cases here: https://github.com/creiser/kilonerf/blob/master/cuda/network_eval.cu#L278-L291
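For intuition, here is a minimal sketch of that pattern. Everything below (kernel name, layer shapes, launch helper) is invented for illustration and is not the repo's actual code; the point is that templating on the hidden dim keeps all array sizes compile-time constants, so activations stay in registers, and each supported width is one `case` in a host-side switch, analogous to the cases linked above:

```cuda
// Hypothetical sketch, not the kilonerf kernel: a tiny 2-layer MLP evaluated
// per thread, with the hidden dim as a template (compile-time) parameter.
#include <cstdio>
#include <cuda_runtime.h>

template <int HIDDEN_DIM>
__global__ void eval_tiny_mlp(const float* __restrict__ w1,   // [HIDDEN_DIM x 3]
                              const float* __restrict__ w2,   // [1 x HIDDEN_DIM]
                              const float* __restrict__ xyz,  // [num_points x 3]
                              float* __restrict__ out,        // [num_points]
                              int num_points) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_points) return;

    // Fixed-size array: it can live in registers only because HIDDEN_DIM
    // is known at compile time.
    float h[HIDDEN_DIM];
    for (int j = 0; j < HIDDEN_DIM; ++j) {          // layer 1 + ReLU
        float acc = 0.0f;
        for (int k = 0; k < 3; ++k)
            acc += w1[j * 3 + k] * xyz[i * 3 + k];
        h[j] = fmaxf(acc, 0.0f);
    }
    float acc = 0.0f;                                // layer 2 (linear)
    for (int j = 0; j < HIDDEN_DIM; ++j)
        acc += w2[j] * h[j];
    out[i] = acc;
}

// Host-side dispatch: one template instantiation per supported hidden dim.
// Supporting e.g. a 48-wide network means adding "case 48: ..." and recompiling.
void launch_eval(int hidden_dim, const float* w1, const float* w2,
                 const float* xyz, float* out, int num_points) {
    int block = 256, grid = (num_points + block - 1) / block;
    switch (hidden_dim) {
        case 32: eval_tiny_mlp<32><<<grid, block>>>(w1, w2, xyz, out, num_points); break;
        case 64: eval_tiny_mlp<64><<<grid, block>>>(w1, w2, xyz, out, num_points); break;
        default: fprintf(stderr, "hidden dim %d not compiled in\n", hidden_dim);
    }
}
```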

You cannot increase the parameter count indefinitely, since the current implementation requires that a network fit entirely into shared memory, which is crucial for performance. For instance, a GTX 1080 Ti has 96 KB of shared memory, and since each float32 parameter occupies 4 bytes, you can fit at most about 24,000 parameters. The default architecture only needs 6212 parameters. Also, increasing the hidden dim too much would consume too many registers, which limits occupancy.
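To make the budget arithmetic explicit, here is a small hypothetical check (the constants come from this thread; the helper itself is invented for illustration):

```cuda
#include <cstdio>

int main() {
    // 96 KB of shared memory on a GTX 1080 Ti, 4 bytes per float32 parameter.
    const int budget_bytes   = 96 * 1024;
    const int max_params     = budget_bytes / sizeof(float);  // 24576, i.e. ~24,000
    const int default_params = 6212;  // parameter count of the default tiny MLP

    printf("max params per network: %d\n", max_params);
    printf("default network uses %d (%.1f%% of budget)\n",
           default_params, 100.0f * default_params / max_params);
    return 0;
}
```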