I tried running instant-ngp, which I compiled with nvcc for sm75. I had to set the architecture manually because, for some reason, it defaulted to sm88, which doesn't correspond to any existing NVIDIA GPU.
Sadly, there seem to be memory allocation issues: the program thinks it's running out of memory well before it has used all of the actual memory on my GPU. I recorded it peaking at 4 GB out of 16 GB utilization.
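In case the raw numbers help, this is a minimal device-query sketch I put together (plain CUDA runtime API, nothing instant-ngp specific) to double-check what the card reports for compute capability and the memory limits the MLP backends seem to care about:

// devquery.cu -- minimal device query, built with: nvcc -o devquery devquery.cu
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaDeviceProp props{};
    cudaError_t err = cudaGetDeviceProperties(&props, 0);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // Compute capability plus global and shared memory limits.
    std::printf("Device 0: %s (sm_%d%d)\n", props.name, props.major, props.minor);
    std::printf("Total global memory:     %zu MiB\n", props.totalGlobalMem >> 20);
    std::printf("Shared memory per block: %zu KiB\n", props.sharedMemPerBlock >> 10);
    std::printf("Shared memory per SM:    %zu KiB\n", props.sharedMemPerMultiprocessor >> 10);
    return 0;
}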
Using tiny-cuda-nn with the Fully Fused MLP, it says:
WARNING GPUMemoryArena: GPU 0 does not support virtual memory. Falling back to regular allocations, which will be larger and can cause occasional stutter.
Uncaught exception: FullyFusedMLP: insufficient shared memory available on the GPU. Reduce "n_neurons" or use "CutlassMLP" (better compatibility but slower) instead.
With the Cutlass MLP, the error changes to:
Uncaught exception: /run/media/david/sda1/instant-ngp/dependencies/tiny-cuda-nn/include/tiny-cuda-nn/cutlass_matmul.h:330 status failed with error Error Internal
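Since GPUMemoryArena complains about missing virtual memory support, I also put together a quick driver-API check. I'm only guessing that the attribute below is the one tiny-cuda-nn actually queries, but it at least shows what my driver reports:

// vmmcheck.cu -- built with: nvcc -o vmmcheck vmmcheck.cu -lcuda
#include <cuda.h>
#include <cstdio>

int main() {
    // The driver API needs explicit initialisation.
    if (cuInit(0) != CUDA_SUCCESS) {
        std::printf("cuInit failed\n");
        return 1;
    }
    CUdevice dev;
    if (cuDeviceGet(&dev, 0) != CUDA_SUCCESS) {
        std::printf("cuDeviceGet failed\n");
        return 1;
    }
    // Assumption on my part: this is the capability GPUMemoryArena tests
    // before falling back to regular allocations.
    int vmm_supported = 0;
    cuDeviceGetAttribute(&vmm_supported,
                         CU_DEVICE_ATTRIBUTE_VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED,
                         dev);
    std::printf("Virtual memory management supported: %d\n", vmm_supported);
    return 0;
}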
Any input would be much appreciated.
Cheers, David