NVlabs / instant-ngp

Instant neural graphics primitives: lightning fast NeRF and more
https://nvlabs.github.io/instant-ngp

Error "cutlass_matmul.h:332 status failed with error Error Internal" on freshly compiled Instant-ngp #1366

Open MattRM2 opened 1 year ago

MattRM2 commented 1 year ago

Hi,

I just compiled Instant-ngp and I get this error when I try to load the fox example (see screenshot). I had the same error with the build from NVlabs, but now it's compiled for my own system.

[Screenshot: Instant-ngp error]

Do you have a solution? Please, a real solution, not just deactivating every card in my system except the GTX 970. All my hardware is detected correctly, so please don't suggest a fix like "turn off all the graphics cards and keep only one". That is not a solution, it's a bad workaround.

Thank you,

Matt
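
For context, a rough sketch of the standard README steps that lead to this error (commands assume a Linux shell and the repo root; adjust the CMake flags for your platform):

```sh
# Build instant-ngp from source, following the README
git clone --recursive https://github.com/NVlabs/instant-ngp
cd instant-ngp
cmake . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build build --config RelWithDebInfo -j

# Load the fox example; this is where the "cutlass_matmul.h:332" error appears
./instant-ngp --scene data/nerf/fox
```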

david9ml commented 1 year ago

same issue here

weizizhuang commented 1 year ago

same issue!

LudovicoYIN commented 1 year ago

Did anyone figure it out?

MattRM2 commented 1 year ago

Right now, no.

blurgyy commented 1 year ago

Fwiw, I got the same error ("cutlass_matmul.h:332 ...") with the binary built at commit 090aed613499ac2dbba3c2cede24befa248ece8a, but building at commit 6ee7724370b5e7aa9d514a8c36fae78c112706d9 and dragging a source (a directory or a transforms.json) into the GUI window works properly. The weird thing is that specifying the scene on the command line, e.g. ./instant-ngp --scene data/nerf/fox, still gives the above error; it can only be avoided by dragging the source into the GUI window.
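
Concretely, that workaround looks roughly like this (a sketch; the build commands follow the README and may need adjusting for your setup, and the final drag-and-drop happens in the GUI, not the shell):

```sh
# Build at the commit that worked here
git checkout 6ee7724370b5e7aa9d514a8c36fae78c112706d9
cmake . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build build --config RelWithDebInfo -j

# Launch WITHOUT --scene, then drag data/nerf/fox (or a transforms.json)
# into the GUI window; passing --scene on the command line still errors out.
./instant-ngp
```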

jx-99321 commented 7 months ago

This is related to your GPU and the training batch size. Architectures of 70 (maybe) and below prevent the use of the FullyFusedMLP, so only the CutlassMLP can be used, and when the training batch size is 18+ the CutlassMLP reports this error! @david9ml @MattRM2 @weizizhuang @blurgyy @LudovicoYIN

jx-99321 commented 7 months ago

My settings are a GPU 4060 Ti (architecture 89) and training_batch_size=21.
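
To check which case applies to your machine, you can query each GPU's compute capability (a sketch; the compute_cap query field needs a reasonably recent nvidia-smi and is not available on older drivers):

```sh
# Print the compute capability of each GPU, e.g. 5.2 for a GTX 970,
# 8.9 for a 4060 Ti. GPUs that cannot run the FullyFusedMLP fall back
# to the CutlassMLP, which is the path that hits this error.
nvidia-smi --query-gpu=name,compute_cap --format=csv
```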