Totoro97 / NeuS

Code release for NeuS

Experimenting with multires hash encoding (inspired by NVIDIA Instant-NGP) #65

Open · jenkspt opened this issue 2 years ago

jenkspt commented 2 years ago

Thanks for the great research + code! I wanted to share some experiments I did replacing the NeRF models in NeuS with a PyTorch re-implementation of the multires hash encoding from NVIDIA's Instant Neural Graphics Primitives paper: https://github.com/jenkspt/NeuS. In some initial experiments the new model trains about 3.5x faster.
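
For anyone who wants the gist without reading the repo, here is a minimal PyTorch sketch of the multires hash encoding idea. This is a simplified illustration, not code from either repo: it does nearest-neighbor table lookups where the paper trilinearly interpolates the eight surrounding corner entries, and the hyperparameter defaults just mirror the Instant-NGP paper.

```python
import torch
import torch.nn as nn

class HashEncoding(nn.Module):
    """Simplified multiresolution hash encoding (Instant-NGP, Sec. 3).
    Nearest-neighbor lookup for brevity; real implementations trilinearly
    interpolate the 8 surrounding grid corners at each level."""

    PRIMES = (1, 2654435761, 805459861)  # spatial-hash primes from the paper

    def __init__(self, n_levels=16, n_features=2, log2_table_size=19,
                 base_res=16, max_res=2048):
        super().__init__()
        self.table_size = 2 ** log2_table_size
        # Geometric progression of grid resolutions: N_l = N_min * b^l
        b = (max_res / base_res) ** (1.0 / max(n_levels - 1, 1))
        self.resolutions = [int(base_res * b ** l) for l in range(n_levels)]
        # One learnable table per level, tiny uniform init as in the paper
        self.tables = nn.Parameter(
            torch.empty(n_levels, self.table_size, n_features).uniform_(-1e-4, 1e-4))

    def forward(self, x):  # x: (B, 3), coordinates normalized to [0, 1]
        feats = []
        for level, res in enumerate(self.resolutions):
            coords = (x * res).long()         # integer grid cell per point
            h = torch.zeros_like(coords[:, 0])
            for dim, prime in enumerate(self.PRIMES):
                h ^= coords[:, dim] * prime   # XOR spatial hash over dimensions
            feats.append(self.tables[level, h % self.table_size])
        return torch.cat(feats, dim=-1)       # (B, n_levels * n_features)
```

With the defaults, `HashEncoding()(torch.rand(1024, 3))` returns a (1024, 32) feature tensor that takes the place of NeRF's frequency encoding as the MLP input.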

Totoro97 commented 2 years ago

Thank you for sharing! Looks great!

silence401 commented 2 years ago

I also replaced the NeuS encoding with the NGP hash-grid encoding; the color quickly converges to a good result, but the mesh is very bad. Is your mesh as good as NeuS's?

jenkspt commented 2 years ago

I did notice that the mesh didn't look great, but the mesh from the original NeuS didn't look great in my training runs either. I just used the default mesh settings, so I'm sure those could be tuned for the scene (it looked like the grid resolution was way too low). Instant-NGP does fit high-quality SDFs (though they also observe a small amount of noise on the surface), so I don't think this representation is inherently limited for this type of problem. I'm guessing a larger table size and/or longer training will improve the mesh.

Haonan-DONG commented 2 years ago

I did the same thing, using the hash grid from tiny-cuda-nn instead of the frequency encoder, and the mesh is bad as well.

For this problem, Go-Surf proposes adding an explicit smoothness term (https://jingwenwang95.github.io/go_surf/). However, even after I added this term, the mesh is not as smooth as what NeuS outputs. Moreover, I noticed that the mesh is worse without the mask loss.
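
For reference, here is a minimal sketch of such a smoothness term, assuming a differentiable `sdf_fn` like the network NeuS already differentiates for its Eikonal loss. This is a paraphrase of the idea rather than Go-Surf's exact formulation, and the perturbation radius `eps` is a made-up starting value that should be tuned to the scene scale.

```python
import torch

def sdf_normal(sdf_fn, x):
    """SDF gradient via autograd (NeuS computes the same quantity
    for its Eikonal term)."""
    x = x.detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(sdf_fn(x).sum(), x, create_graph=True)
    return grad

def smoothness_loss(sdf_fn, x, eps=1e-2):
    """Penalize how much the SDF normal changes between each sample point
    and a nearby perturbed point, in the spirit of Go-Surf's smoothness prior."""
    n0 = sdf_normal(sdf_fn, x)
    n1 = sdf_normal(sdf_fn, x + torch.randn_like(x) * eps)
    return ((n0 - n1) ** 2).sum(dim=-1).mean()
```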

Currently I believe the sampling strategy needs to be changed when using Instant-NGP.

ZirongChan commented 2 years ago

@jenkspt thanks for sharing the NGP implementation. I tried your code (with exactly the same conf on the NeuS thin_cube data); the color converged faster than NeuS, as expected, but the rendered image is noisier (see the attached render 00005000_0_1), which leads to a very lousy mesh. Do you have any suggestions for tuning the conf parameters, such as the network depth or something else?

@Haonan-DONG can you go into more detail about the sampling strategy? Have you tried any alternatives? How did they work?

jenkspt commented 2 years ago

I added a hash config file: confs/womask_hash.conf, and trained the model with:

python exp_runner.py --mode train --conf ./confs/womask_hash.conf --case thin_cube

ZirongChan commented 2 years ago

> I added a hash config file: confs/womask_hash.conf, and trained the model with:
>
> python exp_runner.py --mode train --conf ./confs/womask_hash.conf --case thin_cube

@jenkspt thanks for your reply. By "exactly the same conf for the neus thin_cube data", I meant I used your womask_hash.conf file. Latest update: as training progressed, I got an almost noise-free rendering at around 50k iterations, but the mesh still looks very bad. Three questions bother me:

  1. Why would I get a different result at 5k iterations from yours, given that we use the same config file? As far as I know the only difference is the machine, and aside from processing speed the output quality should be the same.
  2. The meshes are still very bad, with tons of noise.
  3. I get approximately 4.3 iters/s on the original NeuS with batch size 512, but 1.12 s/iter (about 0.9 iters/s) with batch size 1024, which works out to roughly 2.5x lower throughput in rays per second. Although the color did converge faster, the training took far too long. Is there anything I can check?

Any suggestions would be very much appreciated.

jenkspt commented 2 years ago

As I mentioned earlier, I didn't get a good mesh from the hash model (or from the original model). The normal images do look good, however. I'm not sure why it's training so slowly for you, but I put the logs for my run on Google Drive: https://drive.google.com/file/d/1pU7syj-GlVZPSHQmte0W94aNYSzDBTxS/view?usp=sharing Hope that helps!

jaymefosa commented 2 years ago

You can get a cleaner mesh if you call the validate function explicitly. The checkpoint meshes generated during training might be using different parameters.
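
For anyone following along: as far as I can tell, the upstream NeuS code extracts the meshes saved during training at a coarse marching-cubes resolution, while the explicit validate_mesh mode uses a much finer grid, which would explain the difference. Assuming the fork keeps the upstream CLI, something like this re-extracts the mesh from the latest checkpoint:

```
python exp_runner.py --mode validate_mesh --conf ./confs/womask_hash.conf --case thin_cube --is_continue
```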

Even though the core mesh looks fine, the surrounding area has a lot of noisy clouds.


jenkspt commented 2 years ago

Looks good!! Adding some type of sparsity regularization might help with the "ghost blobs"; this is a problem with the NVIDIA NeRF version as well. It looks like most of the blobs aren't even visible from any of the cameras.
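
One simple option is a generic SDF sparsity penalty: push the SDF away from zero at points sampled uniformly in the volume, so spurious zero crossings in unobserved space get suppressed while the photometric loss keeps the real surface in place. This is a sketch of that idea, not code from NeuS or Instant-NGP, and `scale` and the loss weight are guesses to tune.

```python
import torch

def sdf_sparsity_loss(sdf_fn, n_pts=1024, bound=1.0, scale=10.0, device='cuda'):
    """exp(-scale * |s|) is ~1 where the SDF is near zero and ~0 far from
    the surface, so minimizing it at random points discourages spurious
    surfaces ("ghost blobs") in empty space."""
    x = (torch.rand(n_pts, 3, device=device) * 2.0 - 1.0) * bound
    return torch.exp(-scale * sdf_fn(x).abs()).mean()
```

Added to the total loss with a small weight (say 0.01) so it doesn't eat into the true surface.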

jenkspt commented 2 years ago

@ZirongChan by the way, I trained this on my GTX 1080 Ti with 11 GB of memory. The NVIDIA paper mentions that performance degrades if the table size is too large (I assume this is GPU-dependent). You could try dropping the table size, but that requires hard-coding it, since I didn't add any options for it to the CLI. The MultiresEncodingConfig is here: https://github.com/jenkspt/multires-hash-encoding-pytorch/blob/20fb5d2f7b33572ea2b924bb048073dd160be547/src/multires_hash_encoding/modules.py#L85-L91 and you can modify the default parameters here: https://github.com/jenkspt/NeuS/blob/485f7ab58218af1a999d7a688cdf9fb1ddcaa55e/models/hash_fields.py#L23 and https://github.com/jenkspt/NeuS/blob/485f7ab58218af1a999d7a688cdf9fb1ddcaa55e/models/hash_fields.py#L81
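
Roughly, the knobs map to the Instant-NGP paper's notation like this (a sketch in the paper's notation; the actual dataclass fields at the links above are authoritative):

```python
from dataclasses import dataclass

@dataclass
class MultiresEncodingConfig:
    n_levels: int = 16             # L: number of resolution levels
    n_features_per_level: int = 2  # F: feature dims per table entry
    log2_table_size: int = 19      # log2(T); 16 * 2^19 * 2 floats is ~64 MB
                                   # at fp32, so try 17-18 on smaller GPUs
    base_resolution: int = 16      # N_min: coarsest grid
    max_resolution: int = 2048     # N_max: finest grid
```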

catapulta commented 2 years ago

@jenkspt thank you for sharing the NGP implementation. I wanted to report that I do get high-quality mesh results with the original implementation, but I found the same issues as @ZirongChan with the NGP mesh.

ZirongChan commented 2 years ago

@jenkspt thanks for your advice, I will give it a try ASAP. Actually, I can now get a result similar to @jaymefosa's. By sparsity regularization, do you mean something similar to the Eikonal loss?

jaymefosa commented 2 years ago

@ZirongChan or maybe a sparsity regularizer similar to what they do in Mip-NeRF 360?
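
For concreteness, here is a small PyTorch transcription of the Mip-NeRF 360 distortion loss (Eq. 15 in that paper), written O(N^2) in the samples per ray for clarity; `t` are the sample interval boundaries and `w` the per-interval rendering weights that NeuS already computes. Whether it interacts well with NeuS's SDF-derived weights is something to verify experimentally.

```python
import torch

def distortion_loss(t, w):
    """Mip-NeRF 360 distortion loss: pulls the rendering weight along each
    ray into compact clusters, suppressing floaters and background fog.
    t: (B, N+1) interval boundaries, w: (B, N) per-interval weights."""
    m = 0.5 * (t[..., 1:] + t[..., :-1])  # interval midpoints
    inter = (w[..., :, None] * w[..., None, :]
             * (m[..., :, None] - m[..., None, :]).abs()).sum((-1, -2))
    intra = (w ** 2 * (t[..., 1:] - t[..., :-1])).sum(-1) / 3.0
    return (inter + intra).mean()
```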

fireholder commented 2 years ago

> > I added a hash config file: confs/womask_hash.conf, and trained the model with:
> >
> > python exp_runner.py --mode train --conf ./confs/womask_hash.conf --case thin_cube
>
> @jenkspt thanks for your reply. By "exactly the same conf for the neus thin_cube data", I meant I used your womask_hash.conf file. Latest update: as training progressed, I got an almost noise-free rendering at around 50k iterations, but the mesh still looks very bad. Three questions bother me:
>
>   1. Why would I get a different result at 5k iterations from yours, given that we use the same config file? As far as I know the only difference is the machine, and aside from processing speed the output quality should be the same.
>   2. The meshes are still very bad, with tons of noise.
>   3. I get approximately 4.3 iters/s on the original NeuS with batch size 512, but 1.12 s/iter (about 0.9 iters/s) with batch size 1024, which works out to roughly 2.5x lower throughput in rays per second. Although the color did converge faster, the training took far too long. Is there anything I can check?
>
> Any suggestions would be very much appreciated.

I ran into the same issues. Could you share how you solved them?

ZirongChan commented 2 years ago

@fireholder sorry, I haven't spent much time on this issue yet. Maybe you can try what @jaymefosa suggested.

Holmes-Alan commented 1 year ago

@jenkspt Hello, I used your code to train on my own dataset. I can see from the validation that the model is well trained for novel view rendering. However, the obtained mesh is a mess. What do you think the problem is?