ghy0324 opened this issue 2 years ago
I have the same question. It seems visualize_nerf_atlas_radiance.py cannot reproduce the original appearance of scan 114. Would you mind providing more details about how to run visualize_nerf_atlas_radiance.py?
visualize_nerf_atlas_radiance.py will not reproduce the original appearance, since volumetric rendering is still needed. However, the texture maps do not look right (they should be similar to the ones shown in the paper). Could you send me the trained model that produces accurate renderings but incorrect textures?
Hi, @fbxiang, thanks for your kind reply.
Following the instructions in the README, I trained NeuTex from scratch on the DTU scan114 scene, but the texture map and mesh from visualize_nerf_atlas_radiance.py look very strange. You can download the log files from the following link:
https://drive.google.com/drive/folders/16JgNqxIrb1z7HNN8s0ijC3JC18C6xWzh?usp=sharing
Was this issue resolved?
After some investigation, I cannot reproduce @ghy0324's issue, so I suspect the visualization script was not invoked correctly. It can be run by simply replacing the Python filename in the training shell script; I have now pushed a visualization script for this (see the sketch below). After 80,000 steps (which is not fully trained), I got the following mesh and texture, which look reasonable.
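For anyone landing here, a minimal sketch of the filename swap described above. The paths `run/dtu.sh` and `train.py` are assumptions; adjust them to match the run scripts in your checkout:

```bash
# Hypothetical sketch: run/dtu.sh and train.py are assumed names; use the
# actual training shell script from the repo. The only change needed is
# swapping the Python entry point for the visualization script.
cp run/dtu.sh run/dtu_vis.sh
sed -i 's/train\.py/visualize_nerf_atlas_radiance.py/' run/dtu_vis.sh
bash run/dtu_vis.sh
```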
@huizhang0110's issue looks different. Judging from the visualized mesh, I believe the geometry training got stuck. I was able to reproduce this by re-running the training script many times. The main reason is that this repository does not contain the code for the pretraining stage that fits the point cloud, as described in the paper (that part of the code contains custom kernels and is hard to run out of the box). I may not be able to integrate it soon, since I am no longer working on this topic. For now, simply retraining may resolve the issue, as sketched below.
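A minimal sketch of retraining from scratch; the checkpoint directory name is an assumption, so point it at wherever your run actually writes its logs:

```bash
# Hypothetical sketch: checkpoints/dtu_114 is an assumed output directory.
# Removing it (or using a fresh experiment name) restarts training from
# scratch with a new random initialization, which can unstick the geometry.
rm -rf checkpoints/dtu_114
bash run/dtu.sh
```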
Hi, @fbxiang, thank you for sharing the code. We ran DTU scan_114 with the default settings and the dataset in this repo, but the rendered image does not seem to converge well, even after training for 500,000 iterations. Could you give us some advice?
The most important parameter to tune is sample_num: change it from 64 to 256 (the setting used in the paper, though it requires large GPU memory). I also recommend increasing or decreasing random_sample_size to fill your GPU memory for faster training; see the sketch below.
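A sketch of how those options might be passed, assuming they are exposed as command-line flags of train.py; the flag spellings and the random_sample_size value are assumptions, so verify them against the repo's option parser and run scripts:

```bash
# Hypothetical sketch: flag names mirror the option names mentioned in the
# comment above; verify the exact spellings against the repo's option parser.
# --sample_num 256 is the paper setting (default here: 64) and needs a large GPU;
# raise or lower --random_sample_size until GPU memory is full.
python train.py --sample_num 256 --random_sample_size 48
```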
Hi, thanks for the amazing work! I trained the network on the scan 114 data and got accurate rendering results like this: [rendering image]. However, when I run visualize_nerf_atlas_radiance.py to visualize the geometry and texture map, I find they look strange. [images: point cloud, mesh, texture map]