city-super / BungeeNeRF

[ECCV22] BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering
https://city-super.github.io/citynerf
MIT License

Pretrained model available? #3

Closed · evelinehong closed this 2 years ago

evelinehong commented 2 years ago

Hi, do you have a pretrained model / checkpoint? Also, what's the training time for a scene?

Thanks

kam1107 commented 2 years ago

Hi, the training time depends on how you partition the scales. For example, on the 56Leonard scene we use a 4-stage training schedule. On a single A100 card, stage 0 and stage 1 each take approximately 2.5-3 hours (until convergence). Stages 2 and 3 can each take 12-15 hours to train. Checkpoints will be released soon.

bityangke commented 2 years ago

I trained on a single A100 card with the default settings; it took 11.5 hours to train stage 0 on 56Leonard (300,000 iterations).

kam1107 commented 2 years ago

300k iterations is more than enough for stage 0 to converge. You can stop a stage's training early once the loss curve starts to level off (or has already converged).

Wei-Baldwin-Zeng commented 2 years ago

Hi, great work!

Are there any tricks I should pay attention to when training the model? It took me almost 4 days to train, but the results didn't reach what is shown in the paper; they look more like the original NeRF results shown in your paper (blur around the boundaries and lost details).

For example, should the configs be kept the same for the 4 stages? Do any other parameters need to change between stages?

Really appreciate your reply.

kam1107 commented 2 years ago

What's the PSNR you got for each stage? You may try increasing the learning rate to see if it helps. Also remember to load the checkpoint from the previous training stage when switching to a new scale.
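The stage-to-stage handoff described above can be sketched as follows. This is a hypothetical toy, not the repo's actual code: `train_stage` is an invented stand-in for one training stage, and the dict keys stand in for the residual block that BungeeNeRF grows at each finer scale.

```python
def train_stage(stage, init_weights=None):
    """Toy stand-in for one training stage; returns its "trained" weights."""
    weights = dict(init_weights) if init_weights else {"base": "init"}
    # A new key per stage stands in for the block added at each finer scale.
    weights[f"block{stage}"] = f"trained@stage{stage}"
    return weights

ckpt = None
for stage in range(4):  # the 4-stage schedule mentioned for 56Leonard
    # Each stage resumes from the previous stage's checkpoint
    # before training on the next (finer) scale.
    ckpt = train_stage(stage, init_weights=ckpt)

print(sorted(ckpt))  # ['base', 'block0', 'block1', 'block2', 'block3']
```

If a stage is instead trained from scratch (i.e. `init_weights=None` each time), the coarser-scale weights are lost, which would show up as exactly the kind of blur described in this thread.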

Wei-Baldwin-Zeng commented 2 years ago

Thank you for your quick reply. Yes, I've made sure to load the checkpoint from the previous training stage.

For each stage, the PSNR was around 22.5-22.7 (56Leonard), and the render_test results visually look somewhat like the original NeRF in Figure 5 of your paper, with blurry artifacts at the edges.

I haven't changed anything in the config for each stage; I just copied it into 4 configs for the 4 training stages. So I wonder if there is anything I actually need to change when switching to a new scale.

Now I'm trying to train each stage longer to see if it makes any difference. I'm also looking forward to your pretrained model so I can see the results directly.

Best regards,

kam1107 commented 2 years ago

I think there's a mistake in the render_path function. I manually assigned the stage variable to 0 in the render(...) call. Can you try replacing it with e.g. stage=3 to see if you get better results? I will correct this later.
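A hypothetical illustration of why a hard-coded `stage=0` at render time causes the blur (this toy `render` is invented for the sketch and is not the repo's actual function): in a progressive model, the stage index controls how many output heads contribute, so rendering at stage 0 evaluates only the coarsest branch even after all 4 stages were trained.

```python
FINAL_STAGE = 3  # assumed: 4 training stages, indexed 0-3

def render(stage):
    """Toy render: list which of the model's output heads are active."""
    return [f"head{s}" for s in range(stage + 1)]

print(render(0))            # buggy hard-coded call: coarse head only
print(render(FINAL_STAGE))  # all trained heads contribute at full detail
```

Under this assumption, passing the final stage index (or the stage actually trained) at test time is what recovers the fine detail.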

Wei-Baldwin-Zeng commented 2 years ago

Hi,

I have corrected the mistake you mentioned in render_path, but the rendering results still don't look as good as in the paper.

In total, on a V100, I trained for about a week across the 4 stages; for each stage the PSNR is around 23.2. But the renderings still have the same blurry artifacts at the edges.

kam1107 commented 2 years ago

Can you show some examples? My retrain results look fine 🤔

kamen007 commented 2 years ago

> Can you show some examples? My retrain results look fine 🤔

Were the results in your paper trained on the original images or on downsampled ones? (Is factor 3 used, or another value?) Thanks!

kam1107 commented 2 years ago

Those were trained on downsampled images, as specified in the config files.

kam1107 commented 2 years ago

Pretrained models of 56leonard and Transamerica have been uploaded to the data drive: https://drive.google.com/drive/folders/1W4iEUyTe6YaNVS0ou9DRgmYV5zSAXDIn?usp=sharing. Take a look if you are interested :)