kwea123 / ngp_pl

Instant-ngp in pytorch+cuda trained with pytorch-lightning (high quality with high speed, with only a few lines of legible code)
MIT License

Have a bad result when training with lower resolution #94

Open william2ai opened 1 year ago

william2ai commented 1 year ago

Hi, your code achieves a great result when using the default resolution or a resolution near the default value. However, using the same parameters (learning rate, batch_size) to train at a lower resolution (e.g. 200×200) gives a bad result. It seems that a lower learning rate and batch_size work better (I'm not quite sure). Do you have any suggestions on how to adjust those parameters? Or have you achieved a good result at any resolution? Thanks.

(attached image: 000_sr)

kwea123 commented 1 year ago

No, I always trained at 1/4 resolution for llff and full resolution for blender. For lower resolutions I think parameter tuning is required, as you said.

william2ai commented 1 year ago

It seems that I need to reduce batch_size and the learning rate, is that correct? BTW, may I ask why the train PSNR is a lot higher than the test PSNR?

william2ai commented 1 year ago

@kwea123 Do you think the learning rate strategy might matter? In jittor-nerf I saw them use Adam for the learning rate strategy, whereas in your code I saw CosineAnnealingLR.
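
For context, here is roughly what I mean, as a minimal PyTorch sketch (the model and the values are just illustrative, not this repo's exact training setup):

```python
import torch

model = torch.nn.Linear(3, 3)  # placeholder; in ngp_pl this would be the NGP network

# Adam is the optimizer; the cosine-annealing schedule is layered on top of it
# (illustrative values, not the repo's defaults).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
num_epochs = 30
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=num_epochs, eta_min=1e-4
)

for epoch in range(num_epochs):
    # ... train for one epoch: forward, loss.backward(), optimizer.step() ...
    scheduler.step()  # anneal the learning rate once per epoch
```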

kwea123 commented 1 year ago

> may I ask why the train PSNR is a lot higher than the test PSNR?

I think for llff it's because there are too few images (only 17 for fern), so it's prone to overfitting.

I think for lower resolutions you would need some technique from mipnerf360, since a pixel covers a lot of space but is represented by only one color, which leads to overfitting as well. One easy strategy I can think of is to randomize the ray inside that pixel, but I don't think it improves the score a lot either.
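
Roughly something like this, just a sketch (the function name and camera convention here are illustrative, not the actual ray generation code in this repo):

```python
import torch

def get_jittered_ray_directions(H, W, K, jitter=True):
    """Camera-space ray directions with a random sub-pixel offset.

    H, W: image size; K: 3x3 pinhole intrinsics tensor. Names are illustrative.
    """
    j, i = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    if jitter:
        # Randomize the sample point inside each pixel's footprint
        # instead of always shooting the ray through the pixel center.
        i = i + torch.rand_like(i)
        j = j + torch.rand_like(j)
    else:
        i = i + 0.5  # pixel center
        j = j + 0.5

    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # OpenGL-style camera: x right, y up, looking along -z.
    dirs = torch.stack([(i - cx) / fx, -(j - cy) / fy, -torch.ones_like(i)], dim=-1)
    return dirs  # (H, W, 3); rotate by the camera pose to get world-space rays
```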

Why would you want to train at a lower resolution (200x200) in the first place though? In NGP everything runs pretty fast and I don't see a reason to train at lower resolution.

freemercury commented 1 year ago

May I ask how to "import vern" in networks.py? I can't find this module anywhere.

Parousiacy commented 1 year ago

Hi, I have a similar question about training at different resolutions. For the classic Lego scene, I find that 800 is the best training resolution no matter what the test image resolution is, whether more than 800 or less.

william2ai commented 1 year ago

> Hi, I have a similar question about training at different resolutions. For the classic Lego scene, I find that 800 is the best training resolution no matter what the test image resolution is, whether more than 800 or less.

@Parousiacy I tried adjusting the learning rate, and it had an effect, but the results still seem a bit unsatisfactory. Maybe we can share opinions by email or WeChat.

william2ai commented 1 year ago

Thanks a lot for your kind advice on mipnerf360! I will try it.

> Why would you want to train at a lower resolution (200x200) in the first place though?

I'm trying to do some tasks on low-resolution images, something like image restoration. But I still haven't figured out whether the learning rate matters. In other versions of NeRF code, the learning rate tends to be decayed from 10e-4 to 10e-6. But for sure, a lot of space with little information is one of the main problems! I'll conduct some experiments.
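
For reference, the kind of per-step decay I mean looks roughly like this (the values and the use of ExponentialLR are illustrative, not taken from any specific repo):

```python
import torch

model = torch.nn.Linear(3, 3)  # placeholder for a NeRF MLP

# Decay the learning rate by two orders of magnitude over training
# (illustrative start/end values and step count).
lr_init, lr_final, num_steps = 1e-4, 1e-6, 200_000

optimizer = torch.optim.Adam(model.parameters(), lr=lr_init)
gamma = (lr_final / lr_init) ** (1.0 / num_steps)  # per-step multiplicative factor
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for step in range(num_steps):
    # ... forward pass, loss.backward(), optimizer.step() ...
    scheduler.step()  # multiply the learning rate by `gamma` every step
```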

Parousiacy commented 1 year ago

> > Hi, I have a similar question about training at different resolutions. For the classic Lego scene, I find that 800 is the best training resolution no matter what the test image resolution is, whether more than 800 or less.
>
> @Parousiacy I tried adjusting the learning rate, and it had an effect, but the results still seem a bit unsatisfactory. Maybe we can share opinions by email or WeChat.

That's great. My email address is parousiacy@outlook.com.