iraj465 opened this issue 1 year ago
I found that resizing images before training makes convergence much slower compared to keeping the original resolutions.
This is not something we have observed. To check, try changing the training resolution in the bob config from 512x512 to 1024x1024.
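For reference, a minimal sketch of that change, assuming the resolution is exposed under a `train_res` key (other keys omitted; check `configs/bob.json` for the exact name):

```json
{
    "train_res": [1024, 1024]
}
```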
Also, we rescale the NeRD datasets before training, as shown here: https://github.com/NVlabs/nvdiffrec/blob/main/data/download_datasets.py
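If you want to apply a similar rescale to your own captures, below is a minimal PIL sketch; the paths, scale factor, and resampling filter are placeholders, not the exact values the linked download script uses:

```python
import glob
import os

from PIL import Image

def rescale_images(src_dir: str, dst_dir: str, scale: float = 0.5) -> None:
    """Downscale every JPEG in src_dir by `scale` and write it to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for path in glob.glob(os.path.join(src_dir, "*.jpg")):
        img = Image.open(path)
        new_size = (int(img.width * scale), int(img.height * scale))
        # LANCZOS gives high-quality downsampling; the actual script may differ.
        img.resize(new_size, Image.LANCZOS).save(
            os.path.join(dst_dir, os.path.basename(path))
        )

# Example usage (hypothetical paths):
# rescale_images("data/nerd/moldGoldCape/images", "data/nerd/moldGoldCape/images_half")
```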
In general, we recommend training at as large a resolution as your GPU allows, as that improves both texture and geometry quality.
Hi, I do find that keeping the original resolutions converges much faster than training on resized images, but at higher resolutions I get CUDA OOM errors. Is there a tradeoff that hits a sweet spot, or is there an alternative?
Any help is appreciated.