Closed: jessey-git closed this issue 3 years ago
If you want to retrain the network for hdr, hdr+alb, and hdr+alb+nrm, then yes, you need to train the network 3 times, once per combination. However, you only need to train the combinations you will actually use.
You need to run preprocessing only once, with all the features you want to support (e.g. hdr, alb, nrm). Any feature included in the preprocessing step will be available for training, regardless of which other features were included alongside it.
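The relationship described above (one preprocessing run, one training run per feature combination) can be sketched as follows. This is only an illustration of the planning logic; the function and variable names are hypothetical and do not correspond to the actual training scripts:

```python
# Hypothetical sketch: preprocessing runs once with the union of all
# features; training then runs once per input-feature combination.

PREPROCESSED_FEATURES = {"hdr", "alb", "nrm"}  # single preprocessing run

# Only the combinations the application will actually use need training.
TRAIN_COMBINATIONS = [
    ("hdr",),
    ("hdr", "alb"),
    ("hdr", "alb", "nrm"),
]

def plan_training_runs(preprocessed, combinations):
    """Return the training runs that the preprocessed data can support."""
    runs = []
    for combo in combinations:
        # A combination is trainable as long as every feature in it was
        # included in the single preprocessing step.
        if set(combo) <= preprocessed:
            runs.append("+".join(combo))
        else:
            missing = set(combo) - preprocessed
            raise ValueError(f"features {missing} were not preprocessed")
    return runs

print(plan_training_runs(PREPROCESSED_FEATURES, TRAIN_COMBINATIONS))
# → ['hdr', 'hdr+alb', 'hdr+alb+nrm']
```

The key point the sketch encodes: the preprocessing set is a superset of every training combination, so one preprocessing pass suffices for all three training passes.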
Since you will get a separate weight blob for each combination, yes, the application has to pass the appropriate blob.
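At runtime the selection reduces to a lookup keyed by the exact set of input features the application provides. A minimal sketch of that logic, assuming blob file names along the lines of the ones below (treat the names as placeholders; only the selection logic matters):

```python
# Hypothetical mapping from the set of input images the application
# supplies to the weight blob trained for that exact combination.
WEIGHT_BLOBS = {
    frozenset({"hdr"}): "rt_hdr.tza",
    frozenset({"hdr", "alb"}): "rt_hdr_alb.tza",
    frozenset({"hdr", "alb", "nrm"}): "rt_hdr_alb_nrm.tza",
}

def select_weights(inputs):
    """Return the blob trained for exactly this input combination."""
    try:
        return WEIGHT_BLOBS[frozenset(inputs)]
    except KeyError:
        raise ValueError(f"no weights trained for {sorted(inputs)}") from None

print(select_weights(["hdr", "alb"]))  # → rt_hdr_alb.tza
```

Note that the lookup is exact rather than "best effort": weights trained for hdr+alb+nrm expect all three inputs, so falling back to a larger combination than the one provided would be incorrect.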
The file names in the validation data set should have the same format as in the training dataset. The Datasets section in the documentation describes datasets in general, not just for training. Apart from the filenames, there are no strict requirements regarding the sample counts or contents of the images. It's not even necessary to have a validation dataset. In general, though, the validation images should differ from the training ones as much as possible, so ideally you should use different scenes, but that is not strictly necessary.
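Since the only hard requirement on the validation set is that its filenames follow the same convention as the training set, a quick consistency check can be written as a sketch. The `<name>.<feature>.exr` pattern below is an assumption for illustration; use whatever pattern your training set actually follows:

```python
import re
from collections import defaultdict

# Assumed filename convention: <name>.<feature>.exr
# (match this to the convention your training dataset uses).
FILENAME_RE = re.compile(r"^(?P<name>.+)\.(?P<feature>hdr|alb|nrm|ref)\.exr$")

def check_dataset(filenames, required_features):
    """Group files by sample name; report samples missing any feature."""
    samples = defaultdict(set)
    for fn in filenames:
        m = FILENAME_RE.match(fn)
        if not m:
            raise ValueError(f"unexpected filename: {fn}")
        samples[m["name"]].add(m["feature"])
    # Empty result means every sample has every required feature image.
    return {name: required_features - feats
            for name, feats in samples.items()
            if not required_features <= feats}

files = ["scene1.hdr.exr", "scene1.alb.exr", "scene1.ref.exr",
         "scene2.hdr.exr", "scene2.alb.exr", "scene2.ref.exr"]
print(check_dataset(files, {"hdr", "alb", "ref"}))  # → {}
```

Running the same check over both the training and validation directories is an easy way to confirm the two sets use a consistent naming scheme before starting a long training run.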
Just a few questions about training with a custom data set. It might be good to update the README afterwards if you see fit.
For context, assume that just the HDR weights are required (and not the LDR/lightmap ones).
It seems like training should occur 3 times for a given data set: once for hdr, once for hdr+alb, and once more for hdr+alb+nrm. Is that the case?
Does pre-processing also need to occur 3 times, once for each of those combinations separately?
Is it up to the application to pass in the appropriate weight blob depending on which combination of passes is used?
What does the validation (rt_valid) data set look like?