Open junmin98 opened 1 year ago
Hello, sorry for the late reply.
I learned from this link that `img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)` is needed because `img = cv2.imread(PATH2EXR, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)` reads an EXR file in BGR channel order (conversely, `cv2.imwrite` treats its input buffer as BGR, so an RGB buffer gets its channels swapped when written to an EXR file). I am not sure whether this is the cause; you can test it, or compare cv2's output against pyexr's.
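To make the comparison concrete without reading an actual file, the BGR↔RGB relationship can be checked with a plain numpy sketch (the random array is a hypothetical stand-in for the buffer `cv2.imread(..., cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)` returns; `pyexr.read` would give the same data already in RGB order):

```python
import numpy as np

# Hypothetical stand-in for a decoded float32 EXR buffer in BGR order.
bgr = np.random.rand(4, 4, 3).astype(np.float32)

# For a 3-channel image, cv2.cvtColor(img, cv2.COLOR_BGR2RGB) amounts to
# reversing the channel axis, so the two readers can be reconciled with:
rgb = bgr[..., ::-1]

# Blue (channel 0 in BGR) must land in channel 2 of RGB, and vice versa.
assert np.array_equal(rgb[..., 2], bgr[..., 0])
assert np.array_equal(rgb[..., 0], bgr[..., 2])
```

If the network was trained on RGB inputs, feeding it BGR buffers silently swaps the red and blue channels, which is exactly the kind of error that hurts scores without breaking training.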
Installing the pyexr library:
Windows:
Linux:
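The concrete commands appear to have been lost from this comment; as a hedged reconstruction (the PyPI package name `pyexr` is assumed), the usual installation on both platforms is:

```shell
pip install pyexr   # Windows and Linux alike
# On Linux, if pip has to build the OpenEXR bindings from source, the
# C++ headers may be needed first (package name varies by distro), e.g.:
#   sudo apt-get install libopenexr-dev
```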
Besides, I will try to find out where the other two checkpoint files are located. Please let us know if there are any other problems.
Hello. First of all, thanks for sharing the code.
However, when I trained the model myself, neither the quantitative nor the qualitative results were satisfactory. For example, for the quantitative results I got Classroom: 30.524 (PSNR) / 0.967 (SSIM) and Livingroom: 32.248 / 0.971.
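For reference, PSNR figures like those above are typically computed as follows (a minimal numpy sketch; the repository's actual evaluation code may differ, e.g. in tone mapping or the assumed data range):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; `peak` is the maximum signal
    value (1.0 for normalized images, 255 for 8-bit)."""
    mse = np.mean((np.asarray(ref, np.float64) - np.asarray(test, np.float64)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB.
print(round(psnr(np.zeros((8, 8)), np.full((8, 8), 0.1)), 6))  # → 20.0
```

Because the metric is sensitive to how HDR values are mapped before comparison, a mismatch there can also explain a gap of a few dB between runs.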
Looking at the training curves, I think the loss is decreasing and the validation score is increasing as expected.
What I changed when training the model is `train_crops_every_frame=77 --> 80`, and, instead of using pyexr, the input was read with `cv2.imread(img_path, cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)`. However, I do not expect the latter (cv2) to cause any particular problem, because I obtain the same scores as the paper's quantitative results when testing with your checkpoints.
So, could you upload the checkpoints for the remaining scenes (Sponza and Sponza glossy) that have not been uploaded yet? Or could you let me know if there are any anticipated problems?
Any answers or sharing checkpoints would be greatly appreciated!