Open ghost opened 6 years ago
Yes, the training data is also all grayscale.
We use a randomly weighted grayscale conversion:
img_gray = random0to1_R * img_Red + random0to1_G * img_Green + random0to1_B * img_Blue
(Constraint: random0to1_R + random0to1_G + random0to1_B = 1, so min(img_gray) = 0 and max(img_gray) = 1)
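A minimal NumPy sketch of that conversion, assuming the image is a float array in [0, 1] with shape (H, W, 3); the function name and signature here are illustrative, not taken from the EPINET code:

```python
import numpy as np

def random_grayscale(img_rgb, rng=None):
    """Randomly weighted grayscale conversion.

    Draws three random channel weights and normalizes them to sum
    to 1, so the weighted sum of channels stays within [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    w = rng.random(3)   # random0to1_R, random0to1_G, random0to1_B
    w = w / w.sum()     # enforce the constraint: weights sum to 1
    return img_rgb @ w  # weighted sum over the color channel axis
```

Because the weights are non-negative and sum to 1, the output is a convex combination of the channels, which keeps the grayscale values inside the input range.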
I found that the grayscale conversion improves overall performance quite significantly, perhaps because it reduces the input complexity while preserving the light field structure.
That explains it. Thank you!
Hello Changha Shin. In the paper, you mention that augmentations such as randomly converting color to grayscale with weights in [0, 1] are used. However, reviewing "epinet_fun/func_makeinput.py", I notice that it has already converted the image to grayscale before returning the data. Is the training data all gray, or just part of it? Is this function only applied at test time, or during training as well? More details on the augmentation would be appreciated. Thank you for your great work.