JingleiSHI opened this issue 5 years ago
The pretrained weights provided on GitHub are the same as the weights (epinet9x9, epinet5x5) in our paper. However, we did not use the ensemble technique on GitHub, so the performance is slightly different from the performance reported in the paper. Thanks.
Thank you very much for your response. Have you ever tested your method on the old HCI data (stillLife, buddha, butterfly, monasRoom)? I have found that the estimated disparity maps have many artifacts for these scenes, and I'm not sure whether this is due to the model itself or to other reasons. Yours sincerely, Jinglei
I just tested it, and I think it works well with the pretrained weights (9x9).
The ordering (numbering) of the old HCI dataset is slightly different from ours.
So you need to convert the data into our format, as shown below.
import h5py
import numpy as np

from epinet_fun.func_generate_traindata import generate_traindata512

# load LF images from the old HCI file: shape (768, 768, 9, 9, 3)
f = h5py.File('stillLife/lf.h5', 'r')
LF = f['LF'][:]

# convert to (9, 9, 768, 768, 3)
LF = np.transpose(LF, (2, 3, 0, 1, 4))

# reverse order (the old HCI numbering differs from ours)
LF = LF[:, :, :, ::-1, :]

# add a batch dimension
LF_our_format = LF[np.newaxis, :, :, :, :, :]

# Setting02_AngualrViews is the angular-view setting used in the EPINET scripts
(val_90d, val_0d, val_45d, val_M45d, _) = generate_traindata512(
    LF_our_format, np.zeros(LF_our_format.shape[:-1]), Setting02_AngualrViews)
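As a quick sanity check before running the network, you can display the center view of the converted array. A minimal sketch, assuming matplotlib is available and that view (4, 4) is the center of the 9x9 grid:

import matplotlib.pyplot as plt
import numpy as np

# LF_our_format has shape (1, 9, 9, 768, 768, 3); (4, 4) is the center view of the 9x9 grid
center_view = LF_our_format[0, 4, 4]

plt.imshow(center_view.astype(np.uint8))
plt.title('stillLife center view after conversion')
plt.axis('off')
plt.show()

If the center view looks flipped or scrambled, the view ordering is probably still wrong.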
Hi Dr. Shin, thank you very much for your advice; I'll retest these scenes. If it's not too much trouble, could you please send me the disparity map of stillLife that you obtained? It would help me verify that I am ordering the input images correctly. Could you also send me your 7x7 model (jlshi@outlook.com)? I'm very interested in how the performance evolves as the stream length increases. Thank you for your attention. Yours sincerely, Jinglei SHI
Disparity result of stillLife --> stillLife_9x9.zip Sorry, we only have the checkpoint files for the 5x5 and 9x9 viewpoints. I don't know where the 7x7 one is; I couldn't find it... I'm re-training the model now, and I will upload it soon.
Thank you very much!
Sorry Dr. Shin, I have another question about your paper: I found that you compare against the method 'Neural EPI-volume Networks' by Stefan Heber. Where did you find their source code and dataset? I searched for them but couldn't find them. Thank you for your attention.
We emailed him to request their code and dataset, and received a link to the dataset.
Hi Dr. Shin, have you ever tried training the model without excluding the reflection and refraction regions? I also found that in your paper you removed textureless regions, i.e. patches where the MAD between the center pixel and the other pixels is less than 0.02. Do the reflection/refraction regions and the textureless regions significantly affect the final performance, or do they make convergence harder? Thank you for your attention! Yours sincerely, Jinglei
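For context, my reading of that textureless criterion is roughly the following sketch; the function name, the assumption that the patch is grayscale, and the normalization to [0, 1] are mine, not from the paper:

import numpy as np

def is_textureless(patch, threshold=0.02):
    # patch: (H, W) grayscale patch with values normalized to [0, 1]
    center = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    mad = np.mean(np.abs(patch - center))  # mean absolute difference to the center pixel
    return mad < threshold

Is this roughly what you do when selecting training patches?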
Hi Dr. Shin, thank you very much for your excellent work; it has helped me a lot. However, I was a little confused when I tested the HCI light field scenes (boxes, cotton, dino, sideboard) with the provided models (pretrained 9x9 and 5x5): I found that the performance of the estimates (MSE and bad pixel ratio) differs from the results published on the HCI benchmark website. I guess the models you provide may be different from the ones you used for the HCI benchmark submission? Do you have any idea about the difference? Thank you for your attention! Yours sincerely, Jinglei
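For reference, this is how I compute the two metrics locally; a minimal sketch assuming the usual HCI conventions (MSE scaled by 100 and BadPix with a 0.07 threshold) and ignoring any boundary masking the benchmark may apply:

import numpy as np

def hci_metrics(disp_est, disp_gt, badpix_thresh=0.07):
    # disp_est, disp_gt: (H, W) disparity maps
    err = np.abs(disp_est - disp_gt)
    mse_x100 = np.mean(err ** 2) * 100.0           # MSE scaled by 100, as on the benchmark site
    badpix = np.mean(err > badpix_thresh) * 100.0  # percentage of pixels with error above the threshold
    return mse_x100, badpix

Please let me know if your evaluation differs from this (e.g. a boundary crop), since that alone could explain part of the gap.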