yuanming-hu / fc4

Code and resources for "FC4: Fully Convolutional Color Constancy with Confidence-weighted Pooling" (CVPR 2017)
MIT License

Did you train colorchecker_fold1and2.ckpt with SEPERATE_CONFIDENCE = False? #9

Closed yt605155624 closed 6 years ago

yt605155624 commented 6 years ago

In your code fcn.py, you say that if not SEPERATE_CONFIDENCE, you use "way 2" confidence-weighted pooling:

  "Just output unnormalized R, G, B, and simply take the sum and normalize. Thus we are weighting using the length."

This differs from your paper (I thought the method you stress in the paper is way 1; if not, please correct me). So, forgive me, I have two questions:

  1. Did you train colorchecker_fold1and2.ckpt with SEPERATE_CONFIDENCE = False?
  2. Does way 2 mean the average pooling (c = 1) in your paper?

By the way, you use three-fold cross-validation to get the final score. Does this mean I can get fold 0's scores by training on folds 1 and 2, fold 1's scores by training on folds 0 and 2, and fold 2's scores by training on folds 0 and 1, and then just take the arithmetic mean of them? Or do I have to take the geometric mean to get the final scores? So many questions, hh, forgive me, and wishing for your early reply~!
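For concreteness, here is how I read the two pooling variants in NumPy (shapes and function names are my own sketch; the actual fcn.py operates on TensorFlow tensors):

```python
import numpy as np

def pool_way1(rgb, conf):
    """Way 1: a separate confidence channel weights normalized estimates.

    rgb: (H, W, 3) per-location illuminant estimates
    conf: (H, W) per-location confidence values
    """
    rgb = rgb / np.linalg.norm(rgb, axis=-1, keepdims=True)  # normalize each estimate
    est = (rgb * conf[..., None]).sum(axis=(0, 1))           # confidence-weighted sum
    return est / np.linalg.norm(est)

def pool_way2(rgb):
    """Way 2: sum unnormalized outputs; the vector length acts as confidence.

    rgb: (H, W, 3) unnormalized per-location outputs
    """
    est = rgb.sum(axis=(0, 1))
    return est / np.linalg.norm(est)
```

Under this reading, way 2 is equivalent to way 1 with the confidence of each location set to the length of its unnormalized output vector, which is why it is not average pooling.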
yuanming-hu commented 6 years ago

Thanks for the questions. Quick answers for now: 1) Yes, but it doesn't mean average pooling since the R,G,B values are not normalized. A brief discussion is in the beginning of section 4.1; I will add some clarification. 2) There is a script for this and I will post it tonight.

yt605155624 commented 6 years ago

Thanks for your reply! I have tried your training code and got some surprising results just by using way 2! Why didn't you use way 1? I think that method might get a better result. Waiting for your code~! ╭(●`∀´●)╯╰(●’◡’●)╮

yuanming-hu commented 6 years ago

Actually, either way is confidence-weighted. You can disable it and use average pooling by setting WEIGHTED_POOLING=False in the config.

yuanming-hu commented 6 years ago

Hi, I think the answers are updated in FAQs. Could you take a look?

yt605155624 commented 6 years ago

Hello, I have checked your combine.py and found that it should be models = sys.argv[1:] in combine.py; otherwise we miss the final model.
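A minimal sketch of the fix (file names are hypothetical). Taking every argument after the script name keeps the final model in the list:

```python
import sys

def parse_models(argv):
    # Take every argument after the script name. The old code apparently
    # reserved the last argument for something else, which dropped the
    # final model checkpoint from the list.
    return argv[1:]

models = parse_models(sys.argv)
```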

yt605155624 commented 6 years ago

I have checked your paper again and found what you said in Section 4.1. Sorry for missing it the first time, and thank you for your patience~

yuanming-hu commented 6 years ago

Thanks! The issue in combine.py is fixed now. (I used to use the last parameter to specify the dataset; that is no longer necessary.) Let me know if you have more questions!

yt605155624 commented 6 years ago

I have tried the training code using way 1 without changing config.py (6000 iterations), and found the result was very bad:

188 images tested. Results: 25: 1.087, med: 2.480, tri: 2.620, avg: 3.113, 75: 6.232, 95: 7.546. Test time: 6.77586102486 (0.0360418076211 per image)

compared with way 2:

188 images tested. Results: 25: 0.367, med: 1.077, tri: 1.231, avg: 1.583, 75: 3.638, 95: 4.386. Test time: 6.36561894417 (0.0338596701622 per image)

So I want to know whether way 1 needs more iterations. Do you remember approximately how many it needs?
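For reference, the statistics in that printout appear to follow the usual color-constancy convention, where "25" and "75" are the means of the best and worst 25% of per-image angular errors. This is my reading of the labels; a rough NumPy sketch:

```python
import numpy as np

def summarize_angular_errors(errors):
    """Summary statistics over per-image angular errors (degrees).

    Assumed to correspond to the repo's printout:
    best-25% mean, median, trimean, mean, worst-25% mean, 95th percentile.
    """
    e = np.sort(np.asarray(errors, dtype=float))
    q1, med, q3 = np.percentile(e, [25, 50, 75])
    k = max(len(e) // 4, 1)  # size of a quartile
    return {
        "25": e[:k].mean(),                # mean of best 25%
        "med": med,
        "tri": (q1 + 2 * med + q3) / 4.0,  # trimean
        "avg": e.mean(),
        "75": e[-k:].mean(),               # mean of worst 25%
        "95": np.percentile(e, 95),
    }
```

Under this convention, combining three folds means pooling all per-image errors and computing the statistics over the union, rather than averaging per-fold numbers.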

yuanming-hu commented 6 years ago

Interesting... I haven't used way 1 for a long time, and given the error difference there may be some bugs in the code. When I switched to way 2, the overall error on the Color Checker dataset was basically the same. Can you try to visualize the confidence values of way 1 to see if they are reasonable?

I've got too many things to do this week and let me take a closer look this weekend.

yuanming-hu commented 6 years ago

The number of iterations needed by the two ways should be the same.