Open lukaszbinden opened 2 years ago
Hi Lukas. Thanks for your interest in our work!
If your method requires training, then it would be fair to train FWCRF as well; but if your method is a heuristic, then you can use the default values specified in the code, which give decent results on PASCAL VOC.
Please do not hesitate to let me know if you need further information.
Thanks for getting back! If I understand correctly, in Table 1 you used an untrained Potts FWCRF (l2FW) directly on top of DeepLabv3+ w/ RN101 for VOC and already improved on the CNN baseline? In that case we will train FWCRF as well, following your description in Section 5.3.
Hi Lukas. That is correct: using the Potts model already improves over DeepLabv3 and DeepLabv3+. Note, though, that the parameters of the Potts model were set according to Krähenbühl and Koltun (α = 80, β = 13, γ = 3; see Appendix E.1), and that β should be scaled depending on how the inputs are normalized. For example, in DeepLabv3+ the images are not in the range [0, 255] but in [-1, 1] (to ensure compatibility with the TensorFlow pre-trained weights released by the original authors), so β should be rescaled accordingly: (13/255)·2 ≈ 0.1.
(In my experiments I forgot the factor of 2 and used 0.05, which I think wouldn't affect the final results much.)
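To make the rescaling explicit, here is a minimal sketch. β is a bandwidth in image-intensity units, so it scales linearly with the width of the input range; the `rescale_beta` helper below is hypothetical, not part of the FWCRF code.

```python
def rescale_beta(beta, old_range, new_range):
    """Rescale the bilateral bandwidth beta when image intensities move
    from old_range to new_range (hypothetical helper for illustration).

    beta is expressed in intensity units, so it scales with the width
    of the intensity range.
    """
    old_width = old_range[1] - old_range[0]
    new_width = new_range[1] - new_range[0]
    return beta * new_width / old_width

# Krähenbühl & Koltun default, tuned for images in [0, 255]:
beta_255 = 13.0

# DeepLabv3+ normalizes images to [-1, 1], a range of width 2:
beta_scaled = rescale_beta(beta_255, (0.0, 255.0), (-1.0, 1.0))
print(round(beta_scaled, 3))  # (13 / 255) * 2 ≈ 0.102
```

Dropping the factor of 2, as mentioned above, corresponds to scaling by 1/255 only, which gives ≈ 0.05.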
Also note that the backbone of DeepLabv3 is ResNet101, whereas that of DeepLabv3+ is Xception65.
If you have any issues in training FWCRF, I would be happy to help.
Hi, first of all, thanks for the impressive work.
We are experimenting with a novel post-processing method (aiming at CVPR) and would like to benchmark ours against Frank-Wolfe dense CRFs (FWCRF). To that end, we are not sure whether we can use FWCRF directly as is (following Section 5.2) for inference on PASCAL VOC (as defined below, with alpha, beta, and gamma adjusted according to Section E.1), or whether we need to train it first on the training data of the respective dataset.
Many thanks in advance for getting back to us.
Best wishes, Lukas