Closed: PlumedSerpent closed this issue 4 years ago.
May I ask which task and dataset you are working on, and what results you got with each of the two loss functions?
I ported your function in and got results different from mine: your code produces slightly smaller values on pixels where diff_abs >= theta. So there is probably a minor error in your loss[idx_bigger] branch.
For reference, here is my toy input and results:
Prediction: tensor([[1.0000, 0.5000, 0.5000], [0.5000, 0.2500, 0.5000], [0.5000, 0.8000, 0.4000]])
Label: tensor([[0.9800, 0.2500, 0.2500], [0.2500, 1.0000, 0.2500], [0.2500, 0.2500, 0.7500]])
Mine: tensor([[0.1740, 1.0378, 1.0378], [1.0378, 8.1260, 1.0378], [1.0378, 4.2064, 3.0384]])
Yours: tensor([[0.1740, 1.0378, 1.0378], [1.0378, 7.8099, 1.0378], [1.0378, 3.9899, 3.0384]])
Hope that helps.
I will release my inference code first, during ICCV 2019 (hopefully), and then the training code.
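For anyone comparing implementations in the meantime, here is a minimal PyTorch sketch of the piecewise loss as written in the paper. This is my own reading, not the official code: the parameter defaults omega=14, epsilon=1, alpha=2.1, theta=0.5 and all names are assumptions.

```python
import torch

def adaptive_wing_loss(pred, target, omega=14.0, theta=0.5, epsilon=1.0, alpha=2.1):
    """Elementwise adaptive wing loss (no reduction), per the paper's formulation.

    Log branch for |y - y_hat| < theta, linear branch (slope A, offset C)
    otherwise. Defaults are assumed from the paper, not the released code.
    """
    diff_abs = (target - pred).abs()
    # Exponent alpha - y adapts the loss shape to the ground-truth intensity.
    power = alpha - target
    # Linear-branch slope A and offset C, chosen so the two branches are
    # continuous and smooth at diff_abs == theta.
    a = omega * (1.0 / (1.0 + (theta / epsilon) ** power)) * power \
        * (theta / epsilon) ** (power - 1.0) / epsilon
    c = theta * a - omega * torch.log1p((theta / epsilon) ** power)
    return torch.where(
        diff_abs < theta,
        omega * torch.log1p((diff_abs / epsilon) ** power),  # nonlinear branch
        a * diff_abs - c,                                     # linear branch
    )
```

On the toy tensors above, this sketch reproduces the second ("Yours") set of values to four decimal places.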
@protossw512 Thanks for your help. I am working on the 300W dataset with an HRNet backbone. The resulting NME is 3.37 with MSE and 3.34 with my adaptive wing loss implementation. As for your toy input, I computed it step by step by hand, and it seems my result is correct for theta=0.5, y=0.25, y_hat=0.80. Could you please check it again?
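For that element (y = 0.25, y_hat = 0.80, so |y - y_hat| = 0.55 >= theta), the linear branch of the paper's formula applies. A quick scalar check, assuming the paper's defaults omega=14, epsilon=1, alpha=2.1:

```python
import math

# Paper defaults (assumed): omega=14, epsilon=1, alpha=2.1, theta=0.5.
omega, epsilon, alpha, theta = 14.0, 1.0, 2.1, 0.5

y, y_hat = 0.25, 0.80          # ground truth and prediction for one pixel
diff = abs(y - y_hat)          # 0.55 >= theta, so the linear branch applies

power = alpha - y              # 1.85
# Slope A and offset C that make the linear branch continuous and smooth
# with the log branch at diff == theta.
A = omega * (1.0 / (1.0 + (theta / epsilon) ** power)) * power \
    * (theta / epsilon) ** (power - 1.0) / epsilon
C = theta * A - omega * math.log1p((theta / epsilon) ** power)

loss = A * diff - C
print(round(loss, 4))          # 3.9899
```

This agrees with the 3.9899 entry in the second tensor above rather than the 4.2064 entry in the first.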
@PlumedSerpent I double-checked my code, and you are right: I was on an old experimental branch with settings different from those in my paper. I would suggest you try different parameter settings for omega, epsilon, and theta.
@protossw512 Judging from your ablation study, I don't think different parameter settings would improve things much. I'll try to re-implement the other contributions in your paper to get a better result.
Hi, I am very interested in your excellent paper, and I implemented it in PyTorch myself. However, it's weird that I didn't get an obvious improvement over MSE loss. Could you help me check whether my implementation is correct? Or do you have any plans to release your implementation soon?