thegodone opened this issue 2 years ago
For anyone who wants to reproduce this: you need to modify the repo and add SmoothL1 (a.k.a. Huber loss) to get the correct accuracy values. Also, something that is clearly not in the paper (but is in the code): you need to use weight decay to regularize the network weights.
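For reference, SmoothL1 (the Huber loss with β = 1, the same per-element formula as PyTorch's `nn.SmoothL1Loss`) and a standard SGD weight-decay update can be sketched in a few lines. This is my own minimal sketch, not code from the repo, and the function names are mine:

```python
def smooth_l1(x, beta=1.0):
    """SmoothL1 / Huber loss for a single residual x:
    quadratic for |x| < beta, linear in the tails."""
    ax = abs(x)
    if ax < beta:
        return 0.5 * ax * ax / beta
    return ax - 0.5 * beta

def sgd_step_with_weight_decay(w, grad, lr=0.1, weight_decay=5e-4):
    """One SGD update with standard L2-style weight decay folded
    into the gradient: the decay term pulls the weight toward zero."""
    return w - lr * (grad + weight_decay * w)
```

In PyTorch this corresponds to using `nn.SmoothL1Loss()` as the criterion and passing `weight_decay=...` to the optimizer constructor.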
Hi,
I ran your example on CIFAR-10 (32x32), with init=16, kernel=3:
Another question: I notice that you also concatenate the approximation into det. I don't see this in the paper? https://github.com/mxbastidasr/DAWN_WACV2020/blob/7a5876c3dc27d3515eaaa76b57b09b9c29a002b5/models/dawn.py#L310
Finally, the paper says you use the Huber loss, but in the code it is clearly an L1 norm instead: https://github.com/mxbastidasr/DAWN_WACV2020/blob/7a5876c3dc27d3515eaaa76b57b09b9c29a002b5/models/dawn.py#L88 Why change this part?
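The practical difference between the two penalties is only near zero: the L1 norm stays linear everywhere (gradient ±1), while the Huber/SmoothL1 penalty becomes quadratic for small coefficients, so its gradient vanishes smoothly. A minimal pure-Python comparison (the coefficient values here are made up for illustration, not taken from the repo):

```python
def l1_penalty(coeffs):
    """Plain L1 norm: sum of absolute values."""
    return sum(abs(c) for c in coeffs)

def huber_penalty(coeffs, beta=1.0):
    """Huber / SmoothL1 penalty: quadratic for |c| < beta, linear beyond."""
    total = 0.0
    for c in coeffs:
        ac = abs(c)
        total += 0.5 * ac * ac / beta if ac < beta else ac - 0.5 * beta
    return total

coeffs = [0.2, -0.4, 3.0]
print(round(l1_penalty(coeffs), 6))     # 3.6
print(round(huber_penalty(coeffs), 6))  # 2.6 -- smaller: small coefficients are penalized less
```

Both penalize the large coefficient almost identically (3.0 vs 2.5), so swapping one for the other mostly changes how hard small wavelet coefficients are pushed toward zero.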
In the paper you only write c_x for the approximation in Eq. 7, but in the code you also consider d_HL and c_LL as additional constraints?
I also cannot reproduce the 86% accuracy reported in the table; I get 82% instead.
Thanks for helping me.