Hi! Thank you for your work in bringing new benchmarks and a unified approach to evaluation for the WSOL community.
In the CVPR 2020 paper, some of the tables appear to be inconsistent with those in https://docs.google.com/spreadsheets/d/1O4gu69FOOooPoTTtAEmFdfjs2K0EtFneYWQFk8rNqzw/edit#gid=0. For example, the result for InceptionV3 with ACoL in the paper is 63.0, but the corresponding value in the linked spreadsheet is lower. Am I reading the spreadsheet incorrectly? Looking forward to your reply!