HaoKun-Li opened 3 years ago
Hi,
Thanks for asking. For this project, the CUDA implementation of the adder layer is still slow, so training on ImageNet would take several months. We did not include final ImageNet results in our paper because we only half-trained the model.
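For context, the adder layer replaces the multiply-accumulate of a standard convolution with a negative L1 distance between filter and input patch, so the similarity measure itself needs no multiplications. Below is a minimal NumPy sketch of that idea (a naive loop-based reference, not the project's CUDA kernel; the function name and shapes are illustrative assumptions):

```python
import numpy as np

def adder_conv2d(x, w):
    """Adder-style 'convolution' (illustrative sketch).

    x: input of shape (C, H, W)
    w: filters of shape (K, C, kh, kw)
    Returns (K, H-kh+1, W-kw+1): the negative L1 distance
    between each filter and each input patch.
    """
    K, C, kh, kw = w.shape
    _, H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = x[:, i:i + kh, j:j + kw]
                # No multiplications: only subtraction, abs, and sum.
                out[k, i, j] = -np.abs(patch - w[k]).sum()
    return out
```

The triple loop makes the cost obvious: without a fused GPU kernel, every output element touches a full patch, which is why a slow CUDA implementation makes ImageNet-scale training impractical.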
That said, we did manage to scale to ImageNet in our follow-up work, ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks (ICML 2022). In that work, we search over networks composed of both multiplication-based and multiplication-free blocks; the resulting models reach up to 83% accuracy on ImageNet.
We also have ongoing work applying the shift-and-add ideas to Vision Transformers on ImageNet; please stay tuned.
Shift&Add on ViTs and ImageNet: https://arxiv.org/abs/2306.06446
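To make the "shift" half of shift-and-add concrete: hardware-friendly shift layers typically round each weight to a signed power of two, so multiplying by a weight becomes a bit-shift. A small NumPy sketch of that quantization step (an illustrative assumption, not the exact scheme used in the papers above):

```python
import numpy as np

def quantize_to_power_of_two(w):
    """Round each weight to the nearest signed power of two.

    Multiplying an activation by 2**n is a bit-shift by n in
    fixed-point hardware, so a layer with such weights needs
    only shifts and additions. Zeros are left at zero.
    """
    sign = np.sign(w)
    # Round the exponent in log2 space; the small epsilon
    # guards against log2(0) for zero weights.
    exponent = np.round(np.log2(np.abs(w) + 1e-12))
    return sign * (2.0 ** exponent)
```

For example, a weight of 1.5 rounds up to 2.0 (shift left by 1) and 0.3 rounds down to 0.25 (shift right by 2).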
Hi, are there any experimental results or code for the ImageNet dataset available? Thanks!