NVlabs / A-ViT

Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022)
Apache License 2.0

Cannot Reproduce Reported Accuracy #9

Open johnheo opened 1 year ago

johnheo commented 1 year ago

Hello,

I am unable to reproduce the reported validation results for A-ViT-Tiny. Training on four RTX A6000 GPUs with the exact same training script yields 68.17% Top-1 and 88.816% Top-5 accuracy. Other commenters in this thread appear to have the same issue and report similar numbers. Could the authors please look into this?
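For reference, the Top-1/Top-5 numbers quoted above are the standard top-k accuracy metrics: a sample counts as correct under Top-k if its true label is among the k highest-scoring classes. A minimal pure-Python sketch (not the repository's evaluation code, which uses `timm`'s utilities):

```python
def topk_accuracy(logits, targets, ks=(1, 5)):
    """Percentage of samples whose true label is among the top-k scores.

    logits:  list of per-class score lists, one per sample
    targets: list of true class indices
    Returns {k: accuracy_percent} for each k in ks.
    """
    hits = {k: 0 for k in ks}
    for scores, label in zip(logits, targets):
        # Class indices ranked by descending score.
        ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
        for k in ks:
            if label in ranked[:k]:
                hits[k] += 1
    n = len(targets)
    return {k: 100.0 * hits[k] / n for k in ks}


# Toy example with 6 classes and 2 samples:
logits = [
    [0.0, 0.1, 0.2, 0.3, 0.4, 0.5],  # top-1 prediction: class 5
    [0.5, 0.4, 0.3, 0.2, 0.1, 0.0],  # top-1 prediction: class 0
]
targets = [5, 1]  # second sample misses top-1 but lands in top-5
print(topk_accuracy(logits, targets))  # {1: 50.0, 5: 100.0}
```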

Thank you!