Amshaker / SwiftFormer

[ICCV'23] Official repository of paper SwiftFormer: Efficient Additive Attention for Transformer-based Real-time Mobile Vision Applications

Confirm dependencies for latency performance #13

Closed: escorciav closed this issue 9 months ago

escorciav commented 9 months ago

Hi,

The results on mobile are quite appealing.

  1. Could you kindly confirm the dependencies?
  2. Have you noticed performance improvement/degradation with a more recent computational stack?
    • PyTorch 1.11.0 seems a bit old. Did you use the stable release?
    • Actually, I'm more curious whether you tried torch.compile (not sure if it plays nicely with CoreML); see the sketch below.
    • What about newer hardware?
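
For context, this is roughly the latency check I have in mind with torch.compile (PyTorch >= 2.0). It's only a sketch: the models.swiftformer import path and the SwiftFormer_XS constructor are my assumptions about the repo layout, so adjust as needed.

```python
# Rough torch.compile latency sketch (assumed import path and model name).
import time
import torch
from models.swiftformer import SwiftFormer_XS  # assumption: matches this repo's layout

model = SwiftFormer_XS(pretrained=False).eval()
compiled = torch.compile(model)  # default Inductor backend

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    compiled(x)  # first call triggers compilation, exclude it from timing
    start = time.perf_counter()
    for _ in range(100):
        compiled(x)
    print(f"avg latency: {(time.perf_counter() - start) / 100 * 1e3:.2f} ms")
```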

FYI, I just created a fork to port the model onto Qualcomm QNN/SNPE via ONNX; a rough export sketch is below. Has anyone done that before?
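
In case it helps others, this is roughly how I export before feeding the QNN/SNPE converters. It's a sketch under the same assumptions (import path, constructor name, 224x224 input), not the exact script from my fork.

```python
# Minimal ONNX export sketch for a QNN/SNPE port (assumed import path and input size).
import torch
from models.swiftformer import SwiftFormer_XS  # assumption: matches this repo's layout

model = SwiftFormer_XS(pretrained=False).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "swiftformer_xs.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,          # older opsets tend to convert more smoothly
    do_constant_folding=True,
)
```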

Cheers, Victor

escorciav commented 9 months ago

Relevant issue #3

escorciav commented 9 months ago

BTW, requirements.txt does not mention einops. I fixed it via pip install einops, which installed einops-0.7.0.

I didn't check whether einops is mentioned in the README :)

Amshaker commented 9 months ago

Hi @escorciav,

Thanks for your interest in our work!

Best regards, Abdelrahman.

escorciav commented 9 months ago

Thanks a lot for the detailed & open reply. Very pleased!

Heads up: PyTorch Mobile might get deprecated; the new public successor is called ExecuTorch.