Tencent / TurboTransformers

a fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc) on CPU and GPU.

Can TurboTransformers be used for the ViT architecture? #243

Open zxDeepDiver opened 3 years ago

feifeibear commented 3 years ago

Sorry, I am not familiar with the ViT arch. Turbo has been used with both Encoder and Decoder Transformer architectures, which mainly focus on NLP tasks. If you develop your ViT DNN using huggingface, it is not hard to accelerate at least part of your code using Turbo.
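For context, a minimal sketch of the acceleration pattern the comment refers to, following the usage shown in the TurboTransformers README: load a huggingface encoder in PyTorch, then convert its weights into a Turbo runtime model with `from_torch`. This assumes the `turbo_transformers.BertModel.from_torch` and `set_num_threads` APIs as documented there; the model id and input shape are placeholders, and applying the same idea to ViT would mean reusing only the Transformer-encoder part of the model, provided its layer layout matches what Turbo expects.

```python
import torch
import transformers
import turbo_transformers

torch.set_grad_enabled(False)

# Limit CPU threads used by the Turbo runtime (optional).
turbo_transformers.set_num_threads(4)

# Load a standard huggingface encoder in PyTorch (placeholder model id).
torch_model = transformers.BertModel.from_pretrained("bert-base-uncased")
torch_model.eval()

# Convert the PyTorch weights into a TurboTransformers model.
turbo_model = turbo_transformers.BertModel.from_torch(torch_model)

# Run inference with the accelerated encoder on a dummy batch.
input_ids = torch.randint(low=0,
                          high=torch_model.config.vocab_size,
                          size=(1, 128),
                          dtype=torch.long)
turbo_output = turbo_model(input_ids)
```

Only the Transformer-encoder blocks benefit from this conversion; ViT-specific pieces such as the patch-embedding layer would stay in plain PyTorch, which is why the comment says "at least part of" the code can be accelerated.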