jpWang / LiLT

Official PyTorch implementation of LiLT: A Simple yet Effective Language-Independent Layout Transformer for Structured Document Understanding (ACL 2022)
MIT License

How to decrease inference time of LiLT? #42

Open piegu opened 1 year ago

piegu commented 1 year ago

Hi,

I'm using the Hugging Face libraries to run LiLT. How can I decrease inference time? Which code should I use?

I've already tried BetterTransformer (Optimum) and ONNX export, but neither of them accepts `LiltModel`.

Thank you.

Note: I asked this question here, too: https://github.com/NielsRogge/Transformers-Tutorials/issues/284