huggingface / optimum-neuron

Easy, fast and very cheap training and inference on AWS Trainium and Inferentia chips.
Apache License 2.0

Support layoutlm models inference #512

Open JingyaHuang opened 3 months ago

JingyaHuang commented 3 months ago

Feature request

As requested in the AWS Neuron caching repository, we first need to support the export and inference of LayoutLM models.

Tasks: feature-extraction / fill-mask / question-answering / text-classification / token-classification
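
Once support lands, the export for one of these tasks would presumably follow the usual `optimum-cli` pattern. A hedged sketch (the model id, static shapes, and output directory are illustrative assumptions, not part of this issue; it requires an AWS Neuron environment to run):

```shell
# Hypothetical export of a LayoutLM checkpoint to a Neuron-compiled model.
# Static input shapes are required for Neuron compilation; values here are examples.
optimum-cli export neuron \
  --model microsoft/layoutlm-base-uncased \
  --task question-answering \
  --batch_size 1 \
  --sequence_length 512 \
  layoutlm_neuron/
```

The compiled artifacts in `layoutlm_neuron/` could then be loaded with the corresponding `NeuronModelForXxx` class for inference on Inferentia.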

Motivation

Requested by users in aws-neuron/optimum-neuron-cache.

Your contribution

I can pick it up when I have the bandwidth, but I would encourage community members to contribute by following this guide; it should be very straightforward.