NielsRogge / Transformers-Tutorials

This repository contains demos I made with the Transformers library by HuggingFace.

Batch Inference on LayoutLMv2 Visual Question Answering #256


arunpurohit3799 commented 1 year ago

@NielsRogge Is batch inference possible for the LayoutLMv2 VQA task?

Currently, I have observed that on a Colab GPU, inference on a single question takes around 0.2-0.3 seconds, in this step:

```python
encoding = processor(image, question, return_tensors="pt")
```

I tried passing a list of images and questions instead; I did get the answers back as a list, but the total time was higher than running the questions one by one.

NielsRogge commented 1 year ago

Yes, batch inference is supported.
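For anyone landing here, a minimal sketch of what batched inference can look like. The checkpoint name and image paths below are placeholders (swap in the DocVQA fine-tuned LayoutLMv2 checkpoint you are already using), and the processor's default `apply_ocr=True` means `pytesseract` must be installed:

```python
import torch
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForQuestionAnswering

# Placeholder checkpoint: substitute the DocVQA fine-tuned LayoutLMv2
# model you already use for single-question inference.
checkpoint = "microsoft/layoutlmv2-base-uncased"

processor = LayoutLMv2Processor.from_pretrained(checkpoint)
model = LayoutLMv2ForQuestionAnswering.from_pretrained(checkpoint)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

# Hypothetical image files; any list of PIL images works.
images = [Image.open(p).convert("RGB") for p in ["doc1.png", "doc2.png"]]
questions = ["What is the invoice number?", "What is the total amount?"]

# Passing lists makes the processor build a single padded batch;
# padding/truncation bring all sequences to the same length.
encoding = processor(images, questions, return_tensors="pt",
                     padding=True, truncation=True)
encoding = {k: v.to(device) for k, v in encoding.items()}

# One forward pass over the whole batch.
with torch.no_grad():
    outputs = model(**encoding)

# Decode the highest-scoring start/end token span per example.
start_idx = outputs.start_logits.argmax(-1)
end_idx = outputs.end_logits.argmax(-1)
for i, question in enumerate(questions):
    tokens = encoding["input_ids"][i][start_idx[i] : end_idx[i] + 1]
    print(question, "->", processor.tokenizer.decode(tokens))
```

One possible reason a batched call was slower overall: the processor runs Tesseract OCR on each image on the CPU, which can dominate end-to-end latency, so batching only the model's forward pass may not help much. If OCR is the bottleneck, instantiating the processor with `apply_ocr=False` and passing pre-extracted words and boxes lets you cache that step.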