aashishpokharel opened 5 months ago
Hi,
Batched inference is supported by default in the Transformers library, as shown here: just pass multiple images to the image processor, and multiple target sizes to the post_process_object_detection method.
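A minimal sketch of that flow, assuming the Hugging Face microsoft/table-transformer-structure-recognition checkpoint and local image paths (both are assumptions, not taken from this thread):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

ckpt = "microsoft/table-transformer-structure-recognition"  # assumed TSR checkpoint
processor = AutoImageProcessor.from_pretrained(ckpt)
model = TableTransformerForObjectDetection.from_pretrained(ckpt)
model.eval()

# Hypothetical page images converted from the PDF; replace with your own paths.
images = [Image.open(p).convert("RGB") for p in ["page1.png", "page2.png"]]

# The processor resizes/pads all images into one batched tensor.
inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One (height, width) pair per image, in the same order as the batch.
# Note PIL's .size is (width, height), so it is reversed here.
target_sizes = torch.tensor([img.size[::-1] for img in images])
results = processor.post_process_object_detection(
    outputs, threshold=0.6, target_sizes=target_sizes
)

# results is a list with one dict per input image.
for res in results:
    for score, label, box in zip(res["scores"], res["labels"], res["boxes"]):
        print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```

The key point is that target_sizes must line up one-to-one with the input images so each image's boxes are rescaled back to its own original resolution.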
I have a PDF and have converted it to images. Now I want to pass them through TATR for Table Structure Recognition (TSR). Is there a way to run inference with batch size > 1 for TSR? Can it be done using inference.py, or do I need to run in eval mode?