autodistill / autodistill-llava

LLaVA base model for use with Autodistill.
https://docs.autodistill.com
Apache License 2.0

batch predictions #6

Open shashi-netra opened 4 months ago

shashi-netra commented 4 months ago

The Autodistill sample takes one image at a time. Is there a way to predict on a batch to maximize utilization of the GPU?

Samuel5106 commented 4 months ago

@shashi-netra, can you elaborate on the issue so that I can help you?

shashi-netra commented 4 months ago

Please refer to the code sample here:

def run_llava(video_file):
    # read_video and llava_model are defined elsewhere in the caller's code
    frame_images = read_video(video_file)  # list of individual frames
    preds = llava_model(frame_images)      # currently runs inference one frame at a time
    return preds

Instead of sending 1 frame at a time, hopefully I can send a batch to get batched predictions.
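Until the LLaVA wrapper exposes a batched API, one workaround is to chunk the frames yourself and call whatever batched inference entry point is available once per chunk. Below is a minimal, model-agnostic sketch; `predict_batch` is a hypothetical callable (not part of autodistill-llava) standing in for a function that accepts a list of frames and returns one prediction per frame.

```python
from typing import Callable, Iterator, List, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def chunked(items: List[T], batch_size: int) -> Iterator[List[T]]:
    """Yield successive fixed-size batches from a list of frames."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def run_batched(
    frames: List[T],
    predict_batch: Callable[[List[T]], List[R]],
    batch_size: int = 8,
) -> List[R]:
    """Run inference batch-by-batch and flatten the results.

    predict_batch is a placeholder for a batched inference call; it must
    accept a list of frames and return one prediction per frame, in order.
    """
    preds: List[R] = []
    for batch in chunked(frames, batch_size):
        preds.extend(predict_batch(batch))
    return preds
```

The batch size would then be tuned to GPU memory: larger batches improve utilization until the model's activations no longer fit.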