Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

Does SPHINX-MLLM support batch inference for image caption #154

Closed trouble-maker007 closed 6 months ago

trouble-maker007 commented 9 months ago

How can I perform batch inference for image captioning with SPHINX?

ChrisLiu6 commented 9 months ago

#149

Note that SPHINX inherits from `accessory.model.meta.MetaModel`, so methods like `generate` are still usable.
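A minimal sketch of how batching could be wired up around the inherited `generate` method. The chunking helper below is runnable as-is; the model-loading call and the exact `generate` signature in the commented usage are assumptions for illustration, not verified against a specific LLaMA2-Accessory release:

```python
# Hypothetical batch-captioning sketch. Only the chunking helper is concrete;
# the model calls in the comments are assumed, not an exact API reference.
from typing import Iterable, List


def batched(items: List[str], batch_size: int) -> Iterable[List[str]]:
    """Yield successive fixed-size chunks of a list of image paths."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]


# Illustrative usage (requires a GPU and the accessory package installed):
#
# from accessory.model.meta import MetaModel
# model = MetaModel.from_pretrained("path/to/SPHINX")        # assumed loader
# image_paths = ["img0.jpg", "img1.jpg", "img2.jpg", "img3.jpg"]
# for batch in batched(image_paths, batch_size=2):
#     prompts = ["Describe the image."] * len(batch)
#     captions = model.generate(prompts, images=batch)       # assumed signature
```

Keeping all prompts in a batch the same length (or padding them) tends to make batched decoding simpler; check issue #149 for the discussion this answer points to.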