Alpha-VLLM / LLaMA2-Accessory

An Open-source Toolkit for LLM Development
https://llama2-accessory.readthedocs.io/

Does SPHINX-MLLM support batch inference for image captioning? #154

Closed trouble-maker007 closed 2 months ago

trouble-maker007 commented 5 months ago

How do I perform batch inference for image captioning?

ChrisLiu6 commented 5 months ago

#149

Note that SPHINX inherits from accessory.model.meta.MetaModel, so inherited methods such as generate remain available.