getomni-ai / zerox

PDF to Markdown with vision models
https://getomni.ai/ocr-demo
MIT License

local llm #64

Open zhanpengjie opened 1 month ago

zhanpengjie commented 1 month ago

How do I set `base_url` and `model` in the Python SDK?

pradhyumna85 commented 1 month ago

@zhanpengjie which local model are you trying to use? Does it have vision capability?

torrischen commented 1 month ago

I also have a problem using Llama-3.2-90B-Vision with vLLM. The error says an environment variable is missing.

pradhyumna85 commented 4 weeks ago

@torrischen refer to the `model` and `api_base` params here and pass them accordingly in zerox: https://docs.litellm.ai/docs/providers/vllm

Also refer #65
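
A minimal sketch of what that might look like, assuming zerox forwards extra keyword arguments through to litellm. The `hosted_vllm/` model prefix and `api_base` usage follow the litellm vLLM provider docs linked above; the model name, URL, and the `pyzerox` call shape here are illustrative assumptions, not verified against the SDK:

```python
import asyncio

def vllm_params(model: str, api_base: str) -> dict:
    """Build litellm-style params for an OpenAI-compatible vLLM server.

    The "hosted_vllm/" prefix tells litellm to route the request to a
    self-hosted vLLM endpoint (per the litellm provider docs).
    """
    return {
        "model": f"hosted_vllm/{model}",
        "api_base": api_base,  # e.g. where `vllm serve` is listening
    }

async def main():
    # Hypothetical usage: pyzerox installed, vLLM server already running.
    from pyzerox import zerox

    params = vllm_params(
        "meta-llama/Llama-3.2-90B-Vision-Instruct",  # example model path
        "http://localhost:8000/v1",
    )
    result = await zerox(file_path="document.pdf", **params)
    print(result)

if __name__ == "__main__":
    asyncio.run(main())
```

If the server instead expects the base URL via an environment variable (as the error in this thread suggests), check the litellm provider page for the variable name it reads rather than guessing.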

torrischen commented 4 weeks ago

> @torrischen refer the model and api_base params here and pass it accordingly in zerox: https://docs.litellm.ai/docs/providers/vllm
>
> Also refer #65

Thanks, that's helpful.