Closed: tusharraskar closed this issue 4 months ago
Your current environment
from langchain_community.llms import VLLMOpenAI
from PIL import Image

llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://150.0.2.236:8888/v1",
    model_name="microsoft/Phi-3-vision-128k-instruct",
    model_kwargs={"stop": ["."]},
)

image = Image.open("invoice_data_images/PO - 042 (REVISED )_page_1.png")

prompt_1 = "Give me invoice date from given image"
messages = {
    "prompt": prompt_1,
    "multi_modal_data": {"image": image},
}

print(llm.invoke(messages))
How would you like to use vllm
I want to run inference with microsoft/Phi-3-vision-128k-instruct on images in LangChain, but I don't know how to integrate it with vLLM.

The LangChain vLLM integration belongs to the LangChain repo. Please ask on their repo instead.
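Not from the thread, but one possible workaround for the same situation: VLLMOpenAI wraps the text-only completions endpoint and takes plain string prompts, so the multi_modal_data dict above never reaches the model. A minimal sketch, assuming the server at http://150.0.2.236:8888/v1 also exposes the OpenAI-compatible chat API for this vision model, is to send the image as a base64 data URL through langchain_openai's ChatOpenAI instead:

# Sketch only: assumes the vLLM server serves microsoft/Phi-3-vision-128k-instruct
# through the OpenAI-compatible chat completions API at the URL below.
import base64

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    api_key="EMPTY",
    base_url="http://150.0.2.236:8888/v1",
    model="microsoft/Phi-3-vision-128k-instruct",
)

# Encode the local image as a base64 data URL, the format the chat API expects
# for image_url content parts.
with open("invoice_data_images/PO - 042 (REVISED )_page_1.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "Give me the invoice date from the given image"},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
    ]
)

print(llm.invoke([message]).content)

The message format mirrors the OpenAI vision API; whether the server accepts it depends on how the vLLM instance was launched (for example, with a suitable chat template), which is server configuration rather than a LangChain concern.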