Hi @itsPreto 👋🏻 Thanks for your interest in Maestro. Going local is what we want to do!
I went with what I'm familiar with, which is the llama.cpp server with ShareGPT4V-7B, but it should work with any other backend/model.
@SkalskiP you just need to define your custom payload with the parameters you want and then call the `prompt_image_local` function with this payload and the localhost URL. You can see this in `examples/image.py`:
# Custom payload function for the local llama.cpp server
def custom_payload_func(image_base64, prompt, system_prompt):
    return {
        "prompt": f"{system_prompt}. USER:[img-12]{prompt}\nASSISTANT:",
        "image_data": [{"data": image_base64, "id": 12}],
        "n_predict": 256,
        "top_p": 0.5,
        "temp": 0.2,
    }
# Convert the image to base64 and send the request to the local server
response = prompt_image_local(
    marked_image,
    "Find the crowbar",
    "http://localhost:8080/completion",
    custom_payload_func,
)
print(response)
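For anyone wiring this up against a different backend, here is a minimal sketch of what a function like `prompt_image_local` can do internally: base64-encode the image, build the payload, and POST it to the local server. This is an illustration of the idea, not Maestro's actual implementation (hence the `_sketch` suffix); the assumption that the generated text sits under a "content" key matches llama.cpp's /completion endpoint but may differ for other servers.

import base64
import io

import requests  # pip install requests
from PIL import Image  # pip install pillow


def prompt_image_local_sketch(image: Image.Image, prompt: str, url: str,
                              payload_func, system_prompt: str = "") -> str:
    # Encode the PIL image as base64 PNG for the JSON payload
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    image_base64 = base64.b64encode(buffer.getvalue()).decode("utf-8")

    # Build the backend-specific body and POST it to the local server
    payload = payload_func(image_base64, prompt, system_prompt)
    response = requests.post(url, json=payload, timeout=120)
    response.raise_for_status()

    # Assumption: llama.cpp's /completion returns the generation under "content"
    return response.json().get("content", "")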
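The nice part of this design is that all backend-specific details live in the payload function, so switching from llama.cpp to another local server only means writing a new `payload_func` that matches that server's request schema.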