jhc13 / taggui

Tag manager and captioner for image datasets

Server #228


geroldmeisinger commented 1 week ago

This pull request adds taggui/run_server.py, which starts a FastAPI server on port 11435 and provides a command-line and HTTP interface for making requests to it. The command-line and API structure follow Ollama's.
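
For orientation, the core of such a server can be sketched in a few lines. This is only an illustration of the structure (FastAPI and Uvicorn appear in the logs below, and the request fields follow the curl call shown further down); the actual run_server.py may be organized differently:

from fastapi import FastAPI
from pydantic import BaseModel
import uvicorn

app = FastAPI()

class GenerateRequest(BaseModel):
    model: str
    prompt: str
    images: list[str] = []  # base64-encoded images, following Ollama
    stream: bool = False

@app.post("/api/generate")
def generate(request: GenerateRequest):
    # Placeholder: the real endpoint passes the request to the loaded model.
    return {"type": "generate", "response": "..."}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=11435)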

Ollama keeps a model loaded and serves it via an HTTP server. Default settings are loaded from modelfiles. Requests are made via /api/generate, which also accepts optional settings as JSON. Settings are stateful, and any change is reflected in subsequent calls. Ollama also implements some vision models (like LLaVA, Moondream...) but relies entirely on llama.cpp as its model loader backend, which makes it very flexible and fast, but is also the reason CogVLM2 is not supported yet.
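
For comparison, a request against Ollama's own API looks like this, using Ollama's default port 11434 and its documented "options" field for the optional settings (the model name and option values are just examples):

import requests

response = requests.post("http://localhost:11434/api/generate", json={
    "model": "llava",
    "prompt": "describe this image",
    "stream": False,
    "options": {"temperature": 0.7},  # optional settings as JSON
})
print(response.json()["response"])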

This implementation only provides the bare minimum: serve, run, and /api/generate. I duplicated CaptioningThread into CaptioningCore and stripped out everything Qt-related, while keeping the code as unchanged as possible. Because Ollama expects an array of base64-encoded images, I also stripped any reference to img_path and img_tags (i.e. replace_template_variable). (To be precise, Ollama expects an array of base64-encoded images, of which only the first one is used by most vision models. I duplicated this behaviour.)
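
A sketch of what that behaviour amounts to; the helper name here is illustrative, not the actual CaptioningCore code:

import base64
import io
from PIL import Image

def decode_first_image(images: list[str]) -> Image.Image | None:
    # Hypothetical helper: mirror Ollama by using only the first image.
    if not images:
        return None
    data = base64.b64decode(images[0])
    return Image.open(io.BytesIO(data)).convert("RGB")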

$ python taggui/run_server.py serve
INFO:     Started server process [83587]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:11435 (Press CTRL+C to quit)
$ python taggui/run_server.py run THUDM/cogvlm2-llama3-chat-19B-int4
Loading THUDM/cogvlm2-llama3-chat-19B-int4...
Send a message path/to/image.png (/? for help)
>>> describe this image images/icon.png
The image depicts a stylized illustration of a landscape scene with two mountains in the foreground, a sun in the sky, and a cloud to the right of the sun. The mountains are rendered in shades of blue, suggesting they are made of stone or are covered in vegetation. The sun is a bright yellow circle, indicating it is shining. The cloud is white and fluffy, suggesting it is a cumulus cloud. The sky is a light blue, indicating a clear day. In the bottom right corner
$ curl http://localhost:11435/api/generate -d "{ \"model\": \"THUDM/cogvlm2-llama3-chat-19B-int4\", \"stream\": false, \"prompt\": \"describe this image\", \"images\": [\"$(base64 -w 0 ./images/icon.png)\"]}"
{"type":"generate","response":"The image depicts a stylized illustration of a landscape scene with two mountains in the foreground  a sun in the sky  and a cloud to the right of the sun. The mountains are rendered in shades of blue  suggesting they are made of stone or are covered in vegetation. The sun is a bright yellow circle  indicating it is shining. The cloud is white and fluffy  suggesting it is a cumulus cloud. The sky is a light blue  indicating a clear day. In the bottom right corner"}

(If you get an Argument list too long error, it's probably because your image is too big and its base64 expansion exceeds the shell's maximum command-line length.)
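
A client that encodes the image in Python sidesteps that shell limit entirely; this is equivalent to the curl call above:

import base64
import requests

with open("./images/icon.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

response = requests.post("http://localhost:11435/api/generate", json={
    "model": "THUDM/cogvlm2-llama3-chat-19B-int4",
    "stream": False,
    "prompt": "describe this image",
    "images": [image_b64],
})
print(response.json()["response"])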

The idea is to extract the model loader part from the rest and provide a minimal interface. Requests can be made entirely via HTTP, which allows more flexibility for UI applications and automation, while the server keeps the model loaded independently of any UI application for as long as it is running.
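
As an example of the kind of automation this enables, a script could caption a whole directory over HTTP while the server keeps the model loaded. The one-.txt-per-image convention here is just an assumption for illustration:

import base64
import pathlib
import requests

# Hypothetical batch run: caption every PNG in a folder and write the
# response next to each image as a .txt file.
for path in pathlib.Path("images").glob("*.png"):
    image_b64 = base64.b64encode(path.read_bytes()).decode("ascii")
    r = requests.post("http://localhost:11435/api/generate", json={
        "model": "THUDM/cogvlm2-llama3-chat-19B-int4",
        "stream": False,
        "prompt": "describe this image",
        "images": [image_b64],
    })
    path.with_suffix(".txt").write_text(r.json()["response"])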