Blaizzy / mlx-vlm

MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX.

[Feature Request] Only generate output when --verbose False #9

Closed · s-smits closed this issue 4 months ago

s-smits commented 5 months ago
import subprocess
import sys

# Invoke the mlx_vlm.generate CLI as a subprocess and capture its output.
command = [
    sys.executable,
    '-m', 'mlx_vlm.generate',
    '--model', 'qnguyen3/nanoLLaVA',
    '--max-tokens', '100',
    '--temp', '0.0',
    '--image', 'http://images.cocodataset.org/val2017/000000039769.jpg',
]

result = subprocess.run(command, capture_output=True, text=True)
# stdout currently contains the prompt template, the answer, and speed stats.
caption = result.stdout
print(caption)

Currently: the command prints the input (including the chat template), the generated output, and the generation speeds, all mixed together in stdout.

Proposed: with --verbose False, print only the generated output to the terminal, which makes the result much easier to consume programmatically.
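For illustration, here is how the wrapper above could look once the flag exists. The --verbose flag and its False value come from this request; that stdout would then contain only the caption is the hoped-for behavior, not a confirmed contract:

import subprocess
import sys

command = [
    sys.executable,
    '-m', 'mlx_vlm.generate',
    '--model', 'qnguyen3/nanoLLaVA',
    '--max-tokens', '100',
    '--temp', '0.0',
    '--image', 'http://images.cocodataset.org/val2017/000000039769.jpg',
    '--verbose', 'False',  # requested flag: suppress template and speed lines
]

result = subprocess.run(command, capture_output=True, text=True)
# Assumption: with --verbose False, stdout holds only the generated caption.
caption = result.stdout.strip()
print(caption)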

Blaizzy commented 5 months ago

Hey @s-smits

That makes a lot of sense.

I will add it to the next release I'm working on.

Blaizzy commented 5 months ago

How about this?

[screenshot: proposed output with --verbose False]

Blaizzy commented 5 months ago

However, with --verbose False streaming is disabled, so you'll wait for the final answer.
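For anyone scripting the CLI who still wants incremental output, the default (verbose) run appears to print as it generates, so its stream can be relayed line by line. A minimal sketch using only the standard library, reusing the model and image from the example above:

import subprocess
import sys

# Relay the CLI's streamed (verbose) output as it arrives
# instead of waiting for the process to finish.
proc = subprocess.Popen(
    [
        sys.executable, '-m', 'mlx_vlm.generate',
        '--model', 'qnguyen3/nanoLLaVA',
        '--max-tokens', '100',
        '--image', 'http://images.cocodataset.org/val2017/000000039769.jpg',
    ],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    print(line, end='', flush=True)
proc.wait()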

Blaizzy commented 4 months ago

This is done ✅ in the latest release!

https://github.com/Blaizzy/mlx-vlm/pull/10