-
Hello!
I am trying to install the server package with hipBLAS / ROCm support.
The install fails with a
`cc: error: unrecognized command-line option ‘-Wunreachable-code-break’; did you mean ‘-Wu…
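For context, `-Wunreachable-code-break` is a Clang-specific warning flag that GCC does not recognize, so this error usually means the build is invoking gcc as `cc`. A minimal workaround sketch, assuming a llama.cpp-style build and a standard ROCm layout (the `/opt/rocm` path is an assumption; adjust for your install), is to point the build at ROCm's bundled clang:

```shell
# Sketch, not a verified fix: -Wunreachable-code-break is accepted by clang
# but rejected by gcc, so select clang as the C/C++ compiler for the build.
# The /opt/rocm/llvm path is an assumption; adjust to your ROCm install.
export CC=/opt/rocm/llvm/bin/clang
export CXX=/opt/rocm/llvm/bin/clang++
# Then re-run the package's build/install step with these compilers in effect.
```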
-
I have been playing with most multimodal models based on LLaVA, and I can tell that Mini-Gemini (the 13B version) is one of the best, if not the best, for its size.
Keep up the good work and h…
-
### System info
GPU: A100
tensorrt 9.3.0.post12.dev1
tensorrt-llm 0.9.0
torch 2.2.2
### Reproduction
```
export MODEL_NAME="llava-1.5-7b-hf"
git clone https://huggingface.co/llava-hf/${MODEL…
-
I tried two GGUF conversions on an M2 Ultra (Metal), but no luck. I converted them myself and still get the same error.
Here is the first model I tried:
https://huggingface.co/guinmoon/MobileVLM-1.7B-GGUF…
-
https://github.com/LLaVA-VL/LLaVA-NeXT/blob/inference/docs/LLaVA-NeXT.md
In this example, your code generates a double "" in front of "user" for the
`prompt_question` variable.
Could you check if the…
y-rok updated
4 months ago
-
Any chance we could see a variant of each produced with the LLaVA 1.6 architecture? Thanks
-
I just recently added CI to llamafile, but I would like the capability to test this setup locally. It's not working, as it appears that binfmt_misc is missing.
### Bug report info
```plain te…
-
I'm seeing weird behavior with vision models.
I am using the Default LM Studio Windows config, which is the only one I have been able to get vision models to work with.
I have tried 2 differ…
-
Hello! Thanks for sharing such a nice project.
I have set up the environment following the instructions in the README.
When I run the inference example as follows (I have copied the run_vila.py file fr…
-
I am attempting to run the `finetune_onevision.sh` script. I've gotten many things sorted out, but I am stumped by the `--pretrain_mm_mlp_adapter` argument.
The default value as provided in the scr…