-
When I trained llava-llama3 using your code, the log printed tokenization mismatch warnings as below.
How can I fix this?
Thanks!
WARNING: tokenization mismatch: 55 vs. 54. (ignored)
WARNING: tokenization m…
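For context on what this warning means: in LLaVA-style preprocessing, each conversation turn is tokenized separately so that non-assistant turns can be masked out of the loss, and the per-turn token counts are then checked against the length of the tokenized full sequence. With a new tokenizer (e.g. llama-3's BPE), tokens can merge differently across turn boundaries, so the sums disagree and the sample is masked entirely. Below is a minimal sketch of that check, not the repository's actual code; the `build_labels` helper and the turn structure are assumptions for illustration, though `IGNORE_INDEX = -100` matches the usual convention.

```python
# Minimal sketch of the check behind "tokenization mismatch" warnings,
# modeled on LLaVA-style preprocessing. build_labels and the
# (trainable, token_ids) turn structure are illustrative assumptions.
IGNORE_INDEX = -100  # label value ignored by the cross-entropy loss

def build_labels(input_ids, turns):
    """Build per-token labels from a list of (trainable, turn_token_ids).

    Assistant turns keep their token ids as labels; all other turns are
    masked with IGNORE_INDEX. If the per-turn counts do not sum to the
    full-sequence length, the whole sample is masked and a warning is
    printed -- this is the mismatch reported in the training log.
    """
    labels = []
    for trainable, turn_ids in turns:
        if trainable:
            labels.extend(turn_ids)                      # train on this turn
        else:
            labels.extend([IGNORE_INDEX] * len(turn_ids))  # mask this turn
    if len(labels) != len(input_ids):
        print(f"WARNING: tokenization mismatch: {len(labels)} vs. "
              f"{len(input_ids)}. (ignored)")
        labels = [IGNORE_INDEX] * len(input_ids)         # drop the sample
    return labels

# Reproducing the 55 vs. 54 case: turns sum to 55 tokens, but the full
# sequence tokenized in one pass yields 54 tokens.
labels = build_labels([0] * 54, [(False, [0] * 30), (True, [0] * 25)])
# prints: WARNING: tokenization mismatch: 55 vs. 54. (ignored)
```

The usual fix is to make the conversation template (separators, role tags) consistent with the new tokenizer so that tokenizing turn-by-turn and tokenizing the whole prompt produce the same token count.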
-
Hi, I just saw on Reddit that there is a LLaVA model based on Llama 3. Could it be added to the library? Thanks.
Source: https://www.reddit.com/r/LocalLLaMA/comments/1ca8uxo/llavallama38b_is_released/
-
```
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.10/site-packages/mmengine/runner/_flexible_runner.py", line 1271, in call_hook
getattr(hook, fn_name)(self, **kwargs)…
```
-
### Question
Thank you for your great work!
I am trying to fine-tune llava-v1.6-mistral-7b on the provided GQA dataset, using the script `finetune_task_lora.sh`. However, the loss doesn't decrea…
-
Hello everyone, thank you for the great work!
I am trying to further fine-tune the LLaVA architecture using your implementation with LLaMA 3 Instruct 8B. I can already fine-tune the Vicuna model usi…
-
Mixed models like llama3 + llava are capable of superior things, such as recognizing a screenshot image and reconstructing it in HTML-like code, for example, if required. It would be…
-
### What is the issue?
```
NAME                  ID            SIZE    MODIFIED
glm-4-9b-chat:latest  5356a47a9286  6.3 GB  3 minutes ago
llama3:latest         …
```
-
I am trying to use the model llava-v1.6-mistral-7b-hf in the text-generation-webui demo, but I am getting errors. The last few lines of the error message read like:
/usr/local/lib/python3.10/dist…
-
https://github.com/LLaVA-VL/LLaVA-NeXT/blob/inference/docs/LLaVA-NeXT.md
In this example, your code generates a double "" in front of "user" in the prompt_question variable.
Could you check if the…
y-rok updated 2 months ago
-
Currently the trained llava model can only be used via the CLI (without the ability to use new images) or tested using benchmark tools.
How can we deploy it using an API or WebUI as a more user-friendly inte…
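One lightweight option, sketched below with only the Python standard library, is to wrap the model in a small HTTP endpoint that accepts a JSON payload and returns the answer as JSON. The `run_llava` function here is a hypothetical stand-in, not the project's actual inference API; replace it with a call into your loaded checkpoint.

```python
# Minimal HTTP API sketch around a trained model, stdlib only.
# run_llava is a hypothetical placeholder for real LLaVA inference.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_llava(image_path, prompt):
    """Placeholder: swap in your actual model call here."""
    return f"(stub) answer for {image_path!r}: {prompt!r}"

def handle_payload(payload, model_fn=run_llava):
    """Validate a request payload and build a JSON-serializable response."""
    if "prompt" not in payload:
        return {"error": "missing 'prompt'"}
    answer = model_fn(payload.get("image", ""), payload["prompt"])
    return {"answer": answer}

class LlavaHandler(BaseHTTPRequestHandler):
    """POST a JSON body like {"image": "/path/to.jpg", "prompt": "..."}."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_payload(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve on port 8000:
#   HTTPServer(("0.0.0.0", 8000), LlavaHandler).serve_forever()
```

Keeping the request logic in a plain function (`handle_payload`) separate from the server class makes it easy to test without starting a server, and the same function could later back a Gradio or other WebUI front end.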