-
### What happened?
We can't use Vision with `ollama_chat`, but it works with `ollama`.
config.yaml
```yaml
- model_name: 'llava:7b'
  litellm_params:
    model: 'ollama_chat/llava:7b…
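```

Since the report says vision works through the plain `ollama` provider, one workaround sketch (assuming the standard LiteLLM proxy `config.yaml` layout; the entry below mirrors the truncated one above and is illustrative only) is to point the same entry at the `ollama/` prefix instead:

```yaml
model_list:
  - model_name: 'llava:7b'
    litellm_params:
      model: 'ollama/llava:7b'  # plain 'ollama' provider instead of 'ollama_chat'
```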
-
## Question 1
Hello. After running the script `llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_e1_gpu1_finetune.py`, I converted the saved model from the `.pth` format to the `xtuner` format; the resulting file structure is shown below.
Why does this model's structure differ from that of the open-source model files?
**xtuner/llava-llama-3-8b-v1_1…
-
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path, process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAUL…
-
Hi, thanks for your wonderful work.
I am struggling to use my LoRA-tuned model.
I carried out the following steps:
1. Fine-tuning with LoRA
- base model: Undi95/Meta-Llama-3-8B-Instruct-hf
- llama3 …
-
Will a llava 1.6 + llama3 70B model be supported?
-
Will a llava 1.6 + llama3 8B model be supported?
-
Currently, 3 chat templates are present: https://github.com/TanvirOnGH/vscode-ollama-modelfile/blob/dev/snippets/modelfile.json#L37-L104.
## TODO Templates
- [x] ChatML (ccd461ac30c116110a7adda50…
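For anyone tracking the list: the ChatML family wraps each turn in `<|im_start|>role … <|im_end|>` markers. A rough sketch of such a template in Ollama Modelfile syntax (the canonical snippets live in the file linked above; this one is illustrative, not copied from it):

```
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```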
-
I have 2 x A100 GPUs.
I have been training one task on GPU 1,
and I want to train another task on GPU 2 at the same time,
but I get the following error:
```
CUDA_VISIBLE_DEVICES=1 \
xtuner t…
-
Hi, dear authors:
llava-next seems to be really insightful exploratory work. Please kindly release the training and inference code as soon as possible; thank you very much.
-
Why doesn't moondream give me any results? Do I need to change some parameters? Other models (such as llava-llama3) worked very well with the same parameters. Thank you in advance.
![Untitled](http…