-
Hi, can I know the exact syntax? Mine still errors:
```python
model = dict(
    freeze_llm=True,
    freeze_visual_encoder=True,
    llm=dict(
        attn_implementation='eager',
        p…
```
-
I fine-tuned the llava-phi3 model with LoRA, but when I try to convert the resulting weights, an error occurs.
This is my command:
xtuner convert pth_to_hf ./my_configs/llava_phi3_mini_qlora_clip_vit_l…
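For reference, the general shape of the conversion command is below — a sketch based on typical xtuner usage; the config, checkpoint, and output paths are placeholders, not the actual files from the report above:

```shell
# General form: xtuner convert pth_to_hf <CONFIG> <PTH_CHECKPOINT> <SAVE_DIR>
# All three paths are placeholders for illustration.
xtuner convert pth_to_hf \
    ./my_configs/my_llava_config.py \
    ./work_dirs/my_llava_config/iter_5000.pth \
    ./iter_5000_hf
```

The config passed here should be the same one used for training, and the checkpoint is the `.pth` file written to the work directory.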
-
llava-phi-3-mini uses the Phi-3-instruct chat template. I think it is similar to the current llava-1.5 template, but with the Phi-3 instruct template instead of Llama 2.
format:
`<|user|>\nQuestion<|end|>\n<|assistant|>`
stop word is `<|end|>`
for…
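One way this template and stop word could be wired up locally is via an Ollama Modelfile — a sketch assuming the standard Phi-3 special tokens (`<|user|>`, `<|assistant|>`, `<|end|>`) and a placeholder GGUF path:

```shell
# Sketch: wrap the Phi-3 instruct template in an Ollama Modelfile.
# Assumes the standard Phi-3 special tokens; the FROM path is a
# placeholder for your converted GGUF file.
cat > Modelfile <<'EOF'
FROM ./llava-phi3.gguf
TEMPLATE """<|user|>
{{ .Prompt }}<|end|>
<|assistant|>
"""
PARAMETER stop <|end|>
EOF
# Then register it locally:
# ollama create llava-phi3-local -f Modelfile
```

Setting `PARAMETER stop <|end|>` makes generation halt at the template's end-of-turn token instead of running on.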
-
How can I convert a Llama LLM, the official LLaVA format, or the HuggingFace LLaVA format to GGUF?
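For a HuggingFace-format model, the usual llama.cpp route is sketched below (script name as in current llama.cpp checkouts; older ones use the hyphenated `convert-hf-to-gguf.py`, and all paths are placeholders):

```shell
# Assumes a llama.cpp checkout and a HuggingFace-format model directory.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the HF weights to GGUF (f16 here; paths are placeholders).
python convert_hf_to_gguf.py /path/to/hf_model \
    --outfile model-f16.gguf --outtype f16
```

For LLaVA-style models this covers only the language model; the vision encoder is handled separately by the scripts in llama.cpp's llava example.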
-
It would be nice if Ollama could pull multiple models in one go.
Today, I tried to run
```
ollama pull llava-phi3 llava-llama3 llama3-gradient phi3 moondream codeqwen
```…
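Until multi-model pulls are supported, a shell loop over the same list works as a stand-in, since `ollama pull` accepts one model per invocation:

```shell
# Workaround: pull the models one at a time in a loop.
for model in llava-phi3 llava-llama3 llama3-gradient phi3 moondream codeqwen; do
  ollama pull "$model"
done
```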
-
Hi,
I noticed that `llava_instruct_150k_zh.jsonl` is used in the [config](https://github.com/InternLM/xtuner/blob/main/xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336/finetune/lla…
-
I used https://github.com/hhaAndroid/xtuner/tree/refactor_llava to train a llava_1.6_phi3_8B model,
but it cannot be converted to the official LLaVA format using convert_xtuner_weights_to_llava.py.
Can …
-
Can it be developed for local use with Ollama?
-
I followed the instructions and got to `./iter_39620_hf` and `./iter_39620_llava`. I tried to convert them to gguf using the instructions [here ](https://github.com/ggerganov/llama.cpp/blob/master/exampl…
-
GPT-4o runs fine,
but when I switched to a local model, I got an error message:
EXCEPTION: 'function' object has no attribute 'name'
![image](https://github.com/onuratakan/gpt-compute…