-
Error! 500 Internal Server Error: Object of type bytes is not JSON serializable
-
### What is the issue?
When I run the **70b-instruct-q4_1** version of Llama 3.1, Ollama gives a buggy reply.
My sample request:
> ➜ ollama-tests curl http://localhost:11434/api/chat -d '{
…
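For reference, a minimal `/api/chat` request body looks like the sketch below (the model tag and message content here are illustrative, not the exact ones from the truncated request above):

```python
import json

# Hypothetical request payload for Ollama's /api/chat endpoint.
# The model tag and prompt are placeholders for illustration.
payload = {
    "model": "llama3.1:70b-instruct-q4_1",
    "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    "stream": False,
}

# This is the JSON string you would pass to `curl -d`.
print(json.dumps(payload))
```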
-
- [x] MiniCPM-Llama3-V-2_5
- [x] Florence 2
- [x] Phi-3-vision
- [x] Bunny
- [x] Dolphin-vision-72b
- [x] Llava Next
- [ ] Idefics 3
- [ ] Llava Interleave
- [ ] Llava onevision
- [ ] internlm…
-
With GPT-4o everything runs fine.
But when I switched to a local model, I got this error message:
EXCEPTION: 'function' object has no attribute 'name'
![image](https://github.com/onuratakan/gpt-compute…
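This traceback pattern usually means some code reads a `.name` attribute from a bare Python function (functions expose their name as `__name__`, not `name`). A minimal reproduction, with a hypothetical tool function:

```python
# Python functions have __name__, not .name, so tool-registration code
# that reads tool.name fails when handed a plain function object.
def get_weather(city):
    return f"Sunny in {city}"

print(get_weather.__name__)  # -> get_weather

try:
    get_weather.name
except AttributeError as exc:
    print(exc)  # 'function' object has no attribute 'name'
```

Wrapping the function in an object that defines `name`, or using `__name__`, avoids the exception.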
-
Hi, and first thank you for the superb plugin. It's just awesome!
Could you please provide a bit more documentation about the local LLM configuration?
Specifically, I mean what possible values there…
-
This model has a vision adapter: mmproj-model-f16.gguf.
I have never used a vision model in LM Studio, so I don't know whether this is a bug or something specific to this model,
because this model has strong …
-
Hi! Thank you again for this repo. Fine-tuning with Llama 3 works. However, when I try to merge the obtained LoRA weights using the `merge_lora_weights.py` script and compare the weights b…
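For context, LoRA merging conceptually folds the low-rank update back into the base weight as `W' = W + (alpha / r) * (B @ A)`. A minimal pure-Python sketch of that arithmetic (illustrative only, not the actual `merge_lora_weights.py` code):

```python
# Illustrative LoRA merge: W' = W + (alpha / r) * (B @ A),
# with plain nested lists standing in for weight tensors.
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*Y)] for row in X]

def merge_lora(W, A, B, alpha, r):
    scale = alpha / r
    delta = matmul(B, A)  # B: (d, r), A: (r, k) -> delta: (d, k)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # base weight (d=2, k=2)
A = [[1.0, 2.0]]              # LoRA A (r=1, k=2)
B = [[0.5], [0.25]]           # LoRA B (d=2, r=1)
print(merge_lora(W, A, B, alpha=2, r=1))  # -> [[2.0, 2.0], [0.5, 2.0]]
```

If the merged weights differ from the base only in layers the LoRA actually touched, the merge is behaving as expected.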
-
**Describe the bug**
What the bug is, and how to reproduce it (screenshots preferred).
I encountered an OOM error when trying to DPO MiniCPM-LLaMA-v-2.5 with my own dataset and 4 r…
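One reason DPO is OOM-prone is that it keeps both the policy model and a frozen reference model resident, roughly doubling the memory footprint relative to SFT. A small sketch of the standard DPO objective (symbols and the `beta` default here are the usual ones from the DPO formulation, shown for illustration):

```python
import math

# DPO loss for one preference pair:
# loss = -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l)))
# where *_w is the chosen response and *_l the rejected one.
def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy prefers the chosen answer more than the reference does,
# the margin is positive and the loss drops below -log(0.5).
print(dpo_loss(-1.0, -2.0, -1.5, -1.5, beta=1.0))  # -> ~0.3133
```

Because the reference log-probabilities are fixed, they can be precomputed offline to free the reference model's memory, which is one common OOM mitigation.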
-
In the project https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336, there is an example of how to convert a llava-llama3 model to HF format:
`
…
-
I found something strange when loading the model. It seems that the vision_tower was unfrozen (released) during training, but when the vision_tower is loaded, the gradient-updated parameters are not loaded…
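A quick way to verify this suspicion is to diff the two checkpoints' state dicts and see whether any `vision_tower` parameters actually changed. A hypothetical sketch, with plain floats standing in for tensors:

```python
# Hypothetical check: compare parameters between the pretrained and the
# fine-tuned checkpoint; keys whose values changed were actually updated.
def diff_state_dicts(sd_before, sd_after, tol=1e-6):
    return [k for k in sd_before
            if k in sd_after and abs(sd_before[k] - sd_after[k]) > tol]

before = {"vision_tower.layer0.weight": 0.10, "lm_head.weight": 1.0}
after  = {"vision_tower.layer0.weight": 0.10, "lm_head.weight": 0.9}

# vision_tower weights identical across checkpoints -> likely never
# saved/loaded despite being trained.
print(diff_state_dicts(before, after))  # -> ['lm_head.weight']
```

With real checkpoints the same idea applies per tensor (e.g. `torch.allclose` on each matching key); an empty diff for `vision_tower.*` keys would confirm the updated parameters were dropped.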