-
Error! 500 Internal Server Error: Object of type bytes is not JSON serializable
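This 500 typically surfaces when a raw `bytes` value (e.g. image data) reaches `json.dumps`, which cannot serialize it. A minimal sketch of the failure and one common workaround, using a hypothetical payload; base64-encoding before serialization is an assumption, not necessarily what this server should do:

```python
import base64
import json

payload = {"image": b"hello"}  # hypothetical raw bytes in a response payload

# json.dumps raises TypeError: Object of type bytes is not JSON serializable
try:
    json.dumps(payload)
except TypeError as e:
    print(e)

def encode_bytes(obj):
    """Fallback serializer: base64-encode any bytes value."""
    if isinstance(obj, bytes):
        return base64.b64encode(obj).decode("ascii")
    raise TypeError(f"Not serializable: {type(obj)}")

# With a `default` hook, serialization succeeds
print(json.dumps(payload, default=encode_bytes))  # → {"image": "aGVsbG8="}
```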
-
### System Info / 系統信息
{
  "version": 1,
  "context_length": 32000,
  "model_name": "InternVL2-Llama3-76B-AWQ",
  "model_lang": [
    "en",
    "zh"
  ],
  "model_ability"…
-
### What is the issue?
When I run the **70b-instruct-q4_1** version of Llama 3.1, Ollama gives a buggy reply.
My sample request:
> ➜ ollama-tests curl http://localhost:11434/api/chat -d '{
…
-
Hi, and first of all, thank you for the superb plugin. It's just awesome!
Could you please provide a little more documentation about the local LLM configuration?
Specifically, I mean what possible values there…
-
ERROR: ValueError: Target modules llm\..*layers\.\d+\.self_attn\.(q_proj|k_proj|v_proj|o_proj) not found in the base model. Please check the target modules and try again.
thrown at the line PeftModel…
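When `target_modules` is given as a string, PEFT treats it as a regex matched against each module name, so this error usually means the base model's modules are not named with the prefix the pattern expects. A minimal sketch with the stdlib `re` module and hypothetical module names (the real names would come from `model.named_modules()`):

```python
import re

# The failing pattern from the error message
pattern = r"llm\..*layers\.\d+\.self_attn\.(q_proj|k_proj|v_proj|o_proj)"

# Hypothetical module names to illustrate why matching can fail
modules = [
    "model.layers.0.self_attn.q_proj",      # no "llm." prefix -> no match
    "llm.model.layers.0.self_attn.q_proj",  # matches the pattern
]

for name in modules:
    # PEFT uses full-match semantics for string patterns
    print(name, bool(re.fullmatch(pattern, name)))
```

Printing the actual module names of the loaded base model and adjusting the pattern (or dropping the `llm.` prefix) is the usual way to resolve this.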
-
GPT-4o runs fine.
But when I switched to the local model, I got an error message:
EXCEPTION: 'function' object has no attribute 'name'
![image](https://github.com/onuratakan/gpt-compute…
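This exception usually means code that expects an object with a `.name` attribute (e.g. a tool or function wrapper) received a plain Python function instead; plain functions expose their name via `__name__`, not `.name`. A minimal sketch with a hypothetical function:

```python
def my_tool():
    """Hypothetical tool function passed where a wrapper object is expected."""
    return "ok"

# Plain functions have no .name attribute, so this raises AttributeError
try:
    my_tool.name
except AttributeError as e:
    print(e)  # same AttributeError as in the report

# The built-in way to get a function's name
print(my_tool.__name__)  # → my_tool
```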
-
- [x] MiniCPM-Llama3-V-2_5
- [x] Florence 2
- [x] Phi-3-vision
- [x] Bunny
- [x] Dolphin-vision-72b
- [x] Llava Next
- [ ] Idefics 3
- [ ] Llava Interleave
- [ ] Llava onevision
- [ ] internlm…
-
This model has a vision adapter: mmproj-model-f16.gguf.
I have never used any vision model in LM Studio, so I don't know whether this is a bug or something specific to this model,
because this model has strong …
-
Hi! Thank you again for this repo. Fine-tuning with Llama 3 works. However, when I merge the obtained LoRA weights using the `merge_lora_weights.py` script and compare the weights b…
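For comparing weights before and after merging, it helps to keep the merge arithmetic in mind: a LoRA merge amounts to W_merged = W + (alpha / r) · B @ A, so small differences can come from the scaling factor or dtype casts. A minimal sketch of that arithmetic with tiny hand-made matrices (plain lists; the shapes and values are illustrative, not from the script):

```python
def matmul(X, Y):
    """Naive matrix multiply for small nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))]
            for i in range(len(X))]

W = [[1.0, 0.0],
     [0.0, 1.0]]        # base weight (2x2), hypothetical
B = [[1.0], [0.0]]      # LoRA "up" matrix (2x1), rank r = 1
A = [[0.0, 2.0]]        # LoRA "down" matrix (1x2)
alpha, r = 2.0, 1
scaling = alpha / r

delta = matmul(B, A)    # low-rank update B @ A
W_merged = [[W[i][j] + scaling * delta[i][j] for j in range(2)]
            for i in range(2)]
print(W_merged)  # → [[1.0, 4.0], [0.0, 1.0]]
```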
-
I found something strange when loading the model. It seems that the vision_tower was unfrozen during training, but when loading the vision_tower, the gradient-updated parameters are not loaded…