-
TRL SFTTrainer supports LLaVA (Large Language and Vision Assistant), as described in [Vision Language Models Explained](https://huggingface.co/blog/vlms).
Is there any plan to rele…
-
Hi,
I got the following error during fine-tuning:
```
Traceback (most recent call last):
File "/home/kiosk1/.cache/pypoetry/virtualenvs/phi3-ivOQmoER-py3.9/lib/python3.9/site-packages/peft/peft_model.p…
```
-
### What is the issue?
```
C:\Users\18164>ollama run qwen:0.5b
pulling manifest
Error: pull model manifest: Get "https://ollama.com/token?nonce=pa9U-g8eXWKfTiK3NN_FdQ&scope=repository%!A(MISSING)librar…
```
-
### What happened?
I'm encountering an error while trying to run a model in LM Studio. Below are the details of the error:
```
{
  "title": "Failed to load model",
  "cause": "llama.cpp error: 'erro…
```
-
**Continued from: https://github.com/Blaizzy/fastmlx/issues/6**
---
When I tried this at the command line: `python -m mlx_vlm.chat_ui --model mlx-community/llava-1.5-7b-4bit`, I get the same cha…
-
I'm wondering what causes this error.
Do I have to set `--version phi3` during the pre-training stage? I used `--version plain` in the pre-training stage and `--version phi3` in the fine-tuning stage. Is this the correct s…
-
### 📚 The doc issue
![issue](https://github.com/InternLM/lmdeploy/assets/120365110/e96b9d5f-9d1f-4e77-a0e7-c31c7e5c70c3)
AssertionError: 'internlm2_5-7b-chat' is not supported. The supported models…
-
### System Info
- `transformers` version: 4.38.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.23.0
- Safetensors version: 0.4.3
- Accele…
-
Hi, I am trying to fine-tune a VLM, Phi-3-vision specifically. Following the LLaVA example, I created the following collator for it:
```
class VLMDataCollator:
    def __init__(self, processo…
-
Hi, I'm trying to perform SFT training with Phi-3-vision. I followed the LLaVA example here: https://github.com/huggingface/trl/blob/main/examples/scripts/vsft_llava.py. That, however, didn't work o…
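For anyone adapting the LLaVA collator to another VLM, the core step is masking non-text positions in the labels so the loss only covers real assistant tokens. Here is a minimal, library-free sketch of that idea in plain Python. The token IDs, the `collate` name, and the batch layout are placeholders for illustration; the real Phi-3-vision processor defines its own special tokens and returns tensors, not lists.

```python
# Hypothetical sketch of the label-masking step a VLM SFT collator performs.
# PAD_TOKEN_ID and IMAGE_TOKEN_ID are made-up values, not the real
# Phi-3-vision vocabulary; a real collator would read them from the processor.

PAD_TOKEN_ID = 0
IMAGE_TOKEN_ID = 32000  # placeholder for the model's image token
IGNORE_INDEX = -100     # positions the loss function should skip

def collate(examples):
    """Pad a batch of token-ID lists and mask pad/image tokens in the labels."""
    max_len = max(len(ids) for ids in examples)
    input_ids, labels, attention_mask = [], [], []
    for ids in examples:
        pad = max_len - len(ids)
        padded = ids + [PAD_TOKEN_ID] * pad
        input_ids.append(padded)
        attention_mask.append([1] * len(ids) + [0] * pad)
        # Loss is computed only on real text tokens: pad and image
        # placeholder positions are replaced with IGNORE_INDEX.
        labels.append([
            IGNORE_INDEX if t in (PAD_TOKEN_ID, IMAGE_TOKEN_ID) else t
            for t in padded
        ])
    return {"input_ids": input_ids,
            "labels": labels,
            "attention_mask": attention_mask}

batch = collate([[5, 32000, 7, 8], [9, 10]])
```

The `-100` value matters because PyTorch's cross-entropy loss skips targets equal to its default `ignore_index`; if image or pad positions are left unmasked, the model is trained to predict them and the loss becomes meaningless.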