-
I do like the simplicity of this project's bindings to llama.cpp.
Are there plans to add multimodal model support, like LLaVA and Phi-3 Vision? I can test the bindings for these.
Thanks,
Ash
AshD updated 3 months ago
-
The following error occurred while running the script finetune_moe.sh:
The model has moe layers, but None of the param groups are marked as MoE. Create a param group with 'moe' key set to True before…
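The error says the trainer scans the optimizer's param groups and requires at least one group with `'moe'` set to `True` whenever the model contains MoE layers. A minimal, framework-agnostic sketch of that check and the fix — the function name and parameter names here are illustrative, not the project's actual API:

```python
# Minimal sketch of the check behind the error: when the model has MoE
# layers, at least one optimizer param group must carry 'moe': True.
# has_moe_group and the group contents below are illustrative assumptions.

def has_moe_group(param_groups):
    """Mirror the validation that raises the error in the issue."""
    return any(group.get("moe", False) for group in param_groups)

# Wrong: one group covering all parameters -> triggers the error.
bad_groups = [{"params": ["dense.weight", "expert.weight"], "lr": 1e-4}]

# Fixed: expert parameters get their own group marked with 'moe': True.
good_groups = [
    {"params": ["dense.weight"], "lr": 1e-4},
    {"params": ["expert.weight"], "lr": 1e-4, "moe": True},
]

print(has_moe_group(bad_groups))   # False -> the trainer would raise
print(has_moe_group(good_groups))  # True  -> passes the check
```

PyTorch-style optimizers preserve extra keys like `'moe'` on a param group, which is how the trainer can read the flag back at setup time.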
-
See title.
A q4 version would be great as well.
https://huggingface.co/xtuner/llava-phi-3-mini-hf
-
I found that some VLMs are very sensitive to the prompt. For example, when I use **mlx-community/llava-1.5-7b-4bit**:
the image is:
![image](https://github.com/Blaizzy/mlx-vlm/assets/72635723/1ab52f9b-085a-47…
cmgzy updated 3 months ago
-
### Description
White text on a white background, which makes the tables difficult to read.
### (Optional:) Please add any files, screenshots, or other information here.
_No response_
### (Requi…
-
During fine-tuning, the work dirs were generated, but it fails at the end with:
raise FileNotFoundError(f"can't find {glob_pattern} files in directory '{checkpoint_dir}'")
FileNotFoundError: can't find *_optim_states.pt files in directory '/root/au…
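The failing lookup is a plain glob over the checkpoint directory: if no `*_optim_states.pt` optimizer shards were written (e.g. the run was interrupted before the final save completed), the conversion step raises exactly this error. A small sketch reproducing the check — the shard filename below is illustrative, as real names vary by setup:

```python
import glob
import os
import tempfile

def find_optim_states(checkpoint_dir, glob_pattern="*_optim_states.pt"):
    # Mirror the lookup that raises the error: DeepSpeed-style checkpoints
    # store optimizer shards as *_optim_states.pt files in the work dir.
    matches = glob.glob(os.path.join(checkpoint_dir, glob_pattern))
    if not matches:
        raise FileNotFoundError(
            f"can't find {glob_pattern} files in directory '{checkpoint_dir}'"
        )
    return matches

workdir = tempfile.mkdtemp()

# An empty work dir reproduces the failure from the issue.
err = None
try:
    find_optim_states(workdir)
except FileNotFoundError as e:
    err = str(e)
print(err)

# Once an optimizer shard exists, the same lookup succeeds.
# (The filename is an illustrative placeholder.)
open(os.path.join(workdir, "mp_rank_00_optim_states.pt"), "w").close()
found = find_optim_states(workdir)
print(found)
```

So the first thing to verify is whether the checkpoint directory actually contains the optimizer-state files, rather than only the model weights.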
-
Hey there, this project rocks! I saw the multimodal version of Phi-3 on Hugging Face. Does this project support it yet?
https://huggingface.co/xtuner/llava-phi-3-mini-gguf
-
Trained LLaVA-Phi3 with XTuner.
MMBench-Dev-EN evaluated with XTuner: 0.7096
MMBench-Dev-EN evaluated with VLMEvalKit: 0.549828
-
Here is the model GGUF link: https://huggingface.co/xtuner/llava-phi-3-mini-gguf
Here is the model HF link: https://huggingface.co/xtuner/llava-phi-3-mini-hf
I have been trying to add it manually b…
-
Hello, after QLoRA training, checkpoints were produced under:
```
ll output/lora_vision_test/
adapter_config.json
adapter_model.safetensors
checkpoint-178/
config.json
non_lora_state_dict.bin
…
```
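The `adapter_config.json` plus `adapter_model.safetensors` pair in a listing like this is the standard PEFT/LoRA adapter layout. A small sketch of how one might check which output directories are loadable adapters — the helper name `is_peft_adapter_dir` is hypothetical, and the file contents are placeholders:

```python
import json
import os
import tempfile

def is_peft_adapter_dir(path):
    # A directory is loadable as a PEFT/LoRA adapter when it contains
    # adapter_config.json plus adapter weights (safetensors or bin).
    has_config = os.path.isfile(os.path.join(path, "adapter_config.json"))
    has_weights = any(
        os.path.isfile(os.path.join(path, name))
        for name in ("adapter_model.safetensors", "adapter_model.bin")
    )
    return has_config and has_weights

# Recreate a layout like the one in the listing (contents are placeholders).
root = tempfile.mkdtemp()
with open(os.path.join(root, "adapter_config.json"), "w") as f:
    json.dump({"peft_type": "LORA"}, f)
open(os.path.join(root, "adapter_model.safetensors"), "w").close()
os.mkdir(os.path.join(root, "checkpoint-178"))  # intermediate step dir

print(is_peft_adapter_dir(root))                                  # True
print(is_peft_adapter_dir(os.path.join(root, "checkpoint-178")))  # False
```

With that layout in place, the adapter directory is what gets passed to the adapter-loading call, while intermediate `checkpoint-*` directories may not contain the adapter files themselves.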