-
I attempted a workaround, but the fine-tuned model's output doesn't look quite right. Has anyone found a working fix for this issue?
-
I ran ```pip install git+https://github.com/EvolvingLMMs-Lab/lmms-eval.git``` and successfully installed lmms-eval 0.2.1.
But I got the following error:
```ValueError: Attempted to load …
-
### System Info
I am reading the source code of llava-next, specifically the file `modeling_llava_next.py`.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- …
-
-
### Question
I want to fine-tune this model on our own custom image dataset, which consists mostly of design images, so that users can ask questions about an image.
At this…
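Before wiring up a trainer for a custom image-QA dataset, it can help to sanity-check the conversation formatting. Below is a minimal sketch assuming a vicuna-style llava-v1.6 checkpoint; the exact template varies by checkpoint, and `build_prompt` is a hypothetical helper, not part of any library:

```python
def build_prompt(question, answer=None):
    """Format one (question, answer) pair in the vicuna-style template used by
    several llava-v1.6 checkpoints; <image> marks where image features go."""
    prompt = f"USER: <image>\n{question} ASSISTANT:"
    if answer is not None:
        # During training, the gold answer follows the assistant tag.
        prompt += f" {answer}"
    return prompt

# Training example: question plus gold answer from the custom design dataset.
train_text = build_prompt("What layout grid does this design use?", "A 12-column grid.")
# Inference example: leave the answer empty so the model generates it.
infer_text = build_prompt("What layout grid does this design use?")
```

The same pair of strings can then be tokenized by the checkpoint's processor, with the answer tokens masked out of the loss for the inference-style prefix.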
-
Does LLaMA Factory support fine-tuning for the [llava next series (llava-v1.6)](https://huggingface.co/collections/llava-hf/llava-next-65f75c4afac77fd37dbbe6cf)?
-
I tried to get a sample video inference result using the command below, but it didn't work.
`bash scripts/video/demo/video_demo.sh lmms-lab/LLaVA-NeXT-Video-7B-DPO vicuna_v1 32 2 True data/sample.m…`
MSY99 updated 1 month ago
-
Hi team,
I am currently using LLaVA-NeXT-Video-DPO (7B) and I want to confirm if it uses the pre-trained CLIP ViT-L/14. During training, do you freeze the visual encoder in the same way as in llava…
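One way to answer the freezing question empirically is to inspect `requires_grad` on the vision-tower parameters of the loaded checkpoint. A framework-agnostic sketch of that check follows; the parameter names and the `SimpleNamespace` stand-ins are illustrative, and with the real model you would iterate `model.named_parameters()` and look for the `vision_tower` prefix used in the LLaVA codebase:

```python
from types import SimpleNamespace

def freeze_vision_tower(named_params):
    """Set requires_grad=False on every parameter whose name falls under the
    vision tower; return how many parameters were frozen."""
    frozen = 0
    for name, param in named_params:
        if "vision_tower" in name:
            param.requires_grad = False
            frozen += 1
    return frozen

# Stand-in parameters mimicking a model's named_parameters() output.
params = [
    ("model.vision_tower.encoder.weight", SimpleNamespace(requires_grad=True)),
    ("model.mm_projector.weight", SimpleNamespace(requires_grad=True)),
    ("lm_head.weight", SimpleNamespace(requires_grad=True)),
]
num_frozen = freeze_vision_tower(params)
```

Running the equivalent check on the actual model right after loading shows whether the released training recipe left the visual encoder frozen.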
-
Dataset: https://www.kaggle.com/c/siim-isic-melanoma-classification/data
Infos:
* https://discuss.huggingface.co/t/how-to-use-hugging-face-to-fine-tune-ollamas-local-model/86134/3
* https://github.com…
-
I'm getting errors when trying to perform inference on an interleave model I fine-tuned using LoRA with quantization.
Here's the code:
```
import requests
from PIL import Image
import torch
from tr…
```