-
I'm testing image processing using the ollama model llava, and it seems that it can't handle image processing right now. Are there any plans to implement image processing besides gpt-4-vision-preview? also when ad…
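For what it's worth, ollama's own HTTP API accepts base64-encoded images for llava, independent of whichever wrapper is used here; a minimal sketch (the default local endpoint and the image path are assumptions):
```python
import base64
import requests

# Base64-encode the image, as expected by ollama's /api/generate endpoint.
with open("example.jpg", "rb") as f:          # hypothetical image path
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://localhost:11434/api/generate",    # default local ollama endpoint
    json={
        "model": "llava",
        "prompt": "What is in this picture?",
        "images": [image_b64],
        "stream": False,
    },
)
print(resp.json()["response"])
```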
-
### Feature request
As per #28981, LLaVA is planned to receive `torch.compile` support. Given that LLaVA is composed of a vision tower and an LLM, both of which can be separately compiled…
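For illustration, a rough sketch of what compiling the two sub-modules independently could look like (the `vision_tower` / `language_model` attribute names follow `LlavaForConditionalGeneration` in transformers, and the checkpoint is just an example; the actual integration tracked in #28981 may look different):
```python
import torch
from transformers import LlavaForConditionalGeneration

# Example checkpoint; any LLaVA checkpoint with the same layout should do.
model = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16
).to("cuda")

# Compile the vision tower and the language model independently; the glue code
# (multimodal projector, input merging) stays eager.
model.vision_tower = torch.compile(model.vision_tower)
model.language_model = torch.compile(model.language_model)
```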
-
### Motivation
We would like to run offline inference with turbomind for the model [llava-interleave-qwen-7b-hf](https://huggingface.co/llava-hf/llava-interleave-qwen-7b-hf). Are there any reference examples we could follow? In particular: model parameter configuration, the model conversion/loading process, and pitfalls to watch out for when implementing model inference.
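Purely as a sketch of what we have in mind (not something we have confirmed works), offline inference through lmdeploy's `pipeline` API with the TurboMind backend would look roughly like this, assuming the checkpoint is supported by that backend:
```python
from lmdeploy import pipeline, TurbomindEngineConfig
from lmdeploy.vl import load_image

# Offline pipeline backed by TurboMind; session_len is an assumed example value.
pipe = pipeline(
    "llava-hf/llava-interleave-qwen-7b-hf",
    backend_config=TurbomindEngineConfig(session_len=8192),
)

image = load_image("example.jpg")  # hypothetical local image path
response = pipe(("Describe this image.", image))
print(response.text)
```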
### Related reso…
-
By slightly augmenting the code, I was trying to embed two images into the prompt, in the hope that the model would be able to compare them, but so far it looks like it always just sees…
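For reference, this is the kind of two-image prompt I have in mind, shown as a rough sketch with the Hugging Face processor for an interleave-capable checkpoint (the checkpoint name and image paths are just examples):
```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-interleave-qwen-7b-hf"   # example interleave-capable checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")

image_1 = Image.open("left.jpg")    # placeholder paths
image_2 = Image.open("right.jpg")

# One image placeholder per image, in the same order as the images list below.
conversation = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "image"},
        {"type": "text", "text": "What are the differences between these two images?"},
    ],
}]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

inputs = processor(text=prompt, images=[image_1, image_2], return_tensors="pt").to("cuda", torch.float16)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```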
-
Support finetuning LLaVA 1.6
-
# Trending repositories for C#
1. [**ExOK / Celeste64**](https://github.com/ExOK/Celeste64)
__A game made by the Celeste developers in a week(ish, closer to 2)__
170 star…
-
How to choose num_frames when finetuning?
When training, say, llava-next-video, is there a general rule of thumb for how to choose num_frames? Should it depend strictly on how many frames t…
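Not a definitive rule, but for context, a fixed num_frames is usually applied by spacing sample indices uniformly over the clip, so longer videos are simply sampled more sparsely. A small sketch (not taken from the llava-next-video code itself):
```python
import numpy as np

def sample_frame_indices(total_frames: int, num_frames: int) -> np.ndarray:
    """Pick num_frames indices spread uniformly over a clip with total_frames frames."""
    return np.linspace(0, total_frames - 1, num=num_frames).round().astype(int)

# e.g. a 30 s clip at 24 fps sampled down to 8 frames:
print(sample_frame_indices(total_frames=720, num_frames=8))
# [  0 103 205 308 411 514 616 719]
```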
-
Has the original poster run into a similar situation?
{'loss': 0.0, 'learning_rate': 0.001435114503816794, 'epoch': 0.02}
2%|██…
-
I pretrain with the following script:
```bash
torchrun --nproc_per_node="${NUM_GPUS}" --nnodes="${NNODES}" \
    "./llava/train/train_mem.py" \
    --model_name_or_path ${LLM_VERSION} \
    --version ${PROMPT_VERSI…
-
When I executed `bash example_scripts/example_image.sh`, I got the following error:
Traceback (most recent call last):
File "/root/paddlejob/workspace/env_run/lmms-finetune/train.py", line 202, …