-
After loading the model, the first inference runs fine, but running a second inference with the same parameters raises an error.
Also: is there an official technical discussion group where we could talk this over?
-
Could you please provide the requirements.txt for llava?
Thanks!
icrto updated
3 months ago
-
### Describe the issue
Currently, only inference with batch_size=1 is possible. If I understood correctly, these things should be changed to enable batch inference:
1. position_ids should be shifted, …
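For context on the position_ids point: with left-padded batches, positions must start counting from each sequence's first real token rather than from the first column of the batch. A minimal framework-agnostic sketch (plain Python lists standing in for tensors; the helper name is illustrative, not part of any library API):

```python
def build_position_ids(attention_mask):
    """Compute per-sequence position ids for a left-padded batch.

    attention_mask: list of rows, 1 for real tokens and 0 for padding.
    Real tokens get positions 0, 1, 2, ...; padding positions are set to 0.
    """
    position_ids = []
    for row in attention_mask:
        running = 0
        pos_row = []
        for m in row:
            if m:
                pos_row.append(running)
                running += 1
            else:
                pos_row.append(0)  # padding slot, position value unused
        position_ids.append(pos_row)
    return position_ids

# Left-padded row: real tokens still start at position 0.
print(build_position_ids([[0, 0, 1, 1, 1],
                          [1, 1, 1, 1, 1]]))
# [[0, 0, 0, 1, 2], [0, 1, 2, 3, 4]]
```

In a tensor framework the same shift is usually expressed as a cumulative sum over the attention mask minus one, with padding positions masked out.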
-
### Question
Thank you for your great work!
I am trying to fine-tune llava-v1.6-mistral-7b on the provided GQA dataset, using the script `finetune_task_lora.sh`. However, the loss doesn't decrea…
-
Hi,
I'm deeply inspired by your great work!
Could you please provide some information on the data used to evaluate the detailed captioning ability of the model (not the evaluation script, but th…
-
Is this going to be on the transformers library? Seems like it's going to be big.
-
Thanks for your great work. When will the training code be open-sourced?
-
https://llava-vl.github.io/blog/2024-01-30-llava-next/
Thanks for supporting llava1.6, but Ollama currently still seems unable to use the "Dynamic High Resolution" feature, which is important for ll…
-
### Feature request
We want to standardize the logic flow through Processor classes. Since processors can have different kwargs depending on the model and modality, we are adding a `TypedDict` fo…
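To illustrate the idea of grouping processor kwargs by modality with a `TypedDict`, here is a minimal sketch. All class and key names below are hypothetical examples for this pattern, not the actual transformers API:

```python
from typing import TypedDict

class TextKwargs(TypedDict, total=False):
    # Hypothetical text-side options; total=False makes every key optional.
    padding: bool
    truncation: bool
    max_length: int

class ImagesKwargs(TypedDict, total=False):
    # Hypothetical image-side options.
    do_resize: bool
    size: dict

class ProcessingKwargs(TypedDict, total=False):
    # One nested group per modality, so a processor can validate and route
    # kwargs without a flat, model-specific signature.
    text_kwargs: TextKwargs
    images_kwargs: ImagesKwargs

kwargs: ProcessingKwargs = {
    "text_kwargs": {"padding": True, "max_length": 128},
    "images_kwargs": {"do_resize": True},
}
print(kwargs["text_kwargs"]["max_length"])
```

Because `TypedDict` is purely a static-typing construct, type checkers can flag unknown or misplaced keys while the runtime objects stay plain dicts.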
-
### Feature request
This is a tracker issue for work on _interleaved_ in-and-out image-text generation.
There are now >= 5 open-source models that can do _interleaved_ image-text generation--and…