-
> @nkasmanoff @mzbac Great work! Do you plan to support [Llava-1.6-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) too? It does require this change from transformers: [hu…
-
Hi, thank you for your great work! Following your setup instructions, I ran the commands below.
```sh
conda env create -f cola.yml
cd ..
git lfs clone https://huggingface.co/OFA-Sys/ofa-large
pyt…
```
-
-
Fine-tuning GLM-4V on 6 × A800 80GB GPUs, with 7,500+ single-turn VQA examples.
During Map, the following error is raised:
Map (num_proc=6): 0%| |…
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this question already answered in the FAQ?
-
Hi,
I am trying to use vortex lattice methods for optimal control problems. I successfully used the AeroBuildup module for optimizing a trajectory, but optimizing the design with this module doesn'…
-
Hi, thank you for your wonderful work! I have a doubt regarding the rescaling of concept embeddings; I am not sure if I missed it, but I could not find it in the code. How exactly is the concept embed…
-
Great work! One quick question: in the paper you reproduced the results from Llava. Additionally, for the Prismatic model experiments, you fine-tune the whole LM. I'm wondering: did you try u…
-
### Feature request
Allow passing past key values during the forward pass of more than one token, similar to text-only large language models.
### Motivation
According to the documentation [here](…
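For context, the text-LM behavior this request refers to can be sketched with a toy KV cache in plain Python (illustrative only; `ToyKVCache` and `forward` are hypothetical names, not the transformers API — the point is that a second forward pass can submit more than one new token while reusing cached keys/values):

```python
# Toy sketch of the KV-cache pattern (assumed behavior, not real model code).
class ToyKVCache:
    """Accumulates per-token key/value entries across forward passes."""
    def __init__(self):
        self.keys = []    # one entry per past token
        self.values = []

    def extend(self, new_keys, new_values):
        self.keys.extend(new_keys)
        self.values.extend(new_values)


def forward(tokens, cache):
    """Process only the new tokens; attention spans cached + new entries.

    In a real model, keys/values would be projections of hidden states;
    here each "key"/"value" is just the token id, to keep the sketch small.
    """
    cache.extend(list(tokens), list(tokens))
    # The effective attention context now covers past + current tokens,
    # without recomputing anything for the already-cached prefix.
    return len(cache.keys)


cache = ToyKVCache()
forward([1, 2, 3], cache)     # prefill pass: 3 tokens enter the cache
ctx = forward([4, 5], cache)  # second pass submits MORE than one new token
# ctx == 5: the cache covers all five tokens
```

The feature request is essentially that vision-language model forward passes accept this same pattern (a multi-token second call with `past_key_values`), rather than being limited to one token at a time.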
-
For the example on this page: https://github.com/mit-han-lab/llm-awq/tree/main/tinychat#usage
You can easily run inference on images:
```sh
python vlm_demo_new.py \
    --model-path VILA1.5-13b-AWQ \
…
```