-
https://github.com/Meituan-AutoML/MobileVLM/issues/13 (same question as raised there);
1. The speed of encode_image;
encode_image_with_clip: image encoded in 20666.23 ms by CLIP (143.52 ms per image patch). This is MobileVLM's i… on a Snapdragon Gen 1
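A quick sanity check on the log line above: dividing the total encode time by the per-patch time recovers the number of image patches the run processed (the log itself does not state the count, so the patch total below is derived, not quoted):

```python
# Sanity-check the timing log: total encode time / per-patch time
# should equal the number of image patches that were encoded.
total_ms = 20666.23    # "image encoded in 20666.23 ms by CLIP"
per_patch_ms = 143.52  # "143.52 ms per image patch"

n_patches = round(total_ms / per_patch_ms)
print(n_patches)  # prints 144
```

So the device spent roughly 144 ms on each of ~144 patches, which is why the total lands above 20 seconds.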
-
```
python scripts/apply_delta.py \
    --base-model-path video_chatgpt-7B.bin \
    --target-model-path LLaVA-Lightning-7B-v1-1 \
    --delta-path liuhaotian/LLaVA-Lightning-7B-delta-v1-1
```
Why does this issue occur?
OSError: I…
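Setting the OSError aside, the core job of a delta-weights script like `apply_delta.py` is to reconstruct the target checkpoint as base + delta, parameter by parameter. A minimal sketch of that idea, using plain floats instead of the torch state dicts the real script loads (the parameter names below are illustrative):

```python
# Sketch of delta-weight application: target = base + delta for every
# parameter. The real apply_delta.py operates on torch state dicts from
# Hugging Face checkpoints; plain floats stand in for tensors here.

def apply_delta(base: dict, delta: dict) -> dict:
    """Reconstruct target weights by adding delta weights to base weights."""
    missing = set(delta) - set(base)
    if missing:
        raise KeyError(f"delta has parameters not present in base: {missing}")
    return {name: base[name] + delta[name] for name in delta}

base = {"layer.0.weight": 0.50, "layer.0.bias": -0.10}
delta = {"layer.0.weight": 0.25, "layer.0.bias": 0.05}
target = apply_delta(base, delta)
print(target["layer.0.weight"])  # prints 0.75
```

One practical implication: the base model path must point at a checkpoint the script can actually open, which is usually where an OSError like the one above originates.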
-
System:
```
> uname -m && cat /etc/*release
x86_64
DISTRIB_ID=Pop
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Pop!_OS 22.04 LTS"
NAME="Pop!_OS"
VERSION="22.04 LTS"
ID…
-
Hello, I tried to access the 'llama 2' and 'mistral' models to build a local open-source LLM chatbot. However, perhaps because I accessed your website too often during debugging, I met this error: 'ConnectionError: H…
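For connection errors caused by repeatedly re-downloading during debugging, a common mitigation is to fetch the model once and then run in offline mode. The environment variables below are real Hugging Face switches; the commented model ID is only an example:

```python
import os

# After one successful download, tell huggingface_hub / transformers to
# resolve everything from the local cache and never hit the network.
# These must be set before the libraries perform any hub lookups.
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: no network calls
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: cache-only loading

# Subsequent loads then read from the local cache, e.g.:
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
```

This avoids hammering the hub on every debug iteration, which is typically what triggers the ConnectionError.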
-
Hello, I followed the whole setup guide: setting up Docker, cloning the repo, and executing the install script.
(I'm using the NVIDIA Jetson AGX Orin.)
I want to adjust the Dockerfile in jetson-contain…
-
### Question
Your work is amazing and I want to use it in my project. I need to retrain the projector on a different CLIP; however, when I ran `pretrain.sh`, I encountered some problems: `You are …
-
I evaluated LLaVA-1.5-7b on the MMVP dataset and found that its accuracy is 60.0%, which is significantly higher than the 24.7% reported in Table 3.
Upon comparing the evaluation code, I discovered t…
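One plausible source of such a gap (an assumption on my part; the truncated text above does not confirm which discrepancy was found) is pairwise scoring: MMVP groups its questions into pairs, and the official protocol credits a pair only when both answers are correct, so per-question accuracy always sits at or above the pairwise number. A small illustration with made-up results:

```python
# Illustrative comparison of per-question vs. pairwise scoring.
# (Synthetic data; not actual MMVP results.)

def per_question_accuracy(results):
    """Fraction of individual questions answered correctly."""
    flat = [ok for pair in results for ok in pair]
    return sum(flat) / len(flat)

def pairwise_accuracy(results):
    """Fraction of pairs in which BOTH questions were answered correctly."""
    return sum(all(pair) for pair in results) / len(results)

# Each tuple is (q1 correct?, q2 correct?) for one question pair.
results = [(True, True), (True, False), (False, True), (True, False)]
print(per_question_accuracy(results))  # prints 0.625
print(pairwise_accuracy(results))      # prints 0.25
```

Scoring each question independently would therefore inflate the metric relative to the paper's table even with identical model outputs.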
-
### Describe the issue
Issue: Multiple GPU inference is broken with LLaVA 1.6. Same command with model liuhaotian/llava-v1.5-13b works fine.
Command:
```
CUDA_VISIBLE_DEVICES=0,1 python -m llava.se…
```
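When multi-GPU loading breaks for one model but not another, the difference often comes down to how the layer-to-device map is constructed for the new architecture. A purely illustrative sketch of the kind of map that `device_map`-style loading builds (this is not LLaVA's actual loading code; the module names are assumptions):

```python
# Illustrative layer-to-device map: spread N transformer layers across
# the visible GPUs in contiguous blocks, as device_map="auto"-style
# loaders roughly do. Not LLaVA's real implementation.

def balanced_device_map(num_layers: int, num_gpus: int) -> dict:
    """Assign each layer index to a GPU id, filling GPUs in blocks."""
    per_gpu = -(-num_layers // num_gpus)  # ceiling division
    return {f"model.layers.{i}": i // per_gpu for i in range(num_layers)}

dm = balanced_device_map(num_layers=40, num_gpus=2)
print(dm["model.layers.0"], dm["model.layers.39"])  # prints: 0 1
```

A 1.6-specific module (e.g. a new vision tower or projector name) missing from such a map is the kind of mismatch that makes one model work and the other fail under the same command.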
-
Thanks for the latest updates and improvements!
I was looking into the different llava example notebooks and the [VILA example](https://github.com/mit-han-lab/llm-awq/blob/main/scripts/vila_example.s…
-
Hi,
I'm trying to run demo_trt_llm.
Followed demo_trt_llm/README.md exactly.
Command:
```
MODEL_NAME='vila1.5-2.7b'
python $VILA_ROOT/demo_trt_llm/convert_checkpoint.py \
    --model_dir models…
```