-
LoRA+base is working well
![image](https://github.com/mbzuai-oryx/LLaVA-pp/assets/15274284/ccec0900-7db0-4729-9ab4-3c5f68e0f304)
![image](https://github.com/mbzuai-oryx/LLaVA-pp/assets/15274284/7d12…
-
# URL
- https://arxiv.org/abs/2405.02246
# Affiliations
- Hugo Laurençon, N/A
- Léo Tronchon, N/A
- Matthieu Cord, N/A
- Victor Sanh, N/A
# Abstract
- The growing interest in vision-language…
-
Hi, I am getting this issue. I am running it on the following system; I followed the instructions given in the README.
Windows 11 Home
Intel Core i9
32GB RAM
(I tried with Anaconda and Python 3.11.9 and…
-
### Question
In the process of scaling up the input image size within `clip_encoder.py`, the following adjustments have been made:
```
def load_model(self, device_map=None):
    if sel…
```
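For context on why a larger input needs changes here: CLIP's position embeddings are learned for a fixed patch grid (24×24 for a 336 px input with 14 px patches), so a bigger input produces more patches than there are embeddings. A common fix is to interpolate them to the new grid; below is a minimal standalone sketch of that idea with assumed tensor shapes, not the repo's actual code.

```
import torch
import torch.nn.functional as F

def interpolate_pos_embed(pos_embed, new_grid):
    # pos_embed: (1, 1 + old_grid**2, dim), class token first (assumed layout).
    cls_tok, patch_pos = pos_embed[:, :1], pos_embed[:, 1:]
    dim = pos_embed.shape[-1]
    old_grid = int(patch_pos.shape[1] ** 0.5)
    # (1, N, dim) -> (1, dim, old_grid, old_grid) so F.interpolate can resize in 2D.
    patch_pos = patch_pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    patch_pos = F.interpolate(patch_pos, size=(new_grid, new_grid),
                              mode="bicubic", align_corners=False)
    # Back to (1, new_grid**2, dim) and re-attach the class token.
    patch_pos = patch_pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)
    return torch.cat([cls_tok, patch_pos], dim=1)

# e.g. 336 px -> 448 px with 14 px patches: grid goes from 24 to 32.
```

Recent transformers versions also expose an `interpolate_pos_encoding` forward argument on some vision models, which may cover this without manual surgery; check the docs for your version.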
-
I fine-tuned LLaVA-OneVision from lmms-lab/llava-onevision-qwen2-7b-ov with the config `--lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5` and have a checkpoint saved; how can I use thi…
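In case it helps anyone landing here: one common route with a LoRA checkpoint is to merge the adapter back into the base weights. A minimal PEFT sketch under that assumption follows (placeholder paths; note that LLaVA-format checkpoints usually load through the repo's own `load_pretrained_model` with a model-base argument rather than plain transformers, so treat this only as the general pattern).

```
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder paths -- point these at the actual base model and LoRA output dir.
base = AutoModelForCausalLM.from_pretrained(
    "path/to/base-model", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/lora-checkpoint")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("path/to/merged-model")
```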
-
When I follow the link below, I get a request to download clip-vit-large-patch14-336. Should I download it separately for offline use? And how do I use it after downloading it?
https://github.com/…
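One common approach, sketched with the standard huggingface_hub/transformers APIs (which config key to point at the local path depends on the codebase, so the last lines are an assumption):

```
from huggingface_hub import snapshot_download
from transformers import CLIPImageProcessor, CLIPVisionModel

# One-time download (needs network); returns the local snapshot directory.
local_dir = snapshot_download("openai/clip-vit-large-patch14-336")

# Afterwards, load from disk instead of the hub id, e.g. by setting the
# vision-tower path in the config to local_dir.
vision_tower = CLIPVisionModel.from_pretrained(local_dir)
image_processor = CLIPImageProcessor.from_pretrained(local_dir)
```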
-
Hi, thanks again for the amazing work here! When I try to fine-tune the model with our sample data, I was able to initialize some parts of the training, but I got the following issue related to "cpu "i…
-
```
G:\OmniGen_v1>cd OmniGen
G:\OmniGen_v1\OmniGen>call venv\Scripts\activate.bat
A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
…
-
# BLIP
* [paper](https://arxiv.org/abs/2201.12086)
* [code](https://github.com/salesforce/BLIP)
* [blog](https://blog.salesforceairesearch.com/blip-bootstrapping-language-image-pretraining/)
* i…
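A minimal captioning sketch via the Hugging Face port (assumes the transformers library and the Salesforce/blip-image-captioning-base checkpoint; the original repo above ships its own API):

```
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

# Any RGB image works; this fetches a sample from COCO.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```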
-
Dear author, I have some questions about your demo. Something went wrong during inference.
Here is the log:
[nltk_data] Error loading punkt:
[nltk_data] Error loading averaged_perceptron_tagger:
…
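For anyone hitting the same log: these nltk_data errors usually mean the punkt and tagger resources were never downloaded (or the machine is offline). A minimal sketch of fetching them manually:

```
import nltk

# Downloads go to the default nltk_data directory; requires network access.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
```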