-
Thanks for your great work!
1. Have you tested _instructblip-flant5_ based on _ChEF_? For the same task, why are the flant5 results quite different from vicuna's? For example, with "SRC/config/ChEF/sc…
-
Firstly, thank you for your contributions to the multi-modal large language model (MLLM) research with MiniGPT-5. I'm experiencing an issue while testing the model's image comprehension capabilities.
…
-
### Describe the issue
Issue: As shown in this [issue](https://github.com/haotian-liu/LLaVA/issues/62), the training loss at convergence should be lower than 2 for `llava-vicuna-chat-hf-pretrain`. Ho…
-
1. There is an error in "The text-only loss corresponds to training only on training only RefinedWeb" — the phrase "training only" is duplicated.
2. Which dataset is used for the "text-only loss, w/o RefinedWeb" setting?
3. Why…
-
Sorry to bother you during a busy time; I am in a hurry to run alpha-clip with LLaVA-7b-clip.
I followed the instructions in [here](https://github.com/SunzeY/AlphaCLIP/issues/11#issuecomment-186264…
-
I used LoRA to fine-tune on my own dataset, but the model only replies with the content I trained it on and no longer knows any other common-sense content, while Bunny-v1_0-2B-zh is fine.
Do you have any train…
-
Can you provide an example of how to use `accelerate` with the [Hugging Face trainer](https://huggingface.co/transformers/master/main_classes/trainer.html#id1)?
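For reference, the Hugging Face `Trainer` handles distributed setup internally, so the usual pattern is that the training script itself needs no changes and `accelerate` is used only as the launcher. A minimal sketch (the script name `train.py` is a placeholder for any script that builds a `transformers.Trainer` and calls `trainer.train()`):

```shell
# One-time interactive setup: choose number of machines/GPUs,
# mixed precision, etc. The answers are saved to a config file.
accelerate config

# Launch the unchanged Trainer script across the configured devices.
accelerate launch train.py
```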
-
### feature
Hi Haotian,
A great pleasure to ask. Congratulations to this nice and solid work.
We are a team from NTU S-Lab, working on image/video quality assessment, and we recently proposed a benc…
-
My running command is `python SNARE_probing.py --device cuda:0 --model_name blip2 --download --dataset COCO_Semantic_Structure`
Please help me with getting the right evaluation of COCO_Semantic_St…
-
### Describe the issue
Issue: `pretrain.sh` trains successfully, but with `finetune_full_schedule.sh` the process memory usage is exceeded on a V100. Is there any way to solve this problem?
Command:
```…