mbzuai-oryx / LLaVA-pp
🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)
813 stars · 61 forks
Issues
How can I continue training on the pre-trained llava-llama3 model instead of training llama3 directly with my own data?
#35 opened 1 month ago by ganliqiang · 1 comment

Phi 3 Mini 128K leads to Tokenization Mismatch
#34 opened 3 months ago by ritwickchaudhry · 2 comments
Integrate Gemma 9B with SigLIP 512 px patch
#33 opened 4 months ago by Jayantverma2 · 0 comments
Fixed issue with merging pretrain with Phi-3
#32 opened 4 months ago by alanrbtx · 0 comments
ValueError: Trying to set a tensor of shape torch.Size([128257, 4096]) in "weight" (which has shape torch.Size([128256, 4096])), this looks incorrect.
#31 opened 5 months ago by basteran · 1 comment
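Shape mismatches like the one in #31 typically mean a token was added to the tokenizer (128257 entries) while the checkpoint's embedding matrix still has the original 128256 rows. Below is a minimal pure-PyTorch sketch of the usual remedy, resizing the embedding table before loading; the tiny dimensions are illustrative stand-ins, not the real 128256 × 4096 LLaMA-3 table.

```python
import torch

def resize_embedding_rows(weight: torch.Tensor, new_vocab: int) -> torch.Tensor:
    """Return a copy of `weight` with `new_vocab` rows, preserving existing rows.

    Newly added rows are initialized to the mean of the old embeddings,
    a common heuristic for freshly added special tokens.
    """
    old_vocab, dim = weight.shape
    resized = weight.mean(dim=0, keepdim=True).repeat(new_vocab, 1)
    n = min(old_vocab, new_vocab)
    resized[:n] = weight[:n]
    return resized

# Toy stand-in for the 128256 x 4096 embedding table.
old = torch.randn(8, 4)
new = resize_embedding_rows(old, 9)   # one extra special token
assert new.shape == (9, 4)
assert torch.equal(new[:8], old)
```

With Hugging Face transformers the equivalent step is calling `model.resize_token_embeddings(len(tokenizer))` before saving or merging, so the checkpoint and tokenizer agree on vocabulary size.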
How much memory is needed for fine-tuning llava-llama3?
#30 opened 6 months ago by simplelifetime · 0 comments
Finetuning with LoRA: output never ends
#29 opened 6 months ago by gyupro · 5 comments
Total Parameters: 0
#28 closed 3 months ago by believewhat · 0 comments

convert_to_hf
#27 opened 6 months ago by Jayantverma2 · 0 comments

Saved weights during tuning with LoRA
#26 opened 6 months ago by fangruizhu · 0 comments
Error when merging LoRA weights
#25 opened 6 months ago by Luo-Z13 · 3 comments
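For context on what merging LoRA weights (#25) actually computes: the low-rank update is folded back into the frozen base weight as W + (alpha/r)·B·A. A self-contained sketch of that arithmetic follows; the dimensions are made up for illustration, and in practice peft's `merge_and_unload()` performs this step per adapted layer.

```python
import torch

# Illustrative dimensions: output dim, input dim, LoRA rank, scaling alpha.
d_out, d_in, r, alpha = 16, 32, 4, 8

W = torch.randn(d_out, d_in)   # frozen base weight
A = torch.randn(r, d_in)       # LoRA down-projection
B = torch.zeros(d_out, r)      # LoRA up-projection (zero-initialized at start of training)

# Merging folds the scaled low-rank update into the base weight.
W_merged = W + (alpha / r) * (B @ A)

# With B still at its zero init, merging is a no-op.
assert torch.equal(W_merged, W)
```

A merge error like #25 usually means the base checkpoint and the adapter disagree on layer names or shapes, so the per-layer addition above cannot be applied.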
Finetune error about model size
#24 closed 6 months ago by Skylight-Lark · 5 comments
Inference error
#23 closed 6 months ago by TuuSiwei · 1 comment

'PreTrainedTokenizerFast' object has no attribute 'legacy'
#22 closed 6 months ago by TuuSiwei · 3 comments
Could you please support DeepSeek-V2?
#21 opened 6 months ago by thesby · 0 comments
What is S2 in the vision tower loading, and how does it affect training?
#20 opened 6 months ago by Jayantverma2 · 1 comment
Training Issue
#19 opened 6 months ago by DevonPeroutky · 1 comment
Can use LoRA + base model, but merging LoRA + base gives an error
#18 opened 6 months ago by hellangleZ · 4 comments
Tokenization mismatch in Phi-3 during the finetuning process
#17 closed 6 months ago by hellangleZ · 5 comments
Update README.md
#16 opened 6 months ago by jbn · 0 comments
LLaVA 1.6?
#15 opened 6 months ago by ElliottDyson · 4 comments
Tokenization mismatch in LLaVA-LLaMA-3
#14 closed 6 months ago by Luo-Z13 · 2 comments
Exactly the same as our study! Interesting!
#13 closed 6 months ago by mu-cai · 2 comments
Installation Issue
#12 opened 6 months ago by orrzohar · 2 comments

zero3.json is missing
#11 closed 6 months ago by matbee-eth · 1 comment

Can't load the HF model
#10 closed 6 months ago by vfragoso · 1 comment

S2 finetuning
#9 closed 6 months ago by xmu-xiaoma666 · 1 comment
Train with llava-llama3
#8 closed 6 months ago by hellangleZ · 9 comments
Using Phi-3 with LLaVA, but some fields of the Phi-3 network are not supported
#7 closed 6 months ago by hellangleZ · 21 comments
Inference score with ScienceQA
#6 closed 6 months ago by xmu-xiaoma666 · 0 comments
What is the learning rate when finetuning all LLM parameters during the SFT stage?
#5 closed 6 months ago by xmu-xiaoma666 · 2 comments
app
#4 closed 6 months ago by chandlergis · 1 comment
Video support?
#3 closed 6 months ago by RaulKite · 1 comment
Clarification of ScienceQA score
#2 closed 6 months ago by Isaachhh · 2 comments

Inference examples
#1 opened 7 months ago by SkalskiP · 12 comments