BAAI-DCAI / Bunny
A family of lightweight multimodal models.
Apache License 2.0 · 865 stars · 65 forks
Issues (newest first)
#124 · about images of pretrain data · by nicedoctor, opened 2 hours ago · 1 comment
#123 · Continuous Fine-tuning Bunny 1.1 4B · by ChenFicha, opened 17 hours ago · 1 comment
#122 · Bunny v1.1 Llama 3 8b GGUF support/release? · by SamuelSwartzberg, opened 2 days ago · 1 comment
#121 · Inference acceleration, can the trained model use some inference framework? The code comes from the llava architecture. Can it be integrated into inference frameworks such as sglang or lmdeploy similar to llava? · by zhangqingwu, opened 1 week ago · 0 comments
#120 · Multi-images in 1 prompt · by motcapbovit, opened 2 weeks ago · 1 comment
#119 · About pretrain data · by kid369, opened 3 weeks ago · 4 comments
#118 · Download the model to local inference, but still link to huggingface.co · by Kyousoso, closed 1 hour ago · 3 comments
#117 · Support batch inference · by LiQiiiii, closed 3 weeks ago · 0 comments
#116 · Support batch inference · by LiQiiiii, closed 3 weeks ago · 0 comments
#115 · Convert Bunny-v1.0-3B to GGUF · by q104769424, opened 4 weeks ago · 5 comments
#114 · Network Error due to High Traffic Error Message · by kyuewang17, closed 4 weeks ago · 1 comment
#113 · Support Batch Inference · by LiQiiiii, closed 4 weeks ago · 0 comments
#112 · Training the model throws an error after quantization · by dingtine, opened 1 month ago · 1 comment
#111 · tokenization mismatch when fine-tuning bunny-phi3 · by simplelifetime, closed 1 month ago · 7 comments
#110 · S2-Wrapper Strategy Training Resulting in Tensor Shape Mismatch · by dingtine, closed 1 month ago · 3 comments
#109 · Pre_train and SFT · by mynamelxy, closed 1 hour ago · 2 comments
#108 · Question on the Role of Text Encoder in Bunny MLLM Architecture · by codefanw, closed 1 month ago · 2 comments
#107 · Encountered an error during model inference after using qwen2 · by dingtine, closed 1 month ago · 2 comments
#106 · Why do you modify the function `prepare_inputs_for_generation` of the LLM? · by linhaojia13, closed 1 month ago · 2 comments
#105 · KeyError: 'bunny-phi' · by jui0616, closed 1 hour ago · 3 comments
#104 · Smaller qwen2 model? · by R3xpook, opened 1 month ago · 1 comment
#103 · Questions about building the fine-tuning dataset · by chenzhu005774, closed 1 month ago · 1 comment
#102 · Error during fine-tuning · by chenzhu005774, closed 1 month ago · 5 comments
#101 · NotImplementedError: Cannot copy out of meta tensor; no data! · by A-Akhil, closed 1 hour ago · 2 comments
#100 · Matrix dimension mismatch when launching the web demo after fine-tuning · by htesd, closed 1 month ago · 2 comments
#99 · Modifying or fusing the vision module · by Why0912, closed 3 weeks ago · 4 comments
#98 · Zero3 error for pretrain · by zhww, opened 2 months ago · 4 comments
#97 · Model only responds with fine-tuned answers · by tamdan17, closed 1 hour ago · 2 comments
#96 · about the training strategy for Llama-3-8B · by Jancsi9981, closed 3 weeks ago · 2 comments
#95 · about training · by Tengfei000, closed 1 month ago · 6 comments
#94 · Support for Qwen2 · by Gary2018X, opened 2 months ago · 0 comments
#93 · Batch inference · by mtsysin, closed 1 hour ago · 3 comments
#92 · convert raw data to training format · by acul3, closed 2 months ago · 4 comments
#91 · why use deepspeed zero2 for pretrain but use zero3 for finetune? · by double-fire-0, closed 2 months ago · 2 comments
#90 · about finetune train · by ZuyongWu, closed 1 month ago · 2 comments
#89 · question about s2 · by zezeze97, closed 2 months ago · 5 comments
#88 · How to Evaluate a Fine-Tuned Model · by HuBocheng, closed 2 months ago · 1 comment
#87 · tokenization mismatch · by Wondersui, closed 3 months ago · 1 comment
#86 · please use torch.amp instead of apex directly · by dragen1860, closed 3 months ago · 1 comment
#85 · Please add detailed steps on How to convert the Bunny Family of Models to GGUF? · by criminact, closed 2 months ago · 8 comments
#84 · How to modify `preprocess_bunny` for `qwen-1.5-1.8b-chat` · by linhaojia13, closed 1 hour ago · 7 comments
#83 · GPU adaptation · by chenzhu005774, closed 2 months ago · 1 comment
#82 · On the issue of Continuous Fine-tuning · by Gary2018X, closed 1 month ago · 20 comments
#81 · Release for Bunny-Llama3 LoRA Weights · by cjfcsjt, closed 3 weeks ago · 2 comments
#80 · convert to gguf for llama.cpp · by zhaohengxing, closed 2 months ago · 5 comments
#79 · interlaced information between images and text and multiple images · by zhangqingwu, opened 3 months ago · 2 comments
#78 · The accuracy of model test is poor · by aoji0606, closed 3 months ago · 12 comments
#77 · Fusing DINOv2 and SigLIP · by CanvaChen, closed 3 months ago · 1 comment
#76 · fine-tune · by zhangqingwu, closed 3 months ago · 2 comments
#75 · Tokenization mismatch · by swhoosh, closed 1 month ago · 21 comments