AILab-CVC / SEED
Official implementation of SEED-LLaMA (ICLR 2024).
https://ailab-cvc.github.io/seed
576 stars · 31 forks
Issues (sorted by newest)
#53 · Request for Instruction Tuning · URRealHero · opened 2 months ago · 0 comments
#52 · Pre-training data · cxjtju · opened 3 months ago · 0 comments
#51 · How to run seed-2 image reconstruction inference? · zhangqingwu · opened 3 months ago · 0 comments
#50 · Inquiry Regarding the Evaluation Details of SEED-LLaMA · nth2000 · opened 3 months ago · 0 comments
#49 · hydra.errors.InstantiationException: Error locating target './models.seed_llama_tokenizer.SeedLlamaTokenizer.from_pretrained', set env var HYDRA_FULL_ERROR=1 to see chained exception · cxjtju · closed 3 months ago · 1 comment
#48 · Instruction tuning data example · cxjtju · opened 3 months ago · 0 comments
#47 · Performance in Chinese · cxjtju · opened 3 months ago · 0 comments
#46 · Will you consider trying the new Magvit2 image tokenizer? · lucasjinreal · opened 5 months ago · 0 comments
#45 · Maximum number of tokens · Caixy1113 · opened 5 months ago · 0 comments
#44 · Clear Training Instructions · eslambakr · opened 5 months ago · 0 comments
#43 · Evaluating the quality of codebooks using image-text/text-image retrieval as a proxy · SxJyJay · opened 5 months ago · 0 comments
#42 · Where are the core code lines that transform an image into discrete tokens? · guotong1988 · closed 6 months ago · 1 comment
#41 · How to train the model on my own dataset? · zhi-xuan-chen · opened 6 months ago · 0 comments
#40 · Question about T2I Recall@1 on COCO · leexinhao · opened 6 months ago · 0 comments
#39 · Instruction Tuning hyperparameters for SEED-LLaMA · dhdbsrlw · opened 7 months ago · 0 comments
#38 · Does the model have Chinese OCR ability? · luohao123 · opened 7 months ago · 0 comments
#37 · sh train_scripts/causal_qformer.sh is not working! · leedaehan-kev · opened 7 months ago · 1 comment
#36 · How long does it take to train the SEED tokenizer? · ys-zong · opened 7 months ago · 0 comments
#35 · Codebook Training Epochs · Revliter · opened 7 months ago · 1 comment
#34 · Can SEED process more than 2 images at once? · qwqwq1445 · opened 7 months ago · 0 comments
#33 · Reproduce SEED-LLaMA evaluation · hyomin14 · opened 7 months ago · 0 comments
#32 · Missing Multimodal Pretraining step · shubhamgarg21 · opened 8 months ago · 1 comment
#31 · Fixes for issues in generating pre-training data by converting images into discrete tokens · shubhamgarg21 · opened 8 months ago · 0 comments
#30 · train_scripts/causal_qformer.sh not working · shubhamgarg21 · opened 8 months ago · 1 comment
#29 · install.sh not working · shubhamgarg21 · closed 8 months ago · 1 comment
#28 · Difference between 'blocks' and 'blocks_for_image' · zheedong · closed 2 months ago · 2 comments
#27 · Training Data of Tokenizer · zheedong · opened 8 months ago · 2 comments
#26 · How to obtain the training data? · APiaoG · opened 8 months ago · 1 comment
#25 · Train data · APiaoG · opened 8 months ago · 1 comment
#24 · Pretrained LLM version · hyomin14 · closed 8 months ago · 1 comment
#23 · Stage I Contrastive Learning: What is the 'final' causal embedding? · zheedong · closed 8 months ago · 1 comment
#22 · Training code · koda-11 · opened 9 months ago · 1 comment
#21 · About adding the quantized image tokens to the pretrained language tokenizer · Jiushanhuadao · opened 9 months ago · 1 comment
#20 · Demo Not Working · yzeng58 · opened 10 months ago · 1 comment
#19 · Question on how task27 generates images · JunZhan2000 · closed 10 months ago · 0 comments
#18 · Questions about EVA-CLIP-G used in SEED · Haochen-Wang409 · closed 8 months ago · 2 comments
#17 · How to force the model to generate an image? · haochuan-li · opened 11 months ago · 2 comments
#16 · Hyperparameters for training the SEED Tokenizer · Cheolhyun-Mun · opened 11 months ago · 1 comment
#15 · Evaluation Results · chancharikmitra · opened 11 months ago · 0 comments
#14 · Training Code · ChangeNext · opened 11 months ago · 1 comment
#13 · What is the frozen text/image encoder? · zheedong · closed 9 months ago · 3 comments
#12 · Training code for SEED-LLaMA · shubhamgarg21 · opened 11 months ago · 1 comment
#11 · More explanation about pilot experiments · zheedong · closed 11 months ago · 1 comment
#10 · Fail on Inference · SihengLi99 · closed 11 months ago · 2 comments
#9 · CIDEr reproduction hyperparameters · zheedong · closed 11 months ago · 2 comments
#8 · About Gradio Version · MajorDavidZhang · closed 11 months ago · 1 comment
#7 · What is the difference between SEED-2-1 and SEED-2? · JunZhan2000 · closed 11 months ago · 1 comment
#6 · Will the new seed-llama-v2-1 model be open-sourced? · yonghenglh6 · closed 11 months ago · 1 comment
#5 · Training Code? · achen46 · closed 1 year ago · 1 comment
#4 · Update README.md · computerscienceiscool · opened 1 year ago · 0 comments