haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

LLaVA Context-length #1562

Open oroojlooy opened 3 months ago

oroojlooy commented 3 months ago

Question

Is there a table or page listing the context length of each model? One can of course load the models one by one and read `context_len` via

```python
tokenizer, model, image_processor, context_len = load_pretrained_model(...)
```

but this is very time-consuming.
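One faster route, as a sketch rather than a documented LLaVA API: each checkpoint's `config.json` on the Hub already carries the fields a loader would read, so you can fetch just that small file (e.g. with `huggingface_hub.hf_hub_download`) instead of loading the weights. The helper below mirrors the common fallback logic; the field names (`max_sequence_length`, `max_position_embeddings`) and the 2048 default are assumptions based on typical Hugging Face configs, not verified against every LLaVA release.

```python
# Sketch only: field names below are assumptions based on common
# Hugging Face config.json layouts, not a documented LLaVA API.
def context_len_from_config(config: dict) -> int:
    """Guess a model's context length from its config.json dict."""
    if "max_sequence_length" in config:
        return config["max_sequence_length"]
    # Most Llama-based checkpoints expose max_position_embeddings instead.
    return config.get("max_position_embeddings", 2048)

# Example with a config dict shaped like a Llama-based checkpoint's config.json
print(context_len_from_config({"max_position_embeddings": 4096}))  # 4096
print(context_len_from_config({}))  # 2048 (fallback default)
```

Looping this over a list of repo ids would give you the table you want in seconds, since only the config files are downloaded.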