-
Hi author, I would like to ask: how should I fine-tune YOLO-World on my own dataset while preserving its open-vocabulary capability? Should I use the config files under config/fine_tuning or config/pretrain? I have also observed that I may need to mix my own dataset with the GQA dataset in order to retain the zero-shot ability while achieving better detection on my own data. If I want to keep the zero-shot ability, should I use config/pretr…
-
Dear scholar, does this project include the code for "scene graph generation for GQA images", or anything else related to the other paper "Relation Transformer Network"? If not, how long might I need to wait to …
-
Great work! I reviewed it. I have a question: does 'avg.len' in the CHAIR experiment of the paper refer to the average length in tokens?
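If 'avg.len' is indeed the mean token count of the generated captions, it could be computed as in the sketch below. The whitespace tokenizer is an assumption; the paper may count tokens with the model's own tokenizer instead.

```python
def average_length(captions):
    """Mean number of whitespace-separated tokens per caption."""
    return sum(len(c.split()) for c in captions) / len(captions)

# e.g. average_length(["a cat", "a big dog"]) -> 2.5
```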
-
### 📚 The doc issue
The feature list mentions: [2024/04] TurboMind latest upgrade boosts GQA, rocketing the [internlm2-20b](https://huggingface.co/internlm/internlm2-20b) model inference to 16+ RPS, about 1.8x fa…
-
No GQA implementation is found, so the model cannot scale to 70B for composerLLAMA.
Perhaps we need to design GQA support and introduce head_z for wq and head_z_kv for wk and wv?
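For reference, a minimal sketch of grouped-query attention, where each group of query heads shares one KV head (the shapes and the plain PyTorch implementation here are assumptions for illustration, not the repo's code; the head_z/head_z_kv masks mentioned above are omitted):

```python
import torch

def grouped_query_attention(q, k, v, num_heads, num_kv_heads):
    """q: (B, T, num_heads*d); k, v: (B, T, num_kv_heads*d).
    Each group of num_heads // num_kv_heads query heads attends
    to the same shared KV head."""
    B, T, _ = q.shape
    d = q.shape[-1] // num_heads
    group = num_heads // num_kv_heads
    q = q.view(B, T, num_heads, d).transpose(1, 2)      # (B, H,   T, d)
    k = k.view(B, T, num_kv_heads, d).transpose(1, 2)   # (B, Hkv, T, d)
    v = v.view(B, T, num_kv_heads, d).transpose(1, 2)
    # replicate each KV head across its query-head group
    k = k.repeat_interleave(group, dim=1)               # (B, H, T, d)
    v = v.repeat_interleave(group, dim=1)
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return (attn @ v).transpose(1, 2).reshape(B, T, num_heads * d)
```

With num_kv_heads == num_heads this reduces to standard multi-head attention; pruning would then mask query heads and KV heads independently.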
-
I tried to directly add the following to yolo_world_v2_l_vlpan_bn_2e-4_80e_8gpus_mask-refine_finetune_coco.py:
mg_train_dataset = dict(type='YOLOv5MixedGroundingDataset',
data_root='data/mixed_grounding/',
…
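One way to combine a custom dataset with grounding data is a concatenated training dataset, sketched below under the assumption that the config follows mmengine/mmdet conventions; the `ConcatDataset` wrapper and the `coco_train_dataset` name are assumptions based on typical YOLO-World pretrain configs, not verified against this file.

```python
# hypothetical mixed training set: custom fine-tuning data + grounding data
train_dataset = dict(
    _delete_=True,
    type='ConcatDataset',
    datasets=[
        coco_train_dataset,  # your own fine-tuning dataset (assumed name)
        mg_train_dataset,    # the mixed-grounding dataset defined above
    ],
    ignore_keys=['dataset_type'],
)
```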
-
Take a look at the history of commits related to implementing `n_gqa` and undo/remove everything that is related to it
Checklist
- [X] `api/src/serge/utils/migrate.py` ✅ Commit [`ee41b29`](https:/…
-
I tried to reproduce the results under the base recipe. I basically match the paper's results on VQAv2, GQA, ScienceQA, and POPE, but there is almost a 1% gap on TextVQA, MMMU, and MM-Vet, and the gap on MM…
-
Hello, I am trying to add support for models with GQA, e.g. TinyLlama.
The indicator for grouped-query attention is num_key_value_heads < num_attention_heads in the config.json file.
For TinyLlama m…
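The indicator described above can be checked programmatically; a minimal sketch, assuming a Hugging Face-style config.json (the fallback to num_attention_heads when num_key_value_heads is absent is an assumption about how older configs behave):

```python
import json

def uses_gqa(config_path):
    """Return True if the config describes grouped-query attention,
    i.e. fewer key/value heads than attention heads."""
    with open(config_path) as f:
        cfg = json.load(f)
    n_heads = cfg["num_attention_heads"]
    # configs without GQA may omit num_key_value_heads entirely
    n_kv = cfg.get("num_key_value_heads", n_heads)
    return n_kv < n_heads
```

For TinyLlama-1.1B, which reports 32 attention heads and 4 key/value heads, this would return True.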
-
Hi,
Is it possible to provide the details on how the first version was evaluated on benchmarks such as GQA or AOK-VQA in Table 6 of the paper?
Thanks