microsoft / unilm

Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
https://aka.ms/GeneralAI
MIT License

【kosmos-2】The code for GRIT construction #1279

Open ZJUTSong opened 1 year ago

ZJUTSong commented 1 year ago

Describe the model I am using: kosmos-2. Will you update the repo with the code for the GRIT construction process? I'd like to finetune kosmos-2 on an App UI scene, but the details of GRIT construction are not clear enough for me. For example, the steps "get noun chunks and region from detector" and "input image and noun chunks into GLIP to obtain bboxes" seem to be the same? Thanks for your great work!
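
For context, my rough reading of the two steps is something like the sketch below (assuming spaCy for the chunking step; `ground_with_glip` is a hypothetical placeholder for the GLIP inference code, not the actual pipeline), but I am not sure whether these are really two separate steps:

```python
# Rough sketch of my understanding of the chunk/box pairing step; not the official code.
# The spaCy calls are real; `ground_with_glip` is a hypothetical stand-in for GLIP inference.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_noun_chunks(caption: str):
    # Step 1: text-only -- pull noun chunks out of the caption with spaCy.
    doc = nlp(caption)
    return [(chunk.text, chunk.start_char, chunk.end_char) for chunk in doc.noun_chunks]

def build_chunk_box_pairs(image, caption, ground_with_glip):
    # Step 2: image + text -- feed the image and the extracted chunks into a
    # grounding model (GLIP in the paper) to obtain one set of bboxes per chunk.
    chunks = extract_noun_chunks(caption)
    boxes = ground_with_glip(image, [text for text, _, _ in chunks])
    return list(zip(chunks, boxes))
```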

donglixp commented 1 year ago

image

ZJUTSong commented 1 year ago

image

Oh, sorry! I made a mistake. Another question: is the GRIT generation process strict? In a specific scene, GLIP might not be able to recognize all objects. In this case, is it possible to generate object bboxes, captions, and noun chunks manually for finetuning?
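
If so, would hand-labeled records shaped like the sketch below be a reasonable target? (The field names and the `[char_start, char_end, x_min, y_min, x_max, y_max, score]` layout with normalized coordinates are only my guess at a GRIT-like record, not the confirmed format.)

```python
# A hypothetical, hand-labeled sample in a GRIT-like layout for an App UI screenshot.
# Field names and the [char_start, char_end, x_min, y_min, x_max, y_max, score]
# convention (coordinates normalized to image size) are assumptions, not the confirmed format.
manual_sample = {
    "caption": "a settings icon next to the search bar",
    "width": 1080,
    "height": 1920,
    "noun_chunks": [
        # "a settings icon" covers characters 0-15 of the caption
        [0, 15, 0.02, 0.01, 0.10, 0.06, 1.0],  # score fixed to 1.0 for human labels
        # "the search bar" covers characters 24-38
        [24, 38, 0.12, 0.01, 0.95, 0.06, 1.0],
    ],
}
```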

donglixp commented 1 year ago

Yes, manual annotations would be quite helpful.

davidluciolu commented 9 months ago

Hi! I am also curious about the construction of the GRIT dataset. It is mentioned in the paper that

We eliminate certain abstract noun phrases that are challenging to recognize in the image, such as “time”, “love”, and “freedom”, to reduce potential noise.

So, are the abstract noun phrases eliminated manually or using spaCy? Many thanks!
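
If it is done with spaCy, I imagine a filter roughly like the sketch below (the stop list is a made-up example, not the actual list used for GRIT):

```python
# Minimal sketch of dropping abstract noun phrases with spaCy.
# ABSTRACT_WORDS is a hypothetical stop list, not the one used to build GRIT.
import spacy

nlp = spacy.load("en_core_web_sm")
ABSTRACT_WORDS = {"time", "love", "freedom", "idea", "moment"}

def keep_chunk(chunk) -> bool:
    # Drop a noun chunk when its head word is in the abstract stop list.
    return chunk.root.lemma_.lower() not in ABSTRACT_WORDS

doc = nlp("a photo showing the love between a dog and its owner")
print([c.text for c in doc.noun_chunks if keep_chunk(c)])
# expected: ['a photo', 'a dog', 'its owner'] -- 'the love' is filtered out
```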