Haiyang-W / GiT

[ECCV2024 Oral🔥] Official Implementation of "GiT: Towards Generalist Vision Transformer through Universal Language Interface"
https://arxiv.org/abs/2403.09394
Apache License 2.0

Details regarding few-shot and zero-shot datasets #6

Closed: NareshGuru77 closed 5 months ago

NareshGuru77 commented 5 months ago

Hi,

Thank you for the code and the Readme, which are both very well organized. I am trying to set up the few-shot and zero-shot datasets. Are there any details I need to take into account?

Thank you!

nnnth commented 5 months ago

Both the zero-shot and few-shot datasets we use are supported by mmdetection and mmsegmentation. Please refer to the official documentation for downloading. We also provide instructions here.

One thing to note is that annotations for detection datasets need to be converted to COCO format.
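For reference, a COCO-format annotation file is just a JSON document with `images`, `annotations`, and `categories` lists. Below is a minimal, hypothetical sketch of building that structure from simple per-image records; the COCO field names are standard, but the input record format (`file_name`/`width`/`height`/`boxes`) is an assumption for illustration only, not the format of any specific dataset here.

```python
import json

def to_coco(records, category_names, out_path="annotations_coco.json"):
    """Convert simple per-image records into a COCO-style annotation dict.

    `records` is an assumed input format: a list of dicts like
    {"file_name": ..., "width": ..., "height": ...,
     "boxes": [(x, y, w, h, category_name), ...]}.
    """
    categories = [{"id": i + 1, "name": n} for i, n in enumerate(category_names)]
    cat_ids = {n: i + 1 for i, n in enumerate(category_names)}
    coco = {"images": [], "annotations": [], "categories": categories}

    ann_id = 1
    for img_id, rec in enumerate(records, start=1):
        coco["images"].append({
            "id": img_id,
            "file_name": rec["file_name"],
            "width": rec["width"],
            "height": rec["height"],
        })
        for x, y, w, h, cat in rec["boxes"]:
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": cat_ids[cat],
                "bbox": [x, y, w, h],  # COCO boxes are [x, y, width, height]
                "area": w * h,
                "iscrowd": 0,
            })
            ann_id += 1

    with open(out_path, "w") as f:
        json.dump(coco, f)
    return coco
```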

As for config files, we have provided details here.

NareshGuru77 commented 5 months ago

Thank you for the details! It helps a lot!

NareshGuru77 commented 5 months ago

Hi,

Sorry for the delayed response. Some of the few-shot dataset converter scripts do not seem to be present:

- `tools/dataset_converters/drive.py`
- `tools/dataset_converters/loveda.py`
- `tools/dataset_converters/potsdam.py`

Could you please provide them?

Thank you.

Haiyang-W commented 5 months ago

We will check it in a few days. :) Thanks for your patience.

nnnth commented 5 months ago

We have updated the scripts according to mmsegmentation; you can try again now.
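As a side note, mmsegmentation-style converters usually write out an `img_dir/{train,val}` plus `ann_dir/{train,val}` layout. A small, hypothetical sanity check (the `data/potsdam` root and split names are assumptions, adjust to wherever the converter wrote its output) can confirm that images and masks line up before training:

```python
import os

# Assumed output root of the dataset converter; change to your actual path.
DATA_ROOT = "data/potsdam"

for split in ("train", "val"):
    imgs = sorted(os.listdir(os.path.join(DATA_ROOT, "img_dir", split)))
    anns = sorted(os.listdir(os.path.join(DATA_ROOT, "ann_dir", split)))
    # Compare basenames without extensions so .png masks can pair with .jpg/.png images.
    img_stems = {os.path.splitext(f)[0] for f in imgs}
    ann_stems = {os.path.splitext(f)[0] for f in anns}
    missing = img_stems - ann_stems
    print(f"{split}: {len(imgs)} images, {len(anns)} masks, "
          f"{len(missing)} images without a mask")
```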