Hlings / AcroFOD

(ECCV2022) The official PyTorch implementation of the "AcroFOD: An Adaptive Method for Cross-domain Few-shot Object Detection".

How to select images for the few-shot setting in the cityscapes_foggy dataset #14

Closed SumayyaInayat closed 9 months ago

SumayyaInayat commented 11 months ago

Hi, hope you are doing well. I read this paper and it is amazing, as is AsyFOD, and more importantly I could run the code on an 11 GB GPU. I am running your code on a custom dataset. I have preprocessed the source data following cityscapes_to_yolo.py, but now I am confused about how to preprocess the target dataset and, more importantly, how you did it in this work.

I am new to few-shot domain adaptation, so I am not able to figure out how to reshape the target-domain dataset into the few-shot setting. Please help me with this; it would be perfect if you could provide a foggy_cityscapes_to_yolo.py file.

Thanks a lot!

Hlings commented 11 months ago

Hello. The format of the Foggy Cityscapes dataset is the same as Cityscapes, so you can refer to cityscapes_to_yolo.py to process the foggy version of Cityscapes :)
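For reference, here is a minimal sketch of what such a conversion could look like. It assumes the standard Cityscapes gtFine layout and the fact that each foggy image (e.g. *_leftImg8bit_foggy_beta_0.02.png) reuses the annotation of its clean counterpart; the class list, paths, and function names are illustrative, not taken from cityscapes_to_yolo.py:

```python
import json
import os
from glob import glob

# Illustrative 8-class list; the real mapping should match cityscapes_to_yolo.py.
CLASSES = ["person", "rider", "car", "truck", "bus", "train", "motorcycle", "bicycle"]

def polygon_to_yolo_box(polygon, img_w, img_h):
    """Convert a Cityscapes polygon to a normalized YOLO box (cx, cy, w, h)."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    x_min, x_max = max(min(xs), 0), min(max(xs), img_w)
    y_min, y_max = max(min(ys), 0), min(max(ys), img_h)
    return ((x_min + x_max) / 2 / img_w, (y_min + y_max) / 2 / img_h,
            (x_max - x_min) / img_w, (y_max - y_min) / img_h)

def convert_split(ann_dir, out_dir, beta="0.02"):
    """Write one YOLO label file per foggy image, reusing the clean annotations."""
    os.makedirs(out_dir, exist_ok=True)
    for ann_path in glob(os.path.join(ann_dir, "*", "*_gtFine_polygons.json")):
        with open(ann_path) as f:
            ann = json.load(f)
        lines = []
        for obj in ann["objects"]:
            if obj["label"] in CLASSES:
                box = polygon_to_yolo_box(obj["polygon"], ann["imgWidth"], ann["imgHeight"])
                lines.append(" ".join([str(CLASSES.index(obj["label"]))] +
                                      [f"{v:.6f}" for v in box]))
        # Foggy image names only differ by a suffix, e.g. _leftImg8bit_foggy_beta_0.02,
        # so the label file is named to match the foggy image at the chosen beta level.
        stem = os.path.basename(ann_path).replace("_gtFine_polygons.json", "")
        out_name = f"{stem}_leftImg8bit_foggy_beta_{beta}.txt"
        with open(os.path.join(out_dir, out_name), "w") as f:
            f.write("\n".join(lines))
```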

SumayyaInayat commented 11 months ago

Hi, thanks a lot for such a quick response 😊 Yes, I have converted the foggy version following cityscapes_to_yolo.py and trained for some epochs just to see that things are working well.

But my target dataset had all of the images, not a few per class. That's why I asked how many images you kept per class. In the paper, it is written that the images with the highest fog were chosen. Since I am new to few-shot, I just need some more details on that.

Thanks!


Hlings commented 11 months ago

Hi, you can refer to the code here for selecting a specific number of target-domain images, and use the selected sub-dataset for training :)

Hlings commented 11 months ago

Hi! I think I probably misunderstood your requirement :( I randomly select 8 images across the whole dataset; the samples are not chosen per class.
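For concreteness, a minimal sketch of that kind of random selection, assuming YOLO-style paired image and label files; the paths, extension, and seed are illustrative:

```python
import random
import shutil
from pathlib import Path

def sample_few_shot(img_dir, lbl_dir, out_dir, k=8, seed=0):
    """Randomly pick k images from the whole target set (no per-class balancing)
    and copy them with their YOLO label files into a few-shot sub-dataset."""
    img_dir, lbl_dir, out_dir = Path(img_dir), Path(lbl_dir), Path(out_dir)
    (out_dir / "images").mkdir(parents=True, exist_ok=True)
    (out_dir / "labels").mkdir(parents=True, exist_ok=True)
    random.seed(seed)  # fix the seed so the few-shot split is reproducible
    for img in random.sample(sorted(img_dir.glob("*.png")), k):
        shutil.copy(img, out_dir / "images" / img.name)
        shutil.copy(lbl_dir / (img.stem + ".txt"), out_dir / "labels" / (img.stem + ".txt"))

sample_few_shot("foggy/images", "foggy/labels", "foggy_8shot")
```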

SumayyaInayat commented 11 months ago

Hi, yes, you are right!! This is exactly what I wanted to know: whether you chose images based on the classes or completely at random. Thanks a lot for reconsidering my question; it is of great help to me. But since I am running your code on a custom dataset, I have a few questions, so please guide me on this.

If a completely random 8-image set is chosen, is there any assumption that every selected sample must contain instances of all the classes in the dataset, or at least that there are a few instances of every class across the chosen 8 images collectively?

If that assumption is not made, and there is a chance of a particular class missing from the target 8-image set, then how do you make up for a class unseen during training but present in the test data?

Thanks!

Hlings commented 11 months ago

Yeah, I met the same problem in practice. I tried to solve it by manually selecting images so that every target class appears in the sample. But I think it's OK that some target classes are missing, since the main problem is the domain gap, and the training pipeline also uses the source data, which contains all of the classes. You can try both methods and share the results here to help us better understand this task, thx :)
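For anyone trying the class-aware route, here is one hedged sketch of re-drawing a subset until every class appears at least once in the YOLO label files; this is purely illustrative, not the repo's selection code:

```python
import random
from pathlib import Path

def classes_in(label_files):
    """Collect the set of YOLO class ids present across a list of label files."""
    found = set()
    for lf in label_files:
        for line in Path(lf).read_text().splitlines():
            if line.strip():
                found.add(int(line.split()[0]))
    return found

def sample_covering(lbl_dir, k=8, num_classes=8, max_tries=1000, seed=0):
    """Keep re-sampling k label files until every class id 0..num_classes-1 appears."""
    rng = random.Random(seed)
    files = sorted(Path(lbl_dir).glob("*.txt"))
    for _ in range(max_tries):
        subset = rng.sample(files, k)
        if classes_in(subset) == set(range(num_classes)):
            return subset
    raise RuntimeError("no k-subset covering all classes found; increase k or max_tries")
```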

SumayyaInayat commented 11 months ago

OK, that sounds cool now!!! Sure, I will share the results. Thanks a lot for helping me!