krumo / Domain-Adaptive-Faster-RCNN-PyTorch

Domain Adaptive Faster R-CNN in PyTorch
MIT License
307 stars, 68 forks

Which package of cityscapes dataset should I use? #25

Closed. PerryL7s closed this issue 3 years ago.

PerryL7s commented 3 years ago

Hi, thanks for your excellent work. I have never used the Cityscapes dataset before, and the official download page confuses me because there are so many individual packages (sub-datasets?) such as gtFine, gtCoarse, leftImg8bit, etc. I'm wondering which package I should download for this project. Are the data structures of these packages actually the same?

The same question applies to the foggy dataset. Many thanks :)

PerryL7s commented 3 years ago

Would it make sense to use leftImg8bit_trainvaltest (5000 images) for both the Cityscapes and Foggy Cityscapes datasets?

PerryL7s commented 3 years ago

It turns out that gtFine is the annotation data for leftImg8bit. I guess gtFine, leftImg8bit, and leftImg8bit_foggy would work.

krumo commented 3 years ago

> It turns out that gtFine is the annotation data for leftImg8bit. I guess gtFine, leftImg8bit, and leftImg8bit_foggy would work.

Thanks for your interest! You are right. gtFine contains the ground-truth labels for the training and validation sets in leftImg8bit & leftImg8bit_foggy. In the Cityscapes to Foggy Cityscapes setting, we use the labeled training set of Cityscapes as source-domain training data and the unlabeled training set of Foggy Cityscapes as target-domain training data. Performance on the validation set of Foggy Cityscapes is reported to measure the effectiveness of domain adaptation.
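
For reference, here is a minimal sketch (plain Python, not tied to this repo's dataloaders) of the folders you should end up with after extracting gtFine_trainvaltest.zip, leftImg8bit_trainvaltest.zip, and the foggy leftImg8bit package. The `datasets/cityscapes` root below is just an assumed location; adjust it to wherever your config points.

```python
# Sanity-check sketch for the expected Cityscapes / Foggy Cityscapes layout.
# Paths are assumptions, not the repo's required structure.
from pathlib import Path

root = Path("datasets/cityscapes")  # hypothetical root; adjust to your setup

expected = [
    root / "leftImg8bit" / "train",        # clear-weather images (source domain, labeled)
    root / "leftImg8bit_foggy" / "train",  # foggy images (target domain, unlabeled during adaptation)
    root / "leftImg8bit_foggy" / "val",    # foggy validation images (used for evaluation)
    root / "gtFine" / "train",             # gtFine annotations for the training set
    root / "gtFine" / "val",               # gtFine annotations for the validation set
]

for d in expected:
    status = "ok" if d.is_dir() else "missing"
    print(f"{status:>7}  {d}")
```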

kxgong commented 2 years ago

Hello, which fog factor of Foggy Cityscapes is used for training and testing: 0.005, 0.01, or 0.02? Thank you!

krumo commented 2 years ago

> Hello, which fog factor of Foggy Cityscapes is used for training and testing: 0.005, 0.01, or 0.02? Thank you!

Hi, 0.02 is used when I perform domain adaptation on Foggy Cityscapes.
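
For anyone selecting the right images, here is a small sketch assuming the standard Foggy Cityscapes filename convention, where each frame ships in three fog levels and the beta value is encoded in the filename (e.g. `..._leftImg8bit_foggy_beta_0.02.png`). The path below is an assumption; point it at your own extracted folder.

```python
# Sketch: keep only the beta=0.02 variant of the foggy training images.
from pathlib import Path

foggy_root = Path("datasets/cityscapes/leftImg8bit_foggy/train")  # hypothetical path
beta = "0.02"

# Filenames are assumed to end with "_leftImg8bit_foggy_beta_<beta>.png".
images = sorted(foggy_root.rglob(f"*_leftImg8bit_foggy_beta_{beta}.png"))
print(f"Found {len(images)} foggy training images with beta={beta}")
```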

kxgong commented 2 years ago

Hi, thanks for your quick reply.