VisionLearningGroup / visda-2018-public


VisDA2018: 2nd Edition of the Synthetic-to-Real Visual Domain Adaptation Challenge

Hi!

This is the development kit repository for the 2018 Visual Domain Adaptation (VisDA) Challenge. Here you can find details on how to download the datasets, run the baseline models, and evaluate the performance of your model. Evaluation can be performed either locally or remotely on the CodaLab evaluation server. Please see the main website for competition details, rules, and dates.
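For a quick local sanity check of a classification model, a sketch like the one below can be useful; it computes mean per-class accuracy, the style of metric reported for the classification track. This is not the official scorer, and the function name, inputs, and label format are assumptions; the track-specific development kits contain the authoritative evaluation scripts and submission formats.

```python
import numpy as np

def mean_per_class_accuracy(y_true, y_pred, num_classes):
    """Mean of per-class accuracies over the classes present in y_true.
    Hypothetical helper for local checks only, not the official scorer."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_class = []
    for c in range(num_classes):
        mask = y_true == c          # samples whose ground-truth label is class c
        if mask.any():
            per_class.append(float((y_pred[mask] == c).mean()))
    return float(np.mean(per_class))

# Toy example with 3 of 12 classes present:
# mean_per_class_accuracy([0, 1, 1, 2], [0, 1, 0, 2], num_classes=12)  # ~0.833
```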

You can find the development kits for the two competition tracks by following these links:

If you use the data, code, or their derivatives, please consider citing:

@article{peng2017visda,
  title={VisDA: The Visual Domain Adaptation Challenge},
  author={Peng, Xingchao and Usman, Ben and Kaushik, Neela and Hoffman, Judy and Wang, Dequan and Saenko, Kate},
  journal={arXiv preprint arXiv:1710.06924},
  year={2017}
}

@article{Peng2018Syn2RealAN,
  title={Syn2Real: A New Benchmark for Synthetic-to-Real Visual Domain Adaptation},
  author={Xingchao Peng and Ben Usman and Kuniaki Saito and Neela Kaushik and Judy Hoffman and Kate Saenko},
  journal={CoRR},
  year={2018},
  volume={abs/1806.09755}
}

If you find any bugs, please open an issue.

Have fun!