The 2018 competition is part of the FGVC5 workshop at CVPR. Our sponsor, the Xingse App 形色 (Chinese version) & PictureThis App (English version), has provided a dataset from a carefully curated database containing over 669,000 annotated flower images from 997 flower species.
Please open an issue if you have questions or problems with the dataset.
We are using Kaggle to host the leaderboard. Check out the competition page here:
| Milestone | Date |
|---|---|
| Data Released | April 27, 2018 |
| Submission Deadline | June 8, 2018 |
| Winners Announced | June 22, 2018 |
There are a total of 997 flower species in the dataset, with 669,304 training and validation images. The testing set contains 12,961 images.
We use top-1 error rate as the evaluation metric. For each image $i$, an algorithm will produce one label $l_i$. For this competition each image has exactly one ground truth label $g_i$, and the error for that image is:

$$e_i = d(l_i, g_i)$$

where

$$d(x, y) = \begin{cases} 0 & \text{if } x = y \\ 1 & \text{otherwise} \end{cases}$$

The overall error score for an algorithm is the average error over all $N$ test images:

$$\text{score} = \frac{1}{N} \sum_{i=1}^{N} e_i$$
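As a sketch of the metric, assuming predictions and ground-truth labels are stored as plain Python dicts keyed by image id (the function name and data layout here are illustrative, not part of the competition tooling):

```python
def top1_error(predictions, ground_truth):
    """Average top-1 error over all test images.

    Both arguments map image id -> category id.
    """
    errors = [
        0 if predictions[image_id] == label else 1
        for image_id, label in ground_truth.items()
    ]
    return sum(errors) / len(errors)


# Example: one of three predictions is wrong -> error of 1/3.
gt = {1: 10, 2: 20, 3: 30}
pred = {1: 10, 2: 99, 3: 30}
print(top1_error(pred, gt))  # 0.3333...
```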
Participants are restricted to train their algorithms on the 2018 FGVCx Flower Classification competition train and validation sets. Pretrained models may be used to construct the algorithms (e.g. ImageNet pretrained models) as long as participants do not actively collect additional data for the target species in the 2018 FGVCx Flower Classification competition. Please specify any and all external data used for training when uploading results.
The general rule is that we want participants to use only the provided training and validation images to train a model to classify the test images. We do not want participants crawling the web in search of additional data for the target categories. Participants should be in the mindset that this is the only data available for those categories.
Participants are allowed to collect additional annotations (e.g. bounding boxes) on the provided training and validation sets. Teams should specify that they collected additional annotations when submitting results.
We closely follow the annotation format of the COCO dataset. To help with identifying flower species, extra information is provided:
The annotations are stored in the JSON format and are organized as follows:
{
"info" : info,
"images" : [image],
"categories" : [category],
"annotations" : [annotation],
"licenses" : [license]
}
info{
"year" : int,
"version" : str,
"description" : str,
"contributor" : str,
"url" : str,
"date_created" : datetime,
}
image{
"id" : int,
"width" : int,
"height" : int,
"file_name" : str,
"license" : int,
"rights_holder" : str,
"upload_latitude" : float,
"upload_longitude" : float,
"upload_date" : str
}
category{
"id" : int,
"genus" : str,
"family" : str,
"name" : str
}
annotation{
"id" : int,
"image_id" : int,
"category_id" : int
}
license{
"id" : int,
"name" : str,
"url" : str
}
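A minimal sketch of parsing this annotation format with the standard `json` module. The sample record below is illustrative (invented values in the format above), and in practice you would load the downloaded annotation file instead:

```python
import json

# A tiny annotation document in the format above (values are illustrative).
example = json.loads("""
{
  "info": {"year": 2018, "version": "1.0", "description": "example",
           "contributor": "", "url": "", "date_created": "2018-04-27"},
  "images": [{"id": 1, "width": 600, "height": 800,
              "file_name": "images/1.jpg", "license": 1,
              "rights_holder": "", "upload_latitude": 30.0,
              "upload_longitude": 120.0, "upload_date": "2017-05-01"}],
  "categories": [{"id": 23, "genus": "Rosa", "family": "Rosaceae",
                  "name": "Rosa chinensis"}],
  "annotations": [{"id": 1, "image_id": 1, "category_id": 23}],
  "licenses": [{"id": 1, "name": "CC BY-NC", "url": ""}]
}
""")

# Map image id -> file name, and image id -> category id via the annotations.
id_to_file = {img["id"]: img["file_name"] for img in example["images"]}
id_to_label = {ann["image_id"]: ann["category_id"] for ann in example["annotations"]}

# Category metadata includes genus and family for each species.
categories = {cat["id"]: cat for cat in example["categories"]}

print(id_to_file[1], id_to_label[1])  # images/1.jpg 23
```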
The submission format for the Kaggle competition is a CSV file with the following format:
id,predicted
12345,23
67890,42
The `id` column corresponds to the test image id. The `predicted` column corresponds to the predicted category id. You should have one row for each test image.
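A sketch of writing a submission file with the standard `csv` module, assuming your predictions are held in a dict from test image id to predicted category id (the ids below are just the examples from the format above):

```python
import csv

# Hypothetical predictions: test image id -> predicted category id.
predictions = {12345: 23, 67890: 42}

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "predicted"])  # header row expected by Kaggle
    for image_id in sorted(predictions):
        writer.writerow([image_id, predictions[image_id]])
```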
By downloading this dataset you agree to the following terms:
Download the dataset files from Kaggle competition page:
For participants in China, downloading from Kaggle may be very slow. Please feel free to use the following link instead: