Closed: jiangzhengkai closed this issue 3 years ago.

Hi, I just want to know how you generated the coco_supervision.txt file, and whether this file is consistent with other methods?
coco_supervision.txt is generated by the following code:
    import numpy as np

    def subsample_idx(num_all):
        # Supervision percentages and the number of random seeds per percentage.
        SupPercent = [0.01, 0.1, 0.5, 1, 2, 5, 10]
        run_times = 10
        dict_all = {}
        for sup_p in SupPercent:
            dict_all[sup_p] = {}
            # Number of labeled images for this supervision percentage.
            num_label = int(sup_p / 100. * num_all)
            for run_i in range(run_times):
                # Sample a fixed set of labeled indices without replacement.
                labeled_idx = np.random.choice(range(num_all), size=num_label, replace=False)
                dict_all[sup_p][run_i] = labeled_idx.tolist()
        return dict_all
dict_all is then stored in coco_supervision.txt.
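For illustration, here is a minimal sketch of how dict_all could be written to coco_supervision.txt as JSON; the num_all value and the use of JSON here are assumptions for this example, not details confirmed by the thread:

    import json

    num_all = 117266  # assumed placeholder for the number of training images
    dict_all = subsample_idx(num_all)
    # Note: json.dump converts the numeric keys (e.g., 0.01, 1) to strings ("0.01", "1").
    with open("coco_supervision.txt", "w") as f:
        json.dump(dict_all, f)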
We generate this file to ensure that our results are reproducible and that all GPUs use the same list of labeled data when training is distributed across multiple GPUs. We reimplemented CSD with the same list, since their paper and implementation do not include COCO-standard experiments. For STAC, we use the values reported in their paper.
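As a sketch of how a precomputed split keeps distributed training consistent (assuming the JSON serialization above; note that the JSON round trip turns the numeric keys into strings):

    import json
    import numpy as np

    # Every process reads the same precomputed file instead of sampling locally,
    # so all GPUs in a distributed job see an identical list of labeled images.
    with open("coco_supervision.txt", "r") as f:
        dict_all = json.load(f)

    sup_percent, data_seed = 1, 0  # hypothetical config values (1% supervision, seed 0)
    labeled_idx = np.array(dict_all[str(sup_percent)][str(data_seed)])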
Also, as we reported in the paper, the variance across runs is not that large.