Simon4Yan opened 2 years ago
Following up on the above, the trained COCO classification model is here.
The model structure is:
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class FT_Resnet_fea(nn.Module):
    def __init__(self, mode='resnet50', num_classes=12, pretrained=True):
        super(FT_Resnet_fea, self).__init__()
        if mode == 'resnet50':
            model = models.resnet50(pretrained=pretrained)
        elif mode == 'resnet101':
            model = models.resnet101(pretrained=pretrained)
        elif mode == 'resnet152':
            model = models.resnet152(pretrained=pretrained)
        else:
            model = models.resnet18(pretrained=pretrained)
        # Keep the convolutional backbone; the original fc layer is discarded.
        self.features = nn.Sequential(
            model.conv1,
            model.bn1,
            model.relu,
            model.maxpool,
            model.layer1,
            model.layer2,
            model.layer3,
            model.layer4
        )
        self.num_classes = num_classes
        # Channel dimension of the backbone output (2048 for ResNet-50/101/152, 512 for ResNet-18).
        self.num_features = model.layer4[1].conv1.in_channels
        self.fc = nn.Linear(self.num_features, self.num_features // 2)
        self.classifier = nn.Linear(self.num_features // 2, num_classes)
        self.avg = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.features(x)
        x = self.avg(x).view(-1, self.num_features)
        fea = self.fc(x)
        x = F.relu(fea)
        x = F.dropout(x, training=self.training)
        output = self.classifier(x)
        return output, fea
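For reference, here is a minimal usage sketch (not part of the original post); the batch size, input resolution, and pretrained flag are illustrative only:

import torch

# Illustrative forward pass through FT_Resnet_fea.
model = FT_Resnet_fea(mode='resnet50', num_classes=12, pretrained=False).eval()
dummy = torch.randn(4, 3, 224, 224)  # fake batch of 4 RGB images
with torch.no_grad():
    output, fea = model(dummy)
print(output.shape)  # torch.Size([4, 12])   -- class logits
print(fea.shape)     # torch.Size([4, 1024]) -- penultimate features (2048 // 2 for ResNet-50)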
Here is an example of the data loader:
import torch
from torchvision import transforms

# args and **kwargs (e.g., num_workers, pin_memory) come from the surrounding training script.

# For training
train_loader = torch.utils.data.DataLoader(
    IMAGE_COCO('YOUR_PATH/coco_train_val/', 'train.txt',
               transform=transforms.Compose([
                   transforms.Resize([256, 256]),
                   transforms.RandomResizedCrop(size=224),
                   transforms.RandomHorizontalFlip(),
                   transforms.ToTensor(),
                   transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                        std=[0.229, 0.224, 0.225])
               ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)

# For testing
test_loader = torch.utils.data.DataLoader(
    IMAGE_COCO('YOUR_PATH/test_sets/', 'YOUR_PATH/test_sets/labels/i_List.txt',
               transform=transforms.Compose([
                   transforms.Resize([256, 256]),
                   transforms.CenterCrop(224),
                   transforms.ToTensor(),
                   transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                        std=[0.229, 0.224, 0.225])
               ])),
    batch_size=args.batch_size, shuffle=False, drop_last=False, **kwargs)
def make_dataset(image_list):
    # Each line is expected to contain an image path followed by an integer class label.
    if len(image_list[0].split()) == 2:
        images = [(val.split()[0], int(val.split()[1])) for val in image_list]
    elif len(image_list[0].split()) > 2:
        # Paths may contain spaces; assume the label occupies the last three characters of the line.
        images = []
        for val in image_list:
            images.append([val[:-3], int(val[-3:])])
        # images = [(val.split('.jpg')[0] + '.jpg', int(val.split('.jpg')[1])) for val in image_list]
    return images
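For context, make_dataset assumes each line of the list file pairs an image path (relative to the root directory passed to IMAGE_COCO) with an integer class label, roughly like the lines below; these file names are made up for illustration:

000000397133.jpg 3
000000037777.jpg 7
000000252219.jpg 0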
from PIL import Image
from torch.utils import data

class IMAGE_COCO(data.Dataset):
    def __init__(self, path, image_list, transform=None, target_transform=None):
        super(IMAGE_COCO, self).__init__()
        self.imgs = make_dataset(open(image_list).readlines())
        self.path = path
        self.transform = transform
        self.target_transform = target_transform

    def __getitem__(self, index):
        """
        Args:
            index (int): Index
        Returns:
            tuple: (image, target) where target is class_index of the target class.
        """
        path, target = self.imgs[index]
        img = Image.open(self.path + path).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return img, target

    def __len__(self):
        return len(self.imgs)
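Putting the pieces together, a rough evaluation loop over test_loader might look like the sketch below; the device handling and the checkpoint path are assumptions for illustration, not part of the original script:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = FT_Resnet_fea(mode='resnet50', num_classes=12, pretrained=False).to(device)
# model.load_state_dict(torch.load('YOUR_PATH/coco_classifier.pth'))  # hypothetical checkpoint path
model.eval()

correct, total = 0, 0
with torch.no_grad():
    for images, targets in test_loader:
        images, targets = images.to(device), targets.to(device)
        logits, _ = model(images)  # the model returns (logits, penultimate features)
        preds = logits.argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.size(0)
print('Accuracy: {:.4f}'.format(correct / total))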
If you find our project useful, please cite our works:
@inproceedings{deng2020labels,
    author    = {Deng, Weijian and Zheng, Liang},
    title     = {Are Labels Always Necessary for Classifier Accuracy Evaluation?},
    booktitle = {Proc. CVPR},
    year      = {2021},
}

@article{deng2022labels,
    author  = {Deng, Weijian and Zheng, Liang},
    title   = {Are Labels Always Necessary for Classifier Accuracy Evaluation?},
    journal = {IEEE TPAMI},
    year    = {2022},
}
Have a nice day! -Weijian
link has expired
@ashygsy
link has expired
Thanks for the reminder. I have fixed it. OneDrive notified me that "Your organization's policy requires this link to expire after 30 days". I will find a way to maintain the link.
Regards, Weijian
Hi. Would it be possible to refresh the link again? It seems that the link has expired again. Thank you.
Thanks. I have refreshed it; I will use Google Drive later. Best, Weijian
Would it be possible to refresh the link again? It seems that the link has expired again. Thank you very much.
Just refreshed it, sorry for the late response (struggling with CVPR...)
Thank you for your attention!
Please download the datasets for the COCO classification setup here.
The zip file contains two parts. The first part is the COCO datasets: 1) a training set, 2) a validation set, 3) the validation set without background, and 4) validation sets with various backgrounds.
Some users reported that the COCO meta-set creation is slow. Here is an alternative way to create a meta-set: apply random image transformations to change the visual characteristics of 4) the validation sets with various backgrounds. Given a validation set with a changed background, we can apply 5 random transformations to diversify it.
Users are suggested to follow the approach of ImageNet-C to apply the transformations; ImageNet-C uses the PyTorch data loader to speed up the process, so please refer to its code. In our work, we use imgaug for the transformations (see the sketch below), and there are other corruption sets such as ImageNet-C-Bar.
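As a concrete illustration of the alternative above, here is a minimal sketch that samples 5 random imgaug transformations for a whole validation set; the helper name make_sample_set, the augmenter pool, and the magnitudes are assumptions for illustration, not the exact settings used in the paper:

import numpy as np
import imgaug.augmenters as iaa
from PIL import Image

# Illustrative pool of corruptions; the exact augmenters/magnitudes in the paper may differ.
pool = [
    iaa.GaussianBlur(sigma=(0.5, 2.0)),
    iaa.AdditiveGaussianNoise(scale=(5, 30)),
    iaa.LinearContrast((0.5, 1.5)),
    iaa.Multiply((0.6, 1.4)),               # brightness change
    iaa.JpegCompression(compression=(40, 90)),
    iaa.Affine(rotate=(-15, 15)),
]

def make_sample_set(pil_images, seed):
    """Apply one randomly sampled sequence of 5 transformations to every image in a validation set."""
    aug = iaa.SomeOf(5, pool, random_order=True, seed=seed)
    arrays = [np.array(img) for img in pil_images]
    return [Image.fromarray(a) for a in aug(images=arrays)]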
Note that we provide 3) the validation set without background, so users can easily change the background based on their usage.
The second part contains three real-world test sets: 1) Pascal, 2) Caltech, and 3) ImageNet (note that the ImageNet test set is from the ImageCLEF dataset). We also provide test sets with some image transformations. Enjoy!