Closed tcmyxc closed 1 year ago
Use:

```shell
python main.py --dataset cifar10 -a resnet32 --num_classes 10 --imbanlance_rate 0.01 --beta 0.5 --lr 0.01 --epochs 200 -b 64 --momentum 0.9 --weight_decay 5e-3 --resample_weighting 0.0 --label_weighting 1.2 --contrast_weight 1
```

For other imbalance rates, does this config need to be modified? For example, is the weight of the contrastive-learning loss still 1?
In the test file, the function `eval_training` is not correct: you need to account for the batch size. You can see my code:
```python
model.eval()
size = len(val_loader.dataset)
correct = 0
for i, (inputs, labels) in enumerate(val_loader):
    inputs, labels = inputs.cuda(), labels.cuda()
    with torch.no_grad():
        logits = model(inputs, train=False)
        # accumulate the count of correct predictions, not per-batch accuracy
        correct += (logits.argmax(1) == labels).type(torch.float).sum().item()
        top1, top5 = accuracy(logits.data, labels.data, topk=(1, 5))
        output = 'Test: ' + str(i) + ' Prec@1: ' + str(top1.item())
        print(output, end="\r")
# divide by the total number of samples once, at the end
correct /= size
return correct
```
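For reference, the snippet above assumes an `accuracy(logits, labels, topk=...)` helper. A hypothetical, dependency-free sketch of what such a top-k helper computes (real repos typically implement it with torch tensors; the name and signature here follow the snippet, not a confirmed API):

```python
def accuracy(logits, labels, topk=(1,)):
    """Return, for each k, the percentage of samples whose true label
    is among the k highest-scoring classes."""
    results = []
    for k in topk:
        correct = 0
        for scores, label in zip(logits, labels):
            # indices of the k highest scores, best first
            topk_idx = sorted(range(len(scores)),
                              key=lambda i: scores[i], reverse=True)[:k]
            if label in topk_idx:
                correct += 1
        results.append(100.0 * correct / len(labels))
    return results

# toy check: sample 0 is predicted correctly at top-1, sample 1 only at top-3
logits = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]
labels = [1, 2]
top1, top3 = accuracy(logits, labels, topk=(1, 3))  # 50.0, 100.0
```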
Have you run it? Does the modified `eval_training` have a big impact?
The experimental results are similar to those reported in the repo, but the script used afterwards to evaluate saved weights is incorrect.
Please set the batch size to 64 for both training and testing.
There is indeed a problem with your weight-testing code; please check it. For the same checkpoint file, your test code produces different results on every run.
In your test script, you store each batch's accuracy, then sum the list and divide by its length; that is wrong. You should store (or keep accumulating) the number of correct predictions in each mini-batch, sum them at the end, and then divide by the total number of samples in the dataset. Your training script does not have this problem. For a fixed model checkpoint, the accuracy is already determined: it cannot change with the batch size you set during validation, nor vary from run to run.
The training code is correct.
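The bug described above can be reproduced in a few lines of plain Python (no torch; the predictions and labels are made up for illustration). With 10 samples and batch size 4, the last batch has only 2 samples, so averaging per-batch accuracies over-weights it, while accumulating correct counts gives the true dataset accuracy:

```python
def batched(xs, bs):
    """Split a list into consecutive chunks of size bs (last may be smaller)."""
    return [xs[i:i + bs] for i in range(0, len(xs), bs)]

# 10 samples, batch size 4 -> batches of sizes 4, 4, 2
preds  = [1, 1, 1, 1,  1, 1, 1, 1,  0, 0]   # hypothetical model predictions
labels = [1, 1, 1, 1,  1, 1, 1, 1,  1, 1]   # ground truth

# Wrong: store each batch's accuracy, then average the list
batch_accs = []
for p, t in zip(batched(preds, 4), batched(labels, 4)):
    batch_accs.append(sum(a == b for a, b in zip(p, t)) / len(p))
wrong_acc = sum(batch_accs) / len(batch_accs)   # (1.0 + 1.0 + 0.0) / 3 ~= 0.667

# Right: accumulate correct counts, divide by the dataset size once
correct = sum(a == b for a, b in zip(preds, labels))
right_acc = correct / len(labels)               # 8 / 10 = 0.8
```

Note that the wrong average also changes if the batch size changes, which matches the non-reproducible results mentioned above.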
Thank you for your question. We have updated the code.
I tried your code with the repo's default config, but the accuracy I get on cifar10-lt-ir-100 is 87.00%.
Could this be a software-version issue?