M-Nauta / ProtoTree

ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
MIT License

My final acc is only 76% #3

Closed by mobulan 2 years ago

mobulan commented 2 years ago

```
Eval Epoch 95: 100% 91/91 [01:49<00:00,  1.21s/it, Batch [91/91], Acc: 0.559]
Train Epoch 96: 100% 469/469 [05:52<00:00,  1.33it/s, Batch [469/469], Loss: 0.3
Eval Epoch 96: 100% 91/91 [01:51<00:00,  1.22s/it, Batch [91/91], Acc: 0.618]
Train Epoch 97: 100% 469/469 [05:55<00:00,  1.32it/s, Batch [469/469], Loss: 0.2
Eval Epoch 97: 100% 91/91 [01:47<00:00,  1.18s/it, Batch [91/91], Acc: 0.529]
Train Epoch 98: 100% 469/469 [05:52<00:00,  1.33it/s, Batch [469/469], Loss: 0.3
Eval Epoch 98: 100% 91/91 [01:48<00:00,  1.19s/it, Batch [91/91], Acc: 0.588]
Train Epoch 99: 100% 469/469 [05:56<00:00,  1.32it/s, Batch [469/469], Loss: 0.4
Eval Epoch 99: 100% 91/91 [01:46<00:00,  1.17s/it, Batch [91/91], Acc: 0.618]
Train Epoch 100: 100% 469/469 [05:53<00:00,  1.33it/s, Batch [469/469], Loss: 0.
Eval Epoch 100: 100% 91/91 [01:47<00:00,  1.18s/it, Batch [91/91], Acc: 0.647]
Eval Epoch pruned: 100% 91/91 [01:39<00:00,  1.10s/it, Batch [91/91], Acc: 0.647
Projection: 100% 375/375 [02:51<00:00,  2.19it/s, Batch: 375/375]
Eval Epoch pruned_and_projected: 100% 91/91 [01:36<00:00,  1.06s/it, Batch [91/9
Eval Epoch pruned_and_projected: 100% 91/91 [01:33<00:00,  1.03s/it, Batch [91/9
Eval Epoch pruned_and_projected: 100% 91/91 [02:16<00:00,  1.50s/it, Batch [91/9
Fidelity: 100% 91/91 [02:47<00:00,  1.84s/it, Batch [91/91]]
```

In the overview table:
| Epoch | Test acc | Train acc   | Train loss  |
| ----- | -------- | ----------- | ----------- |
| 85   | 0.764411 | 0.927753346 | 0.315042642 |
| 86   | 0.766483 | 0.929271055 | 0.312246734 |
| 87   | 0.768208 | 0.929870736 | 0.310522149 |
| 88   | 0.77028  | 0.928586235 | 0.310088791 |
| 89   | 0.76631  | 0.929519071 | 0.306573199 |
| 90   | 0.763721 | 0.929437633 | 0.307344964 |
| 91   | 0.764066 | 0.929333985 | 0.301229446 |
| 92   | 0.765447 | 0.929681947 | 0.299217476 |
| 93   | 0.765274 | 0.930585169 | 0.298738088 |
| 94   | 0.768381 | 0.929467247 | 0.298132296 |
| 95   | 0.765965 | 0.929104478 | 0.296845926 |
| 96   | 0.76631  | 0.929918858 | 0.295910276 |
| 97   | 0.764066 | 0.930070629 | 0.295259658 |
| 98   | 0.763548 | 0.92906746  | 0.296624792 |
| 99   | 0.764411 | 0.929082267 | 0.296345691 |
| 100  | 0.761823 | 0.929915156 | 0.294409203 |
M-Nauta commented 2 years ago

Dear Mobulan, On which dataset did you train and did you use the same parameters as specified in the paper? It might also be the case that you had a bad random seed or unfortunate weight initialization, which you could verify by training it again.
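To rule out seed effects between the two runs, all relevant RNGs can be fixed before training. A minimal sketch (the `set_seed` helper below is illustrative, not part of the ProtoTree repo):

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Fix Python, NumPy and PyTorch RNGs so reruns are comparable."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds all CUDA devices
    # Trade some speed for repeatable cuDNN convolution algorithms.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(42)
sample = torch.rand(3)  # identical on every run with the same seed
```

If accuracy stays around 76% across several seeds, the gap is unlikely to be seed luck.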

mobulan commented 2 years ago

Yes, I will run it again today.

```bash
python main_tree.py --epochs 100 --log_dir /DATA/linjing/Mobulan/ProtoTree/runs/protoree_cub_11.27 --dataset CUB-200-2011 --lr 0.001 --lr_block 0.001 --lr_net 1e-5 --num_features 256 --depth 9 --net resnet50_inat --freeze_epochs 30 --milestones 60,70,80,90,100
```

I changed `num_workers` to 4 by editing:

```python
trainloader = torch.utils.data.DataLoader(trainset,
                                          batch_size=args.batch_size,
                                          shuffle=True,
                                          pin_memory=cuda,
                                          num_workers=4
                                          )
projectloader = torch.utils.data.DataLoader(projectset,
                                            # batch_size=args.batch_size,
                                            batch_size=int(args.batch_size / 4),  # smaller batches prevent out-of-memory errors during projection
                                            shuffle=False,
                                            pin_memory=cuda,
                                            num_workers=4
                                            )
testloader = torch.utils.data.DataLoader(testset,
                                         batch_size=args.batch_size,
                                         shuffle=False,
                                         pin_memory=cuda,
                                         num_workers=4
                                         )
```