Meta-set

Automatic model evaluation (AutoEval), CVPR'21 & TPAMI'22

Are Labels Always Necessary for Classifier Accuracy Evaluation?

[Paper] [Project]

PyTorch Implementation

This repository contains the PyTorch implementation of AutoEval, together with a demo that runs the full pipeline on MNIST.

Please follow the instructions below to install the dependencies and run the experiment demo.

Prerequisites

Python with PyTorch and torchvision installed; the demo trains and evaluates an MNIST classifier.

Getting started

  1. Install dependencies

    # COCOAPI
    cd $DIR/libs
    git clone https://github.com/cocodataset/cocoapi.git
    cd cocoapi/PythonAPI
    python setup.py build_ext install
    
  2. Create Meta-set

    # By default, this creates 300 sample sets
    python meta_set/main.py
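
  The transformations used to synthesize the sample sets are defined in meta_set/main.py; the sketch below only illustrates the general recipe, with hypothetical transformation choices: every sample set reuses the seed test images, each under its own randomly sampled visual transformation.

    # Illustration only: the real transformations live in meta_set/main.py
    import random

    from torchvision import datasets, transforms

    def random_transform():
        # Hypothetical transformation choices, sampled once per sample set
        return transforms.Compose([
            transforms.RandomRotation(degrees=random.uniform(0, 30)),
            transforms.ColorJitter(brightness=random.uniform(0, 0.5),
                                   contrast=random.uniform(0, 0.5)),
            transforms.ToTensor(),
        ])

    # Every sample set reuses the seed test images under its own transformation
    sample_sets = [datasets.MNIST('data', train=False, download=True,
                                  transform=random_transform())
                   for _ in range(300)]  # 300 sets, matching the default above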
  3. Learn classifier

    # Trains the classifier and saves it as "PROJECT_DIR/learn/mnist_cnn.pt"
    python learn/train.py
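
  The actual architecture and training schedule are defined in learn/train.py; below is a minimal sketch of what training an MNIST classifier looks like, with a stand-in model.

    # Minimal training-loop sketch; the real CNN lives in learn/train.py
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    model = nn.Sequential(  # stand-in for the CNN defined in learn/train.py
        nn.Conv2d(1, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 13 * 13, 10),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loader = DataLoader(datasets.MNIST('data', train=True, download=True,
                                       transform=transforms.ToTensor()),
                        batch_size=64, shuffle=True)

    for images, labels in loader:  # one epoch shown for brevity
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        opt.step()

    torch.save(model.state_dict(), 'learn/mnist_cnn.pt')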
  4. Test classifier on Meta-set

    # Produces "PROJECT_DIR/learn/accuracy_mnist.npy"
    python learn/many_test.py
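
  A minimal sketch of the evaluation loop, assuming model is the classifier from step 3 and sample_sets yields one (images, labels) tensor pair per synthesized set:

    # Compute classification accuracy on every sample set, store as one array
    import numpy as np
    import torch

    accuracies = []
    model.eval()
    with torch.no_grad():
        for images, labels in sample_sets:
            preds = model(images).argmax(dim=1)
            accuracies.append((preds == labels).float().mean().item())

    np.save('learn/accuracy_mnist.npy', np.array(accuracies))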
  5. Calculate FD on Meta-set

    # Produces "PROJECT_DIR/FD/fd_mnist.npy"
    python FD/many_fd.py
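
  FD here is the Fréchet distance between the feature statistics of the seed set and those of a sample set, each summarized by a mean vector and covariance matrix (the same formula used for FID). A minimal sketch, assuming features have already been extracted:

    #   FD = ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrtm(S1 @ S2))
    import numpy as np
    from scipy import linalg

    def frechet_distance(feat1, feat2):
        # feat1, feat2: arrays of shape (num_samples, feature_dim)
        mu1, sigma1 = feat1.mean(axis=0), np.cov(feat1, rowvar=False)
        mu2, sigma2 = feat2.mean(axis=0), np.cov(feat2, rowvar=False)
        covmean = linalg.sqrtm(sigma1 @ sigma2)
        if np.iscomplexobj(covmean):  # discard tiny imaginary parts
            covmean = covmean.real
        diff = mu1 - mu2
        return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)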
  6. Linear regression

    # Produces linear_regression_train.png; check whether FD and accuracy
    # show a linear relationship. If they do, go back to step 2 and create
    # 3000 sample sets before continuing.
    python FD/linear_regression.py
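
  A sketch of the regression fit, using the two arrays saved in steps 4 and 5 (sklearn and matplotlib are used here only for illustration):

    # Fit accuracy = a * FD + b on the meta-set and save the diagnostic plot
    import matplotlib
    matplotlib.use('Agg')
    import matplotlib.pyplot as plt
    import numpy as np
    from sklearn.linear_model import LinearRegression

    fd = np.load('FD/fd_mnist.npy').reshape(-1, 1)
    acc = np.load('learn/accuracy_mnist.npy')

    reg = LinearRegression().fit(fd, acc)
    print('R^2 on the meta-set:', reg.score(fd, acc))

    plt.scatter(fd, acc, s=8)
    plt.plot(fd, reg.predict(fd), color='red')
    plt.xlabel('Frechet distance')
    plt.ylabel('Classification accuracy')
    plt.savefig('linear_regression_train.png')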
  7. Network regression

    # Follow the instructions in the script to train the regression network;
    # the trained model is saved as "PROJECT_DIR/FD/mnist_regnet.pt"
    python FD/train_regnet.py
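
  A hypothetical sketch of such a regression network; the real input representation and architecture are defined in FD/train_regnet.py (for simplicity, this version regresses accuracy from the scalar FD alone):

    # Hypothetical tiny regression network: scalar FD -> predicted accuracy
    import numpy as np
    import torch
    import torch.nn as nn

    fd = torch.from_numpy(np.load('FD/fd_mnist.npy')).float().unsqueeze(1)
    acc = torch.from_numpy(np.load('learn/accuracy_mnist.npy')).float().unsqueeze(1)

    regnet = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(regnet.parameters(), lr=1e-3)

    for _ in range(1000):  # simple full-batch training
        opt.zero_grad()
        loss = nn.functional.mse_loss(regnet(fd), acc)
        loss.backward()
        opt.step()

    torch.save(regnet.state_dict(), 'FD/mnist_regnet.pt')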

Citation

If you use the code in your research, please cite:

    @inproceedings{deng2020labels,
      author    = {Deng, Weijian and Zheng, Liang},
      title     = {Are Labels Always Necessary for Classifier Accuracy Evaluation?},
      booktitle = {Proc. CVPR},
      year      = {2021},
    }

License

MIT