Closed Tongzhou0101 closed 2 years ago
Yes, the order matters, as the accuracy predictor is trained with features in this pre-defined order. Note that the accuracy predictor is not intended to predict the final accuracy; we only use its output to rank different sub-networks.
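Since the predictor is only used for ranking, the absolute scores it emits are irrelevant; only their relative order matters. A minimal sketch of that ranking step, using hypothetical predictor scores (the values below are illustrative, not from the real predictor):

```python
import numpy as np

# Hypothetical accuracy-predictor scores for four candidate sub-networks.
# Only the relative order matters, not the absolute values.
scores = np.array([0.72, 0.65, 0.80, 0.70])

# Sort candidates best-first by predicted score.
ranking = np.argsort(-scores)
print(ranking.tolist())  # [2, 0, 3, 1]
```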
Thanks! I tried to get the final accuracy using test_attentive_nas.py, but I ran into some problems with its usage.
Can I replace the dataset with Tiny ImageNet (100,000 images across 200 classes, downsized to 64×64 color images; each class has 500 training images, 50 validation images, and 50 test images) for testing? I loaded attentivenas-a0, took its weights from the pre-trained supernet, and tested it on Tiny ImageNet, but the accuracy is very low (top-1 accuracy < 10%). Does this mean the model cannot handle this subset of ImageNet?
I also tried the original ImageNet dataset you mentioned (~160 GB). Since the training set is large, I commented out the segment that loads train_loader, and the accuracy on the val set was again low (<10%).
Update: I used the original training data and ran test_attentive_nas.py without any modification, and the final accuracy is reasonable.
The following is how you convert a sub-network configuration into accuracy-predictor-compatible inputs, as you provided:

```python
res = [cfg['resolution']]
for k in ['width', 'depth', 'kernel_size', 'expand_ratio']:
    res += cfg[k]
input = np.asarray(res).reshape((1, -1))
```
Does the order ['resolution', 'width', 'depth', 'kernel_size', 'expand_ratio'] matter?
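For reference, that flattening can be run end-to-end. A self-contained sketch with a hypothetical configuration (the specific widths, depths, kernel sizes, and expand ratios below are illustrative, not taken from the released supernet):

```python
import numpy as np

# Hypothetical sub-network configuration. The key order used when
# flattening must match the order the accuracy predictor was trained with.
cfg = {
    'resolution': 224,
    'width': [16, 24, 32, 64, 112, 192, 216, 1792],
    'depth': [1, 3, 3, 4, 3, 5, 1],
    'kernel_size': [3, 5, 5, 5, 3, 5, 3],
    'expand_ratio': [1, 5, 5, 5, 5, 6, 6],
}

# Flatten: resolution first, then each feature group in the fixed order.
res = [cfg['resolution']]
for k in ['width', 'depth', 'kernel_size', 'expand_ratio']:
    res += cfg[k]
inputs = np.asarray(res).reshape((1, -1))
print(inputs.shape)  # (1, 30)
```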