Closed Seong-min-Park closed 4 years ago
Thanks for your interest. If you are referring to the last line of Table 2, please run:
CUDA_VISIBLE_DEVICES=0,1 bash ./scripts-search/search-shape-cifar.sh cifar100 ResNet32 CIFARX 0.6 -1
It will start the TAS search procedure for ResNet32 on CIFAR-100; 0.6 is the expected FLOP ratio.
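For intuition about what the expected FLOP ratio controls, here is a minimal, hypothetical sketch (not the repo's actual code) of the kind of cost penalty TAS-style search uses: the searched architecture is penalized when its expected FLOPs drift outside a tolerance band around the target fraction of the baseline model's FLOPs. The function name, tolerance value, and hinge form are illustrative assumptions.

```python
# Hypothetical sketch of a FLOP-budget penalty for architecture search.
# All names and the tolerance value are illustrative, not from the repo.

def flop_penalty(expected_flops, baseline_flops, target_ratio, tolerance=0.05):
    """Hinge-style penalty: zero inside the tolerance band around the target."""
    ratio = expected_flops / baseline_flops
    if ratio > target_ratio + tolerance:
        # searched model is too expensive
        return ratio - (target_ratio + tolerance)
    if ratio < target_ratio - tolerance:
        # searched model wastes the allowed budget
        return (target_ratio - tolerance) - ratio
    return 0.0

# A model at 70% of baseline FLOPs with a 0.6 target is penalized;
# one at exactly 60% is not.
print(flop_penalty(70e6, 100e6, 0.6))
print(flop_penalty(60e6, 100e6, 0.6))
```

During search, a penalty like this would be added to the classification loss, steering the pruned network toward the requested budget (0.6 or 0.57 in the commands above).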
Thank you for your reply, but I had already done as you suggested. After running three trials and checking the accuracy after KD-training, I got 71.7%, 71.46%, and 71.85%, respectively, which does not match the last line of Table 1.
Would you mind sharing the full log and letting me know the FLOPs of your searched model?
Honestly speaking, I'm not sure of the reason. In my experiments, I usually use 0.57 instead of 0.6 for the expected FLOP ratio. I will give it a try and let you know my new experimental results.
FYI: the config of the searched architecture reported in my paper is at https://github.com/D-X-Y/AutoDL-Projects/tree/master/configs/NeurIPS-2019. Some search logs for resnet110 and resnet164 are at https://drive.google.com/open?id=1h0RPrbXL-79U-CBos1wOst60oNxOkrWS
I have a question regarding the "xblocks" variable. What does the xblocks variable mean in the different configuration files?
@atrah22 it means the number of residual blocks in each stage.
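To make the "blocks per stage" meaning concrete: CIFAR-style ResNets stack 3 stages of basic residual blocks (2 conv layers each) on top of a stem conv and end with a classifier, so depth = 2 * (total blocks) + 2. The helper below is a hypothetical illustration, not code from the repo.

```python
# Hypothetical illustration of how per-stage block counts relate to depth.
# Each basic residual block contains 2 conv layers; the stem conv and the
# final classifier layer add 2 more.

def resnet_depth(xblocks):
    """Depth of a CIFAR-style ResNet given residual blocks per stage."""
    return 2 * sum(xblocks) + 2

print(resnet_depth([5, 5, 5]))     # 32  -> ResNet-32
print(resnet_depth([18, 18, 18]))  # 110 -> ResNet-110
```

A pruned configuration found by the search could then keep fewer blocks in some stages (e.g. something like [3, 4, 5], purely as a made-up example), which is why the searched configs carry their own xblocks values.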
@D-X-Y Thank you. I am trying to prune Inception-v3 models using TAS.
@atrah22 Cool, thanks for applying TAS on other models! If you have any questions, please let me know.
Thanks for your awesome work! After reading your TAS paper, I ran the code for Table 1 (the accuracy on CIFAR-100 when pruning about 40% of the FLOPs of ResNet-32). However, I did not get the accuracy reported in the table. If you don't mind, could you tell me the configuration of the experiment?