mit-han-lab / mcunet

[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
https://mcunet.mit.edu
MIT License

Evaluation about mcunet-320kB (ImageNet) #11

Closed · andrewwang0612 closed this issue 1 year ago

andrewwang0612 commented 1 year ago

Thanks for the great work. I ran the following line to evaluate the performance of this model:

python eval_torch.py --net_id mcunet-320kB --dataset {imagenet/} --data-dir PATH/TO/DATA/val

But the accuracy only reaches about 11%.

I used this gist to prepare the ImageNet dataset: https://gist.github.com/antoinebrl/7d00d5cb6c95ef194c737392ef7e476a. The validation set follows that setup: it is split into 1000 folders, and each folder holds about 50 images.

Could you tell me the possible reason? Or did I split the ImageNet validation set the wrong way?
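
A minimal sketch for sanity-checking such a split (the ./imagenet/val path is a placeholder for the actual --data-dir; the one-folder-per-class layout expected by torchvision's ImageFolder is assumed):

# Sanity-check an ImageFolder-style ImageNet validation split:
# expect 1000 class folders and 50,000 images in total (about 50 per class).
from pathlib import Path

val_dir = Path("./imagenet/val")  # placeholder: point this at your --data-dir
class_dirs = [d for d in val_dir.iterdir() if d.is_dir()]
counts = [sum(1 for f in d.iterdir() if f.is_file()) for d in class_dirs]

print("classes:", len(class_dirs))                      # should print 1000
print("total images:", sum(counts))                     # should print 50000
print("min/max per class:", min(counts), max(counts))   # roughly 50 each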

tonylins commented 1 year ago

Hi, could you change the argument to --dataset imagenet? (Actually, I think your command as written will raise an error.)

andrewwang0612 commented 1 year ago

Oh, I didn't type it out clearly. I did use the argument --dataset imagenet.

tonylins commented 1 year ago

This is weird. Can you try evaluating the VWW dataset, which I have provided as a zip copy?

andrewwang0612 commented 1 year ago

I ran this command on the VWW dataset you provided:

python eval_torch.py --net_id mcunet-320kB --dataset vww --data-dir ./vww-s256/val --batch-size 4

It still fails, and I haven't modified eval_torch.py.

tonylins commented 1 year ago

Hi, for the VWW dataset you need to test VWW models such as mcunet-320kB-vww, not ImageNet models such as mcunet-320kB. Please use the following command instead:

python eval_torch.py --net_id mcunet-320kB-vww --dataset vww --data-dir ./vww-s256/val --batch-size 4
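
(A minimal Python sketch of the same idea, assuming the model-zoo helper described in the repo README; the net ID is taken from this thread and may differ from the current net_id_list:)

# Sketch: load the VWW variant before evaluating on VWW data (assumed model-zoo API).
from mcunet.model_zoo import build_model

# net_id taken from this thread; check net_id_list in your checkout if it differs
model, image_size, description = build_model(net_id="mcunet-320kB-vww", pretrained=True)
model.eval()
print(description, "expects", image_size, "x", image_size, "inputs")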

andrewwang0612 commented 1 year ago

Thank you!! For the VWW dataset I get the right accuracy, but for the ImageNet dataset I still have the same problem. Or do I have to split 10,000 samples from the ImageNet training set?

tonylins commented 1 year ago

Hi, thanks for confirming. I just pulled the repo and verified that I can reproduce the reported number, so it should be related to the dataset processing. The number of test iterations also does not match. Can you try evaluating other torchvision models to see whether you get the correct results? Please see https://github.com/pytorch/examples/tree/main/imagenet and use the --evaluate flag.
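
A minimal sketch of that sanity check (a stock torchvision ResNet-50 evaluated on the same validation folder; the path and batch size are placeholders, and torchvision >= 0.13 is assumed for the weights API):

# If a pretrained torchvision model also scores ~11% top-1 here,
# the validation-set preparation (not the MCUNet model) is the culprit.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models

weights = models.ResNet50_Weights.IMAGENET1K_V1        # torchvision >= 0.13
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                      # matching resize/crop/normalization

val_set = datasets.ImageFolder("./imagenet/val", transform=preprocess)  # placeholder path
loader = DataLoader(val_set, batch_size=64, num_workers=4)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"top-1 accuracy: {correct / total:.4f}")        # ResNet-50 V1 should reach ~0.76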

andrewwang0612 commented 1 year ago

Thank you so much!! With the ImageNet dataset you provided, I get the correct results. It was indeed related to the data processing.