jakc4103 / DFQ

PyTorch implementation of Data Free Quantization Through Weight Equalization and Bias Correction.
MIT License

what's the dataset used #3

Closed evi-Genius closed 4 years ago

evi-Genius commented 4 years ago

I have tested the base model of MobileNetV2, but num_correct is always zero for every batch. The val dataset I use is ILSVRC2017_CLS-LOC, and the structure of the val folder is: n02027492 n02123394 n02794156 n03417042 n03920288 n04423845 n13133613 n02028035 n02123597 n02795169 n03424325 n03924679 n04428191 n15075141 n02033041 n02124075 n02797295 n03425413 n03929660 n04429376 ..... Would you have any idea?

jakc4103 commented 4 years ago

I use this to process the ImageNet val set.
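The linked script isn't preserved in this thread, but preprocessing the ILSVRC val set for `torchvision.datasets.ImageFolder` typically means moving the flat val images into one subfolder per synset, as in the directory listing above. A minimal sketch (the labels-file format here is an assumption; the real filename-to-synset mapping comes from the ILSVRC devkit):

```python
import os
import shutil

def sort_val_images(val_dir, labels_file):
    """Move each flat val image into a subfolder named after its synset.

    labels_file: hypothetical text file with one line per image,
    "<filename> <synset>", e.g. "ILSVRC2012_val_00000001.JPEG n01751748".
    """
    with open(labels_file) as f:
        for line in f:
            fname, synset = line.split()
            dst_dir = os.path.join(val_dir, synset)
            os.makedirs(dst_dir, exist_ok=True)
            shutil.move(os.path.join(val_dir, fname),
                        os.path.join(dst_dir, fname))
```

After this step, `ImageFolder(val_dir)` picks up the synset directories as class labels.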

evi-Genius commented 4 years ago

> I use this to process the ImageNet val set.

I just found that if I test the base model with merge_batchnorm (`python main_cls.py`), the acc is zero. When I comment out `model = merge_batchnorm(model, graph, bottoms, targ_layer)`, the result seems correct.

jakc4103 commented 4 years ago

> I use this to process the ImageNet val set.

> I just found that if I test the base model with merge_batchnorm (`python main_cls.py`), the acc is zero. When I comment out `model = merge_batchnorm(model, graph, bottoms, targ_layer)`, the result seems correct.

That's weird. I get correct results both with and without `model = merge_batchnorm(model, graph, bottoms, targ_layer)`, and the results are identical. If you run the FP32 model, it should not influence the inference results.
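The reason merging batchnorm should not change FP32 inference is that BN folding is an exact algebraic rewrite: with `scale = gamma / sqrt(var + eps)`, the folded conv uses `W' = W * scale` and `b' = (b - mean) * scale + beta`. A minimal sketch of this standard identity (not the repo's actual `merge_batchnorm` code):

```python
import torch
import torch.nn as nn

def fold_bn(conv, bn):
    """Fold an eval-mode BatchNorm2d into the preceding Conv2d.

    W' = W * gamma / sqrt(var + eps)
    b' = (b - mean) * gamma / sqrt(var + eps) + beta
    """
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      conv.dilation, conv.groups, bias=True)
    # Scale each output channel's filter; fold mean/beta into the bias.
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    b = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (b - bn.running_mean) * scale + bn.bias.data
    return fused
```

Since the rewrite is exact (up to floating-point rounding), `bn(conv(x))` and `fold_bn(conv, bn)(x)` match; an accuracy drop to zero after merging points to a bug in the folding code rather than to quantization.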

evi-Genius commented 4 years ago

Well, I found the reason: I had edited the merge-bn code, and that caused the wrong result.