Open cheerss opened 6 years ago
Actually, I have found that Intel Caffe should be built without MLSL, which is enabled by default, so the Docker image cannot be used directly if I want to do quantization. I modified Makefile.config and rebuilt intel/caffe, and it works! However, I still do not understand why I have to provide the -i (iterations) parameter, as I asked above. Does anyone know?
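For anyone hitting the same problem, the change I made was roughly the following (a sketch only; the exact flag name, here assumed to be USE_MLSL, should be verified against your own copy of Makefile.config):

```
# Makefile.config -- disable Intel MLSL so the quantization/calibration
# flow works (MLSL is enabled by default in Intel Caffe; flag name assumed)
USE_MLSL := 0
```

After changing the flag, do a clean rebuild of intel/caffe.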
Hello @cheerss. The calibration tool needs to know the number of iterations for inference because the appropriate value depends on the specific topology, so the tool asks the end user to provide it.
As for the MLSL issue, it has been fixed in our upcoming release.
I want to test the accuracy loss from 8-bit quantization, so I followed the guide at https://github.com/intel/caffe/wiki/Introduction-of-Accuracy-Calibration-Tool-for-8-Bit-Inference to run my model.
I pulled the Docker image with intel-caffe from Docker Hub and tested it with my deploy model, so the command is
The model is an SSD detection model whose backbone is SqueezeNet. I am sure the model itself is fine; however, when I run it, the following error occurs:
I do not know what "parse_server_affinity" means, so I cannot locate the problem, and Google turns up no useful information about this error. By the way, why do I need to provide the program with the -i (iterations) parameter? Is there any relation between iterations and accuracy loss once the Caffe model has been given? And why is iterations = epoch / batch_size (as stated here)? As far as I know, iterations = epochs * dataset_size / batch_size.
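To make my question concrete, the relation I have in mind can be sketched as follows (the dataset size and batch size below are hypothetical numbers, not from my model):

```python
import math

def iterations(dataset_size: int, batch_size: int, epochs: int = 1) -> int:
    """Forward passes needed to cover the dataset `epochs` times:
    each epoch takes ceil(dataset_size / batch_size) iterations."""
    return epochs * math.ceil(dataset_size / batch_size)

# Hypothetical example: a 50000-image validation set with batch size 64.
print(iterations(50000, 64))      # one epoch  -> 782
print(iterations(50000, 64, 10))  # ten epochs -> 7820
```

That is why iterations = epoch / batch_size looks wrong to me: without the dataset size, the batch size alone cannot determine how many iterations cover an epoch.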
Thank you very much~