Open loweew opened 7 years ago
@pracheer
Still hitting this issue. Any guidance?
Hi @loweew
Can you provide more details about your fine-tuned model? For instance, what command did you use to fine-tune? I'm assuming that when you say you used fine-tune.py, you mean this script.
Also, will it be possible to share your fine-tuned model?
Also, while we're at it, are you fine-tuning this model: http://data.mxnet.io/models/imagenet-11k-place365-ch/ (resnet-50)? That way I can try reproducing this on my end.
python fine-tune.py --pretrained-model imagenet11k-place365ch-resnet-50 --data-train train.rec --data-val val.rec --batch-size 360 --num-classes 227 --num-examples 115000 --data-nthreads 24 --lr 0.001 --lr-factor 0.1 --lr-step-epochs 30,60,90 --mom 0.9 --wd 0.0001 --disp-batches 500 --top-k 5 --max-random-h 30 --max-random-s 30 --max-random-aspect-ratio 0.30 --max-random-rotate-angle 90 --max-random-shear-ratio 0.15 --max-random-scale 2 --num-epochs 150 --gpus '0,1,2,3,4,5,6,7' --model-prefix ./imagenet11k-places-resnet-50
Unfortunately, I cannot provide the model. I would imagine fine-tuning on any data set for a few epochs where number of classes != 1000 should be sufficient to repro the issue, but I haven't yet done that.
Any update here? Do you need me to provide a model for you to move forward?
@apache/mxnet-committers: This issue has been inactive for the past 90 days. It has no label and needs triage.
For general "how-to" questions, our user forum (and Chinese version) is a good place to get help.
Sorry for the delay, @loweew. A model would be great!
@loweew are you still seeing this issue? Please share the model for further debugging if the issue still occurs.
Hi @loweew, I tried to reproduce this issue on this model: http://data.mxnet.io/models/imagenet-11k-place365-ch/ (resnet-50), but I was not able to reproduce it. Your exact model and scripts would have helped debug the issue further. For now, I am closing this as I cannot reproduce it. Please feel free to reopen if closed in error or if you still encounter this issue. :) Thanks!
The right way to do pre-processing is bgrConverter(image) * scale + bias. So, for example, if your params are '{"is_bgr":True,"red_bias":-128,"blue_bias":-128,"green_bias":-128,"image_scale":0.0078125,"image_input_names":"data"}', you are computing image * (1/128.0) + (-128). If you set is_bgr = False, you need to swap the bias order: pass the real red bias as blue_bias, and likewise the real blue bias as red_bias.
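To make the ordering concrete, here is a small numpy sketch of the formula described above (out = channel-swapped image * scale + per-channel bias). This is only an illustration of the arithmetic, not CoreML's actual implementation, and the function name is made up for the example:

```python
import numpy as np

# Sketch of the pre-processing order described above:
# out = channel_swapped(image) * image_scale + per_channel_bias.
def apply_preprocessing(image_chw, image_scale, red_bias, green_bias,
                        blue_bias, is_bgr=False):
    # image_chw is channel-first (3, H, W); swap the R and B planes for BGR.
    img = image_chw[::-1] if is_bgr else image_chw
    # Bias order follows the channel order after the (optional) swap.
    biases = ([blue_bias, green_bias, red_bias] if is_bgr
              else [red_bias, green_bias, blue_bias])
    bias = np.array(biases).reshape(3, 1, 1)
    return img * image_scale + bias

# With the example params above (scale = 1/128, all biases = -128),
# a pixel value v becomes v / 128.0 - 128.
img = np.full((3, 2, 2), 255.0)
out = apply_preprocessing(img, 0.0078125, -128, -128, -128, is_bgr=True)
```

Because all three biases are equal in this example, the BGR swap has no visible effect; with distinct per-channel biases it would matter.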
mxnet_coreml_converter.py --model-prefix='squeezenet_v1.1' --epoch=0 --input-shape='{"data":"3,224,224"}' --pre-processing-arguments='{"red_bias":127,"blue_bias":117,"green_bias":103,"image_input_names":"data"}' --output-file="squeezenet_v11.mlmodel"

I find that if you use this command, the transform params do not work: all of them end up as zeros (is_bgr, the biases, and the scale). You need to change the coremltools source, in the set_pre_processing_parameters() function in coremltools/models/neural_network/builder.py:

if not isinstance(is_bgr, dict):
    is_bgr = dict.fromkeys([image_input_names], is_bgr)
if not isinstance(red_bias, dict):
    #red_bias = dict.fromkeys(image_input_names, red_bias)
    red_bias = dict.fromkeys([image_input_names], red_bias)
if not isinstance(blue_bias, dict):
    #blue_bias = dict.fromkeys(image_input_names, blue_bias)
    blue_bias = dict.fromkeys([image_input_names], blue_bias)
if not isinstance(green_bias, dict):
    #green_bias = dict.fromkeys(image_input_names, green_bias)
    green_bias = dict.fromkeys([image_input_names], green_bias)
if not isinstance(gray_bias, dict):
    #gray_bias = dict.fromkeys(image_input_names, gray_bias)
    gray_bias = dict.fromkeys([image_input_names], gray_bias)
if not isinstance(image_scale, dict):
    #image_scale = dict.fromkeys(image_input_names, image_scale)
    image_scale = dict.fromkeys([image_input_names], image_scale)

All you need to do is add the []; otherwise the dict keys are wrong and the transform comes out all zeros. And by the way, the bias needs to be negative if you mean image - bias.
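The failure mode is easy to see in isolation: dict.fromkeys iterates its first argument, so passing the input name as a bare string splits it into single characters, and a later lookup by the full input name finds nothing. A minimal sketch:

```python
# dict.fromkeys iterates its first argument. With a bare string,
# the keys become individual characters rather than the input name.
image_input_names = "data"

wrong = dict.fromkeys(image_input_names, 123.68)    # keys: 'd', 'a', 't'
right = dict.fromkeys([image_input_names], 123.68)  # key: 'data'

# Looking up the full input name only works in the wrapped version,
# which is presumably why the un-patched code ends up with all-zero
# pre-processing parameters.
print("data" in wrong)  # False
print("data" in right)  # True
```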
I have successfully converted the squeezenet and resnet50 models from the examples to CoreML using mxnet-to-coreml. However, when converting a model after fine-tuning using my own data, the predictions are seemingly random. The model is fine-tuned using finetune.py from the examples. The model performs well prior to conversion to CoreML. After conversion to CoreML, the model predicts the same probabilities regardless of the image. The pre-trained model I'm using for fine-tuning is the imagenet11k-places resnet50 model.
I've tried:
subtracting channel biases as is performed during fine-tuning. (--pre-processing-arguments='{"image_input_names":"data","red_bias":123.68,"blue_bias":103.939,"green_bias":116.779}')
subtracting channel biases and scaling 1/255 (--pre-processing-arguments='{"image_input_names":"data","red_bias":123.68,"blue_bias":103.939,"green_bias":116.779, "image_scale":0.00392156862}')
subtracting scaled channel biases, because I was unsure when CoreML applied the scaling (--pre-processing-arguments='{"image_input_names":["data"],"red_bias":0.485019,"blue_bias":0.407603,"green_bias":0.457956, "image_scale":0.00392156862}')
not scaling or biasing channels
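For what it's worth, here is a hypothetical numpy comparison of what a single pixel looks like under these variants versus the mean subtraction used during fine-tuning. The mean values are the biases tried above, and the sign convention assumes CoreML computes pixel * scale + bias (as discussed elsewhere in this thread):

```python
import numpy as np

# One RGB pixel and the per-channel means tried above (R, G, B order).
pixel = np.array([200.0, 150.0, 100.0])
mean = np.array([123.68, 116.779, 103.939])

# What the network saw during fine-tuning: mean subtraction, no scaling.
train_input = pixel - mean

# If CoreML computes pixel * scale + bias, then passing the means as
# POSITIVE biases adds them instead of subtracting them:
positive_bias = pixel * 1.0 + mean

# And adding image_scale = 1/255 shrinks the pixel before the bias,
# pushing the input far from the training distribution either way:
scaled = pixel * (1.0 / 255.0) + (-mean)
```

Under this assumption, none of the tried combinations reproduce the fine-tuning input, which would be consistent with the converted model emitting the same probabilities for every image.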
Has anyone successfully converted a model after fine-tuning using a different data set? Any ideas would be greatly appreciated. I'm fairly certain there's something simple that I'm overlooking...
I've also examined the converted model using Model_pb2 to make sure the preprocessing flags are being respected, and they appear to be:
print(model.neuralNetworkClassifier.preprocessing)
[featureName: "data"
 scaler {
   channelScale: 0.00380000006407
   blueBias: 103.939
   greenBias: 116.779
   redBias: 123.68
 }
]
Here's the entire command line:
mxnet_coreml_converter.py --model-prefix='imagenet11k-places-resnet-50' --epoch=47 --input-shape='{"data":"3,224,224"}' --mode=classifier --class-labels myclass_labels.txt --output-file="mxnetimagenet11kplaces50resnet.mlmodel" --pre-processing-arguments='{"image_input_names":"data","red_bias":123.68,"blue_bias":103.939,"green_bias":116.779, "image_scale":0.00392156862}'