Closed: psyhtest closed this issue 4 years ago
First, here's the current CK environment for non-quantized MobileNet:
$ ck cat env --tags=model,tf,mlperf,mobilenet,non-quantized
local:env:63c94fe7bb972f8d
#! /bin/bash
#
# --------------------[ TensorFlow model and weights (mobilenet-v1-1.0-224-2018_08_02) ver. 1_1.0_224_2018_08_02, /home/anton/CK_REPOS/local/env/63c94fe7bb972f8d/env.sh ]--------------------
# Tags: 2018_08_02,64bits,downloaded,frozen,host-os-linux-64,image-classification,mlperf,mobilenet,mobilenet-v1,mobilenet-v1-1.0-224,model,nhwc,non-quantized,python,target-os-linux-64,tensorflowmodel,tf,tflite,v1,v1.1,v1.1.0,v1.1.0.224,v1.1.0.224.2018,v1.1.0.224.2018.8,v1.1.0.224.2018.8.2,weights
#
# CK generated script
if [ "$1" != "1" ]; then if [ "$CK_ENV_TENSORFLOW_MODEL_SET" == "1" ]; then return; fi; fi
# Soft UOA = model.tensorflow.py (439b9f1757f27091) (tensorflowmodel,model,weights,python,image-classification,tf,tflite,nhwc,mobilenet,mobilenet-v1,mobilenet-v1-1.0-224,2018_08_02,mlperf,non-quantized,frozen,downloaded,host-os-linux-64,target-os-linux-64,64bits,v1,v1.1,v1.1.0,v1.1.0.224,v1.1.0.224.2018,v1.1.0.224.2018.8,v1.1.0.224.2018.8.2)
# Host OS UOA = linux-64 (4258b5fe54828a50)
# Target OS UOA = linux-64 (4258b5fe54828a50)
# Target OS bits = 64
# Tool version = 1_1.0_224_2018_08_02
# Tool split version = [1, 1, 0, 224, 2018, 8, 2]
export CK_ENV_TENSORFLOW_MODEL_IMAGE_HEIGHT=224
export CK_ENV_TENSORFLOW_MODEL_IMAGE_WIDTH=224
export CK_ENV_TENSORFLOW_MODEL_INPUT_LAYER_NAME=input
export CK_ENV_TENSORFLOW_MODEL_MOBILENET_MULTIPLIER=1.0
export CK_ENV_TENSORFLOW_MODEL_MOBILENET_RESOLUTION=224
export CK_ENV_TENSORFLOW_MODEL_MOBILENET_VERSION=1
export CK_ENV_TENSORFLOW_MODEL_MODULE=/home/anton/CK_TOOLS/model-tf-mlperf-mobilenet-downloaded/mobilenet-model.py
export CK_ENV_TENSORFLOW_MODEL_NORMALIZE_DATA=YES
export CK_ENV_TENSORFLOW_MODEL_OUTPUT_LAYER_NAME=MobilenetV1/Predictions/Reshape_1
export CK_ENV_TENSORFLOW_MODEL_ROOT=/home/anton/CK_TOOLS/model-tf-mlperf-mobilenet-downloaded
export CK_ENV_TENSORFLOW_MODEL_SUBTRACT_MEAN=0
export CK_ENV_TENSORFLOW_MODEL_TFLITE_FILENAME=mobilenet_v1_1.0_224.tflite
export CK_ENV_TENSORFLOW_MODEL_TFLITE_FILEPATH=/home/anton/CK_TOOLS/model-tf-mlperf-mobilenet-downloaded/mobilenet_v1_1.0_224.tflite
export CK_ENV_TENSORFLOW_MODEL_TF_FROZEN_FILENAME=mobilenet_v1_1.0_224_frozen.pb
export CK_ENV_TENSORFLOW_MODEL_TF_FROZEN_FILEPATH=/home/anton/CK_TOOLS/model-tf-mlperf-mobilenet-downloaded/mobilenet_v1_1.0_224_frozen.pb
export CK_ENV_TENSORFLOW_MODEL_WEIGHTS=/home/anton/CK_TOOLS/model-tf-mlperf-mobilenet-downloaded/mobilenet_v1_1.0_224.ckpt
export CK_ENV_TENSORFLOW_MODEL_WEIGHTS_ARE_CHECKPOINTS=YES
export ML_MODEL_DATA_LAYOUT=NHWC
export CK_ENV_TENSORFLOW_MODEL_SET=1
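The exported variables above are how CK client programs discover the model's layer names and preprocessing requirements. A minimal sketch of reading them (the variable names come from the env dump above; the helper function itself is hypothetical):

```python
import os

def model_settings_from_env(env=os.environ):
    """Collect the model preprocessing settings exported by a CK env script."""
    return {
        "height": int(env["CK_ENV_TENSORFLOW_MODEL_IMAGE_HEIGHT"]),
        "width": int(env["CK_ENV_TENSORFLOW_MODEL_IMAGE_WIDTH"]),
        "input_layer": env["CK_ENV_TENSORFLOW_MODEL_INPUT_LAYER_NAME"],
        "output_layer": env["CK_ENV_TENSORFLOW_MODEL_OUTPUT_LAYER_NAME"],
        # NORMALIZE_DATA is a YES/NO flag; SUBTRACT_MEAN is "0"/"1".
        "normalize": env["CK_ENV_TENSORFLOW_MODEL_NORMALIZE_DATA"] == "YES",
        "subtract_mean": env["CK_ENV_TENSORFLOW_MODEL_SUBTRACT_MEAN"] == "1",
    }

# Values copied from the MobileNet env dump above.
settings = model_settings_from_env({
    "CK_ENV_TENSORFLOW_MODEL_IMAGE_HEIGHT": "224",
    "CK_ENV_TENSORFLOW_MODEL_IMAGE_WIDTH": "224",
    "CK_ENV_TENSORFLOW_MODEL_INPUT_LAYER_NAME": "input",
    "CK_ENV_TENSORFLOW_MODEL_OUTPUT_LAYER_NAME": "MobilenetV1/Predictions/Reshape_1",
    "CK_ENV_TENSORFLOW_MODEL_NORMALIZE_DATA": "YES",
    "CK_ENV_TENSORFLOW_MODEL_SUBTRACT_MEAN": "0",
})
print(settings["normalize"], settings["subtract_mean"])  # True False
```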
mo_params:

ResNet (resnet_v1.5_50.yml):
  data_type: FP32 (or FP16)
  input_shape: (1, 224, 224, 3)
  output: softmax_tensor

MobileNet (mobilenet_v1_cal_list_1.yml):
  data_type: FP16
  input_shape: (1, 224, 224, 3)

For MobileNet, data_type is FP16, so we should make it customizable (FP32 by default), and output should be MobilenetV1/Predictions/Reshape_1.
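The mo_params map directly onto Model Optimizer command-line flags (--data_type, --input_shape, --output in the OpenVINO MO CLI). A hypothetical sketch of assembling the invocation for MobileNet, with data_type customizable and defaulting to FP32 as proposed above:

```python
def mo_args(frozen_pb, data_type="FP32",
            input_shape=(1, 224, 224, 3),
            output="MobilenetV1/Predictions/Reshape_1"):
    """Build the argument list for OpenVINO's mo.py from mo_params."""
    return [
        "mo.py",
        "--input_model", frozen_pb,
        "--data_type", data_type,  # customizable; FP32 by default
        "--input_shape", "[{}]".format(",".join(map(str, input_shape))),
        "--output", output,
    ]

args = mo_args("mobilenet_v1_1.0_224_frozen.pb")
print(" ".join(args))
```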
preprocessing:

ResNet (resnet_v1.5_50.yml):
- type: resize
  size: 256
  aspect_ratio_scale: greater
- type: crop
  size: 224
- type: normalization
  mean: 123, 117, 104
- type: bgr_to_rgb

MobileNet (mobilenet_v1_cal_list_1.yml):
- type: resize
  size: 256
- type: crop
  size: 224
- type: normalization
  mean: 127.5, 127.5, 127.5
  std: 127.5

For MobileNet: what about aspect_ratio_scale? bgr_to_rgb is required; maybe it can be skipped if we skip reversing the channels during conversion? For comparison, the CK environment has:

export CK_ENV_TENSORFLOW_MODEL_NORMALIZE_DATA=YES
...
export CK_ENV_TENSORFLOW_MODEL_SUBTRACT_MEAN=0

For ResNet, which does not set std, what should be the default?

metrics:
- name: accuracy @ top1
  type: accuracy
  top_k: 1
- name: accuracy @ top5
  type: accuracy
  top_k: 5
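The two normalization steps do different things: ResNet subtracts per-channel means (123, 117, 104), while MobileNet maps pixels into [-1, 1] via (x - 127.5) / 127.5, which matches NORMALIZE_DATA=YES with SUBTRACT_MEAN=0 in the env dump above. A single-pixel, pure-Python sketch for illustration (not how the harness actually implements it):

```python
RESNET_MEAN = (123.0, 117.0, 104.0)      # per-channel mean subtraction
MOBILENET_MEAN = MOBILENET_STD = 127.5   # maps [0, 255] to [-1, 1]

def preprocess_resnet(pixel):
    """pixel: (R, G, B) in [0, 255] -> mean-subtracted values."""
    return tuple(p - m for p, m in zip(pixel, RESNET_MEAN))

def preprocess_mobilenet(pixel):
    """pixel: (R, G, B) in [0, 255] -> values in [-1, 1]."""
    return tuple((p - MOBILENET_MEAN) / MOBILENET_STD for p in pixel)

print(preprocess_resnet((255, 117, 0)))       # (132.0, 0.0, -104.0)
print(preprocess_mobilenet((255, 127.5, 0)))  # (1.0, 0.0, -1.0)
```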
By adding support for MobileNet, I've broken ResNet a bit :)
$ ck run program:mlperf-inference-v0.5 --cmd_key=image-classification --repetitions=1 --skip_print_timers \
--env.CK_LOADGEN_SCENARIO=Offline --env.CK_LOADGEN_MODE=Accuracy --env.CK_LOADGEN_DATASET_SIZE=500 \
--env.CK_OPENVINO_NTHREADS=$NPROCS --env.CK_OPENVINO_NSTREAMS=$NPROCS --env.CK_OPENVINO_NIREQ=$NPROCS \
--env.CK_OPENVINO_MODEL_NAME=mobilenet
...
accuracy=68.600%, good=343, total=500
$ ck run program:mlperf-inference-v0.5 --cmd_key=image-classification --repetitions=1 --skip_print_timers \
--env.CK_LOADGEN_SCENARIO=Offline --env.CK_LOADGEN_MODE=Accuracy --env.CK_LOADGEN_DATASET_SIZE=500 \
--env.CK_OPENVINO_NTHREADS=$NPROCS --env.CK_OPENVINO_NSTREAMS=$NPROCS --env.CK_OPENVINO_NIREQ=$NPROCS \
--env.CK_OPENVINO_MODEL_NAME=resnet50
...
accuracy=74.200%, good=371, total=500
Everything's fixed and runs well:
$ ck run program:mlperf-inference-v0.5 --cmd_key=image-classification --skip_print_timers \
--env.CK_LOADGEN_SCENARIO=Offline --env.CK_LOADGEN_MODE=Accuracy --env.CK_LOADGEN_DATASET_SIZE=50000 \
--env.CK_OPENVINO_NTHREADS=$NPROCS --env.CK_OPENVINO_NSTREAMS=$NPROCS --env.CK_OPENVINO_NIREQ=$NPROCS \
--env.CK_OPENVINO_MODEL_NAME=mobilenet
...
accuracy=71.466%, good=35733, total=50000
$ ck run program:mlperf-inference-v0.5 --cmd_key=image-classification --skip_print_timers \
--env.CK_LOADGEN_SCENARIO=Offline --env.CK_LOADGEN_MODE=Accuracy --env.CK_LOADGEN_DATASET_SIZE=50000 \
--env.CK_OPENVINO_NTHREADS=$NPROCS --env.CK_OPENVINO_NSTREAMS=$NPROCS --env.CK_OPENVINO_NIREQ=$NPROCS \
--env.CK_OPENVINO_MODEL_NAME=resnet50
...
accuracy=76.268%, good=38134, total=50000
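As a sanity check, the reported accuracy is just good/total: 35733/50000 = 71.466% for MobileNet and 38134/50000 = 76.268% for ResNet. A quick check of the arithmetic (the formatting mirrors the harness output above; the helper name is made up):

```python
def accuracy_line(good, total):
    """Format accuracy the way the harness prints it: percent, good, total."""
    return "accuracy={:.3f}%, good={}, total={}".format(
        100.0 * good / total, good, total)

print(accuracy_line(35733, 50000))  # accuracy=71.466%, good=35733, total=50000
print(accuracy_line(38134, 50000))  # accuracy=76.268%, good=38134, total=50000
```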
Converting MobileNet should be similar to converting ResNet (https://github.com/ctuning/ck-openvino/issues/6): it should be just a matter of configuration. However, several differences exist between Intel's sample configuration files resnet_v1.5_50.yml and mobilenet_v1_cal_list_1.yml, as detailed above.