tensorflow / models

Models and examples built with TensorFlow

Using mobilenet_v2_140_224 on Windows + OpenCV 4.0.1 fails with an error #6549

Closed JaosonMa closed 4 years ago

JaosonMa commented 5 years ago

I want to use mobilenet_v2_1.4_224 to train on my own data. Since I am going to run the model on a Windows CPU, I used TensorFlow's slim to train it. The training steps look like this.

Fine-tune the last layers:

```shell
CUDA_VISIBLE_DEVICES=1 python3 train_image_classifier.py \
  --train_dir=./data/flowers_2/models \
  --dataset_name=flowers \
  --dataset_split_name=train \
  --dataset_dir=./data/flowers_2 \
  --model_name=mobilenet_v2_140 \
  --checkpoint_path=./data/flowers_2/m_v2_1.4_224/mobilenet_v2_1.4_224.ckpt \
  --checkpoint_exclude_scopes=MobilenetV2/Logits,MobilenetV2/AuxLogits \
  --trainable_scopes=MobilenetV2/Logits,MobilenetV2/AuxLogits \
  --max_number_of_steps=10000 \
  --batch_size=32 \
  --learning_rate=0.01 \
  --learning_rate_decay_type=fixed \
  --save_interval_secs=60 \
  --save_summaries_secs=60 \
  --log_every_n_steps=100 \
  --optimizer=rmsprop \
  --weight_decay=0.00004
```

Then train all layers:

```shell
CUDA_VISIBLE_DEVICES=1 python3 train_image_classifier.py \
  --train_dir=./data/flowers_2/models/all \
  --dataset_name=flowers \
  --dataset_split_name=train \
  --dataset_dir=./data/flowers_2 \
  --model_name=mobilenet_v2_140 \
  --checkpoint_path=./data/flowers_2/models \
  --max_number_of_steps=100000 \
  --batch_size=16 \
  --learning_rate=0.0001 \
  --learning_rate_decay_type=fixed \
  --save_interval_secs=60 \
  --save_summaries_secs=60 \
  --log_every_n_steps=10 \
  --optimizer=rmsprop \
  --weight_decay=0.00004
```
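For context on the `--learning_rate_decay_type=fixed` flag above: with `fixed`, the rate stays at `--learning_rate` for the whole run, while `exponential` (slim's default) decays it over steps. A minimal pure-Python sketch of the two schedules, mirroring `tf.train.exponential_decay`; the decay numbers below are illustrative, not taken from the issue:

```python
def exponential_decay(base_lr, step, decay_steps, decay_rate, staircase=True):
    """Mirror of tf.train.exponential_decay: base_lr * decay_rate^(step/decay_steps)."""
    exponent = step // decay_steps if staircase else step / decay_steps
    return base_lr * (decay_rate ** exponent)

def fixed_lr(base_lr, step):
    """--learning_rate_decay_type=fixed keeps the rate constant at every step."""
    return base_lr

# Illustrative values (assumption): base 0.01, decay 0.94 every 2000 steps.
for step in (0, 2000, 4000):
    print(step, fixed_lr(0.01, step), round(exponential_decay(0.01, step, 2000, 0.94), 6))
```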

Describe the problem

After training finished I got model.ckpt. Then I exported and froze the graph like this:

```shell
PRETRAINED_CHECKPOINT_DIR=./data/flowers_2/models/all
TRAIN_DIR=/tmp/flowers-models/mobilenet_v1_1.0_224
DATASET_DIR=./data/flowers_2
DATASET_NAME=flowers
INFER_DIR=./data/flowers_2/models/all/infer
MODEL_NAME=mobilenet_v2_140

echo "create model.pb start"
CUDA_VISIBLE_DEVICES=0 python3 -u export_inference_graph.py \
  --model_name=${MODEL_NAME} \
  --output_file=${INFER_DIR}/flowers.pb \
  --dataset_name=${DATASET_NAME} \
  --dataset_dir=${DATASET_DIR}

echo "start create freeze pb"
CUDA_VISIBLE_DEVICES=0 python3 -u /usr/local/lib/python3.5/dist-packages/tensorflow/python/tools/freeze_graph.py \
  --input_graph=${INFER_DIR}/flowers.pb \
  --input_checkpoint=${PRETRAINED_CHECKPOINT_DIR}/model.ckpt-44885 \
  --output_graph=${INFER_DIR}/my_freeze.pb \
  --input_binary=True \
  --output_node_names=MobilenetV2/Predictions/Reshape_1
```

This produced flowers.pb and my_freeze.pb.
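What `freeze_graph.py` does, roughly: it replaces each `Variable` node with a `Const` holding the checkpoint value, then prunes every node not reachable backwards from `--output_node_names`. That pruning is why a correctly frozen file can have far fewer nodes than the training graph. A toy sketch of the reachability pass, similar in spirit to `graph_util.extract_sub_graph` (the miniature graph and its node names are hypothetical):

```python
from collections import deque

def extract_sub_graph(node_inputs, output_nodes):
    """Keep only nodes reachable backwards from the outputs (breadth-first walk)."""
    keep, queue = set(), deque(output_nodes)
    while queue:
        name = queue.popleft()
        if name in keep:
            continue
        keep.add(name)
        queue.extend(node_inputs.get(name, []))
    return keep

# Hypothetical miniature graph: training-only ops hang off to the side.
graph = {
    "input": [],
    "conv/weights": [],
    "conv": ["input", "conv/weights"],
    "Predictions/Reshape_1": ["conv"],
    "RMSProp/update": ["conv/weights"],   # optimizer op, unreachable from the output
    "global_step": [],
}
print(sorted(extract_sub_graph(graph, ["Predictions/Reshape_1"])))
# → ['Predictions/Reshape_1', 'conv', 'conv/weights', 'input']
```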

Then, loading my_freeze.pb with the same OpenCV code that works for the stock model, I get an error (screenshot omitted). It looks like some layers were not found by OpenCV 4.0.1, so I compared the node lists of the two .pb files (my_freeze.pb and mobilenet_v2_1.4_224_frozen.pb) with this code:
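One common cause of OpenCV's `readNetFromTensorflow` failing on a hand-frozen graph is leftover training-only nodes (optimizer slots, saver ops, summaries) that the DNN importer does not implement. A small sketch that flags node names which usually indicate such ops; the marker list is a heuristic assumption, not an official OpenCV list, and the sample node names are hypothetical:

```python
# Heuristic name fragments that usually mark training-only ops (assumption, not exhaustive).
TRAINING_MARKERS = ("RMSProp", "Momentum", "Adam", "ExponentialMovingAverage",
                    "global_step", "save/", "summaries/")

def training_only(node_names):
    """Return node names that look like training machinery rather than inference ops."""
    return [n for n in node_names if any(m in n for m in TRAINING_MARKERS)]

# Hypothetical node list for illustration:
nodes = ["MobilenetV2/Conv/weights",
         "MobilenetV2/Conv/weights/RMSProp",
         "global_step",
         "MobilenetV2/Predictions/Reshape_1"]
print(training_only(nodes))
# → ['MobilenetV2/Conv/weights/RMSProp', 'global_step']
```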

```python
import tensorflow as tf

mobile_net = '../data/mobilenet_v2_1.4_224/mobilenet_v2_1.4_224_frozen.pb'
my_mobile_net = '../data/flowers_2/models/all/infer/my_freeze.pb'

def load_node_names(pb_path):
    # Import the frozen GraphDef into a fresh graph and list its node names.
    graph = tf.Graph()
    with graph.as_default():
        with tf.gfile.FastGFile(pb_path, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, name='')
    return [node.name for node in graph.as_graph_def().node]

tensor_name_list = load_node_names(mobile_net)
tensor_name_list_my = load_node_names(my_mobile_net)

print(len(tensor_name_list))
print(len(tensor_name_list_my))
for idx, tensor_name in enumerate(tensor_name_list_my):
    if idx < len(tensor_name_list):
        print(idx + 1, "-->", tensor_name, idx + 1, "-->", tensor_name_list[idx], '\n')
    else:
        print(idx + 1, "-->", tensor_name, '\n')
```

The lengths are not the same (screenshots of the mismatching node lists omitted).
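Rather than printing the two lists side by side, a set difference makes the mismatch obvious: it shows exactly which node names exist only in my_freeze.pb and which exist only in the reference mobilenet_v2_1.4_224_frozen.pb. A small sketch, with hypothetical sample names standing in for the two tensor-name lists:

```python
def diff_node_names(reference, mine):
    """Return (only_in_mine, only_in_reference) as sorted lists of node names."""
    ref, own = set(reference), set(mine)
    return sorted(own - ref), sorted(ref - own)

# Hypothetical samples standing in for tensor_name_list and tensor_name_list_my:
reference = ["input", "MobilenetV2/Conv/Conv2D", "MobilenetV2/Predictions/Reshape_1"]
mine = ["input", "MobilenetV2/Conv/Conv2D",
        "MobilenetV2/Predictions/Reshape_1", "global_step"]

extra, missing = diff_node_names(reference, mine)
print("only in my_freeze.pb:", extra)     # → ['global_step']
print("missing vs reference:", missing)   # → []
```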

So I think something is wrong with my export_inference_graph.py step. Can you tell me how to get a .pb with the same structure as TF's released mobilenet_v2_1.4_224_frozen.pb?

@tensorflowbutler

tensorflowbutler commented 4 years ago

Hi There, We are checking to see if you still need help on this, as this seems to be an old issue. Please update this issue with the latest information, code snippet to reproduce your issue and error you are seeing. If we don't hear from you in the next 7 days, this issue will be closed automatically. If you don't need help on this issue any more, please consider closing this.