
Failed to reproduce frozen inference graph as in models zoo #5640

Open adithya-p opened 5 years ago

adithya-p commented 5 years ago

System information

Describe the problem

These are the exact steps I am following:

Source code / logs

Link to the config file: ssd_inception_v2_coco.config

When the frozen graph is then converted to UFF using convert_to_uff.py, the output logs are as follows: for the original model from the model zoo, original-output.log; for the custom model, custom-output.log.
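For reference, the conversion step roughly corresponds to the Python call below, assuming TensorRT's uff package is installed. The paths and the output node name are placeholders rather than values taken from the attached logs, and the uff_ssd sample normally also passes a preprocessing script that maps unsupported ops to TensorRT plugins.

    # Sketch of the UFF conversion step; paths and node names are placeholders.
    import uff

    uff.from_tensorflow_frozen_model(
        "frozen_inference_graph.pb",      # frozen graph produced by export_inference_graph.py (assumed path)
        output_nodes=["NMS"],             # assumed output node used by the SSD UFF sample
        output_filename="ssd_inception_v2_coco.uff",
    )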

Can someone help me identify the exact issue? Where am I going wrong?

rnekk2 commented 5 years ago

@wt-huang we are able to reproduce the same issue on multiple TensorFlow versions. Any update on this?

mkh-github commented 5 years ago

@wt-huang any update on this?

visignibraem commented 5 years ago

Did anyone get around this?

mkh-github commented 5 years ago

Not that I know of.

Tantael commented 5 years ago
  1. reset the tensorflow/models checkout where you will be running the graph export to commit ae0a9409212d0072938fa60c9f85740bb89ced7e
  2. recompile object detection: "protoc object_detection/protos/*.proto --python_out=."
  3. comment out override_base_feature_extractor_hyperparams: true in the pipeline config, so that the line reads "# override_base_feature_extractor_hyperparams: true" (see the snippet after this list)
  4. export the frozen graph again using the script:

    INPUT_TYPE=image_tensor
    PIPELINE_CONFIG_PATH=$1
    TRAINED_CKPT_PREFIX=$2
    EXPORT_DIR=$3
    echo ${PIPELINE_CONFIG_PATH} ${TRAINED_CKPT_PREFIX} ${EXPORT_DIR}
    python3 models/research/object_detection/export_inference_graph.py \
        --input_type=${INPUT_TYPE} \
        --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
        --trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} \
        --output_directory=${EXPORT_DIR}

  5. execute the uff_ssd example again
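
For clarity, the flag referred to in step 3 lives in the feature_extractor block of the pipeline config. The trimmed sketch below is based on the stock ssd_inception_v2_coco.config; surrounding fields are abbreviated and may differ in a custom config, and only the last (commented-out) line is the one that matters here.

    feature_extractor {
      type: 'ssd_inception_v2'
      min_depth: 16
      depth_multiplier: 1.0
      # ... conv_hyperparams and other fields omitted ...
      # override_base_feature_extractor_hyperparams: true
    }
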
siddas27 commented 4 years ago

@Tantael commenting out override_base_feature_extractor_hyperparams: true in the pipeline config throws the following error:

Traceback (most recent call last):
  File "train.py", line 163, in <module>
    tf.app.run()
  File "/opt/anaconda/envs/newod/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/opt/anaconda/envs/newod/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/opt/anaconda/envs/newod/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "train.py", line 159, in main
    worker_job_name, is_chief, FLAGS.train_dir)
  File "/home/sid/models/research/object_detection/trainer.py", line 193, in train
    detection_model = create_model_fn()
  File "/home/sid/models-master/research/object_detection/builders/model_builder.py", line 634, in build
    add_summaries)
  File "/home/sid/models-master/research/object_detection/builders/model_builder.py", line 245, in _build_ssd_model
    is_training=is_training)
  File "/home/sid/models-master/research/object_detection/builders/model_builder.py", line 221, in _build_ssd_feature_extractor
    return feature_extractor_class(**kwargs)
  File "/home/sid/models-master/research/object_detection/models/ssd_inception_v2_feature_extractor.py", line 78, in __init__
    raise ValueError('SSD Inception V2 feature extractor always uses'
ValueError: SSD Inception V2 feature extractor always usesscope returned by `conv_hyperparams_fn` for both the base feature extractor and the additional layers added since there is no arg_scope defined for the base feature extractor.

and without commenting it out, the frozen graph is still not reproduced correctly.

Tantael commented 4 years ago

@Tantael commenting out "# override_base_feature_extractor_hyperparams: true" in pipeline config, throws the following error

Have you checked out the commit I described?

siddas27 commented 4 years ago

Yes @Tantael

Ram-Godavarthi commented 4 years ago

@siddas27 Did you solve the issue?

I want to run inference using the C++ sampleUffSSD.cpp in TensorRT. Any solutions to this?