adithya-p opened this issue 5 years ago
@wt-huang we are able to reproduce this same issue on multiple TensorFlow versions. Any update on this?
@wt-huang any update on this?
Did someone get around this?
Not that I know of.
```shell
INPUT_TYPE=image_tensor
PIPELINE_CONFIG_PATH=$1
TRAINED_CKPT_PREFIX=$2
EXPORT_DIR=$3
echo ${PIPELINE_CONFIG_PATH} ${TRAINED_CKPT_PREFIX} ${EXPORT_DIR}
python3 models/research/object_detection/export_inference_graph.py \
    --input_type=${INPUT_TYPE} \
    --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
    --trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} \
    --output_directory=${EXPORT_DIR}
```
@Tantael commenting out `override_base_feature_extractor_hyperparams: true` in the pipeline config throws the following error:
```
Traceback (most recent call last):
  File "train.py", line 163, in <module>
    tf.app.run()
  File "/opt/anaconda/envs/newod/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/opt/anaconda/envs/newod/lib/python3.6/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/opt/anaconda/envs/newod/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "train.py", line 159, in main
    worker_job_name, is_chief, FLAGS.train_dir)
  File "/home/sid/models/research/object_detection/trainer.py", line 193, in train
    detection_model = create_model_fn()
  File "/home/sid/models-master/research/object_detection/builders/model_builder.py", line 634, in build
    add_summaries)
  File "/home/sid/models-master/research/object_detection/builders/model_builder.py", line 245, in _build_ssd_model
    is_training=is_training)
  File "/home/sid/models-master/research/object_detection/builders/model_builder.py", line 221, in _build_ssd_feature_extractor
    return feature_extractor_class(**kwargs)
  File "/home/sid/models-master/research/object_detection/models/ssd_inception_v2_feature_extractor.py", line 78, in __init__
    raise ValueError('SSD Inception V2 feature extractor always uses'
ValueError: SSD Inception V2 feature extractor always usesscope returned by `conv_hyperparams_fn` for both the base feature extractor and the additional layers added since there is no arg_scope defined for the base feature extractor.
```
and with the line left in (not commented out), the exported graph still does not match the model zoo graph.
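For reference, the flag under discussion lives inside the `feature_extractor` block of the pipeline config. A minimal sketch of that block (the numeric values here are illustrative placeholders, not taken from the config attached to this issue):

```
model {
  ssd {
    feature_extractor {
      type: "ssd_inception_v2"
      min_depth: 16
      depth_multiplier: 1.0
      # When true, the conv_hyperparams above are also applied to the
      # Inception V2 base network, not just the extra SSD layers.
      override_base_feature_extractor_hyperparams: true
    }
  }
}
```

The error above is raised because the SSD Inception V2 extractor requires this flag to be true; it has no separate arg_scope for the base network.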
Have you checked out the commit I described?
Yes @Tantael
@siddas27 Did you solve the issue??
I want to run inference using C++ sampleUffSSD.cpp in TensorRT. Any solutions to this??
Describe the problem
These are the exact steps I am following: I export my trained ssd_inception_v2 model with the `export_inference_graph.py` command shown earlier in this thread, which generates the frozen-graph form of the exported model. The issue is that the resulting custom_ssd_inception.tar.gz is very different from the ssd_inception_v2_coco_2018_01_28.tar.gz provided in the object detection model zoo when both are visualized with TensorBoard.
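One way to narrow down where the two graphs diverge (not a step from this thread, just a suggestion) is to extract the node names from each frozen graph, for example with TensorFlow's `GraphDef` API, and take a set difference. In the sketch below, `diff_node_names` is an illustrative helper, and the node names are hypothetical placeholders rather than real contents of either graph:

```python
# Sketch: once node names have been extracted from each frozen graph
# (e.g. by parsing the .pb file into a GraphDef and reading n.name for
# each node), a set difference shows which ops exist in only one graph.

def diff_node_names(zoo_nodes, custom_nodes):
    """Return (only_in_zoo, only_in_custom) as sorted lists."""
    zoo, custom = set(zoo_nodes), set(custom_nodes)
    return sorted(zoo - custom), sorted(custom - zoo)

# Placeholder node names for illustration only.
zoo = ["image_tensor", "Preprocessor/sub", "FeatureExtractor/Conv2d_1a"]
custom = ["image_tensor", "Preprocessor/sub", "FeatureExtractor/Conv2d_1a/sep"]

only_zoo, only_custom = diff_node_names(zoo, custom)
print(only_zoo)     # nodes present only in the model-zoo graph
print(only_custom)  # nodes present only in the custom export
```

Diffing the node-name sets of the two frozen graphs would at least show whether the difference is in the feature extractor, the box predictor, or the pre/post-processing subgraphs.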
Source code / logs
Link to the config file: ssd_inception_v2_coco.config
When I go on to convert the frozen graph to UFF using convert_to_uff.py, the output logs are as follows: for the original model from the model zoo, original-output.log; for the custom model, custom-output.log.
Can someone help me identify the exact issue? Where am I going wrong?