tensorflow / models

Models and examples built with TensorFlow

tensorflow object detection 1.15 to 2.0 --> "RuntimeError: tf.placeholder() is not compatible with eager execution" #8844

Closed anmol101093AIML closed 4 years ago

anmol101093AIML commented 4 years ago

Dear,

Recently I was migrating to TensorFlow Object Detection API version 2.0 from 1.15.0. I am able to train and evaluate the model, but while creating a frozen model using export_inference_graph.py I get the error below:

```
Traceback (most recent call last):
  File "export_inference_graph.py", line 206, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "export_inference_graph.py", line 202, in main
    side_input_types=side_input_types)
  File "/usr/local/lib/python3.6/dist-packages/object_detection/exporter.py", line 625, in export_inference_graph
    side_input_types=side_input_types)
  File "/usr/local/lib/python3.6/dist-packages/object_detection/exporter.py", line 512, in _export_inference_graph
    side_input_types=side_input_types)
  File "/usr/local/lib/python3.6/dist-packages/object_detection/exporter.py", line 458, in build_detection_graph
    placeholder_args)
  File "/usr/local/lib/python3.6/dist-packages/object_detection/exporter.py", line 187, in _image_tensor_input_placeholder
    dtype=tf.uint8, shape=input_shape, name='image_tensor')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/array_ops.py", line 3023, in placeholder
    raise RuntimeError("tf.placeholder() is not compatible with "
RuntimeError: tf.placeholder() is not compatible with eager execution.
```

anmol101093AIML commented 4 years ago

Fixed the raised issue by disabling eager execution; now getting a new error.
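For reference, a minimal sketch of that change, assuming the call is added near the top of export_inference_graph.py:

```python
import tensorflow as tf

# Fall back to TF1-style graph mode so tf.placeholder() stops raising.
tf.compat.v1.disable_eager_execution()
```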

File "export_inference_graph.py", line 206, in tf.app.run() File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run _run_main(main, args) File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main sys.exit(main(argv)) File "export_inference_graph.py", line 202, in main side_input_types=side_input_types) File "/usr/local/lib/python3.6/dist-packages/object_detection/exporter.py", line 625, in export_inference_graph side_input_types=side_input_types) File "/usr/local/lib/python3.6/dist-packages/object_detection/exporter.py", line 514, in _export_inference_graph profile_inference_graph(tf.get_default_graph()) File "/usr/local/lib/python3.6/dist-packages/object_detection/exporter.py", line 642, in profile_inference_graph contrib_tfprof.model_analyzer.TRAINABLE_VARS_PARAMS_STAT_OPTIONS) NameError: name 'contrib_tfprof' is not defined

I did some research and came to know that the contrib module is not supported by TensorFlow 2.0. I am stuck now: how can I save my trained object detection model on 2.0 and use it for prediction? Please help!

MaxiLibrandi commented 4 years ago

Did you solve it? I'm stuck at the same point now!

syiming commented 4 years ago

Hi, can I ask which binary you are using? For TF2, model_main_tf2.py should be used. Thanks!
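For reference, a typical training invocation looks like this (flag names as in the Object Detection API's model_main_tf2.py; paths are illustrative):

```
python model_main_tf2.py \
    --pipeline_config_path=path/to/pipeline.config \
    --model_dir=path/to/model_dir \
    --alsologtostderr
```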

MaxiLibrandi commented 4 years ago

The error is raised when you run the export_inference_graph.py file.

03vmate commented 4 years ago

The issue is also present in export_tflite_ssd_graph.py / export_tflite_ssd_graph_lib.py.

03vmate commented 4 years ago

Disabling eager execution in export_tflite_ssd_graph.py results in a similar error:

```
  File "/home/mate/venv/tensorflow2.2/lib/python3.7/site-packages/object_detection/exporter.py", line 145, in rewrite_nn_resize_op
    while remove_nn():
  File "/home/mate/venv/tensorflow2.2/lib/python3.7/site-packages/object_detection/exporter.py", line 100, in remove_nn
    input_pattern = graph_matcher.OpTypePattern(
NameError: name 'graph_matcher' is not defined
```

pkulzc commented 4 years ago

Placeholders and frozen graphs are all TF1 features. In TF2 you should use a SavedModel. See exporter_main_v2.py.
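For reference, a typical export invocation looks like this (the same flags appear in later comments in this thread; paths are illustrative):

```
python exporter_main_v2.py \
    --input_type image_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_dir path/to/checkpoint_dir \
    --output_directory path/to/exported_model
```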

hasansalimkanmaz commented 4 years ago

Placeholders and frozen graphs are all TF1 features. In TF2 you should use a SavedModel. See exporter_main_v2.py.

I would like to see this info on this page under guides. https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2.md

Dhivya-rav commented 4 years ago

I have the same issue with export_inference_graph.py. The model was trained using model_main_tf2.py. When I disable eager execution, the new error is the same as the one described by the original author (NameError: name 'contrib_tfprof' is not defined).

I tried using exporter_main_v2.py; this raises an assertion error.

hasansalimkanmaz commented 4 years ago

I also got an error after using exporter_main_v2.py. Please see the issue at the link below.

https://github.com/tensorflow/models/issues/8886#issuecomment-659921958

change2014 commented 4 years ago

Use the exporter_main_v2.py script only for exporting models. By default it exports the most recent checkpoint present in checkpoint_dir. If you want it to export a particular checkpoint, change model_checkpoint_path in the checkpoint file, as shown below.
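For reference, the checkpoint file in a TF2 training directory is a small text file along these lines (checkpoint names are illustrative); pointing model_checkpoint_path at an older entry makes the exporter pick that checkpoint:

```
model_checkpoint_path: "ckpt-24"
all_model_checkpoint_paths: "ckpt-23"
all_model_checkpoint_paths: "ckpt-24"
```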

vis7 commented 4 years ago

I have the same issue with export_inference_graph.py. The model was trained using model_main_tf2.py. When I disable eager execution, the new error is the same as the one described by the original author (NameError: name 'contrib_tfprof' is not defined).

I tried using exporter_main_v2.py; this raises an assertion error.

I had the same error.

When we export the model in TF2, we need to pass the path of the checkpoint directory, not the path of a checkpoint file, as trained_checkpoint_dir:

```
python .\exporter_main_v2.py --input_type image_tensor --pipeline_config_path .\models\my_efficientdet_d1\pipeline.config --trained_checkpoint_dir .\models\my_efficientdet_d1\ --output_directory .\exported-models\my_model
```

This still does not give me a model in output_directory, and I am unable to find the reason why.

aLLLiyyy commented 4 years ago

I came across the same error when I was using this file, but when I use exporter_main_v2.py it seems okay.

savadortades commented 3 years ago

If only exporter_main_v2.py should be used with TF2, why are they not making an official statement on their webpage? Besides that, does it mean it won't be possible to get an inference frozen graph .pb file with TensorFlow 2 anymore? If so, how did they get the frozen graphs in the model zoo with TF2? Did anyone find anything on this issue? I'm also stuck at the same point with export_inference_graph.py.

MattVoge commented 3 years ago

exporter_main_v2 does NOT allow exporting a frozen graph. The output is always a SavedModel .pb proto plus a checkpoint. If I want to export a frozen graph that includes all the weights of the most recent checkpoint, how should I proceed, given that export_inference_graph obviously isn't working?
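A workaround often suggested outside this thread (not confirmed here, and later comments report that such graphs may not work as expected for detection models) is to freeze the SavedModel's serving signature with convert_variables_to_constants_v2. A minimal sketch, with all paths illustrative:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

# Load the SavedModel produced by exporter_main_v2.py and grab its
# default serving signature as a concrete function.
loaded = tf.saved_model.load("exported_model/saved_model")
concrete_fn = loaded.signatures["serving_default"]

# Inline the variables as constants, yielding a frozen GraphDef.
frozen_fn = convert_variables_to_constants_v2(concrete_fn)
tf.io.write_graph(
    frozen_fn.graph.as_graph_def(),
    logdir="exported_model",
    name="frozen_graph.pb",
    as_text=False,
)
```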

devloper13 commented 3 years ago

Placeholders and frozen graphs are all TF1 features. In TF2 you should use a SavedModel. See exporter_main_v2.py.

Using the SavedModel for inference also causes an error. Is there a TF2-specific inference script?

shamilam commented 3 years ago

Using the SavedModel for inference also causes an error. Is there a TF2-specific inference script?

I tried this and it works! https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/auto_examples/plot_object_detection_saved_model.html
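The core of that tutorial is small. A minimal sketch of TF2 SavedModel inference, with the path and input shape illustrative:

```python
import numpy as np
import tensorflow as tf

# Load the SavedModel exported by exporter_main_v2.py.
detect_fn = tf.saved_model.load("exported_model/saved_model")

# The exported detection function expects a batched uint8 image tensor.
image = np.zeros((1, 320, 320, 3), dtype=np.uint8)
detections = detect_fn(tf.convert_to_tensor(image))
print(int(detections["num_detections"][0]))
```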

JJonahJson commented 3 years ago

Placeholders and frozen graphs are all TF1 features. In TF2 you should use a SavedModel. See exporter_main_v2.py.

@pkulzc The problem is that currently the TF2 installation for C++ is far from working, and the OpenCV method readNetFromTensorFlow requires the frozen inference graph and the .pbtxt file that TF1 provided. There are a lot of workarounds suggested by many users, but none of them works, at least for the Mask R-CNN Inception ResNet V2 model. Do you have any suggestions on how to proceed? Do you have any insight into future releases of TensorFlow that will solve these problems?
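For context, this is the TF1-era OpenCV workflow the comment refers to, as a minimal sketch with illustrative file names (exactly the artifacts a TF2 SavedModel export no longer produces):

```python
import cv2

# OpenCV's DNN module loads a TF1 frozen graph plus a text graph description.
net = cv2.dnn.readNetFromTensorFlow("frozen_inference_graph.pb", "graph.pbtxt")
```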

AI-P-K commented 3 years ago

Did you guys succeed in solving this problem? I have trained Mask R-CNN with the TF2 Object Detection API and now I want to do inference with the OpenCV method readNetFromTensorFlow. I get both of the errors stated above: 'RuntimeError: tf.placeholder() is not compatible with eager execution', and after disabling eager execution, 'NameError: name 'contrib_tfprof' is not defined'. Has anyone resolved the issue?

diego200052 commented 2 years ago

I have trained a custom model with the TF2 Object Detection API (using this guide). I would like to obtain the frozen inference graph. I tried running the script export_inference_graph.py, but I get the same error as the other users describe:

```
Traceback (most recent call last):
  File "export_inference_graph.py", line 206, in <module>
    tf.app.run()
  File "anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 303, in run
    _run_main(main, args)
  File "anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "export_inference_graph.py", line 194, in main
    exporter.export_inference_graph(
  File "anaconda3\envs\tensorflow\lib\site-packages\object_detection\exporter.py", line 611, in export_inference_graph
    _export_inference_graph(
  File "anaconda3\envs\tensorflow\lib\site-packages\object_detection\exporter.py", line 503, in _export_inference_graph
    outputs, placeholder_tensor_dict = build_detection_graph(
  File "anaconda3\envs\tensorflow\lib\site-packages\object_detection\exporter.py", line 457, in build_detection_graph
    placeholder_tensor, input_tensors = input_placeholder_fn_map[input_type](
  File "anaconda3\envs\tensorflow\lib\site-packages\object_detection\exporter.py", line 186, in _image_tensor_input_placeholder
    input_tensor = tf.placeholder(
  File "anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3286, in placeholder
    raise RuntimeError("tf.placeholder() is not compatible with "
RuntimeError: tf.placeholder() is not compatible with eager execution.
```

With this line of code:

```python
tf.compat.v1.disable_eager_execution()
```

inside that script, the error now is:

```
Traceback (most recent call last):
  File "export_inference_graph.py", line 207, in <module>
    tf.app.run()
  File "anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 303, in run
    _run_main(main, args)
  File "anaconda3\envs\tensorflow\lib\site-packages\absl\app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "export_inference_graph.py", line 195, in main
    exporter.export_inference_graph(
  File "anaconda3\envs\tensorflow\lib\site-packages\object_detection\exporter.py", line 611, in export_inference_graph
    _export_inference_graph(
  File "anaconda3\envs\tensorflow\lib\site-packages\object_detection\exporter.py", line 514, in _export_inference_graph
    profile_inference_graph(tf.get_default_graph())
  File "anaconda3\envs\tensorflow\lib\site-packages\object_detection\exporter.py", line 642, in profile_inference_graph
    contrib_tfprof.model_analyzer.TRAINABLE_VARS_PARAMS_STAT_OPTIONS)
NameError: name 'contrib_tfprof' is not defined
```

AI-P-K commented 2 years ago

@diego200052 your purpose is to make the inference faster, right? I can't believe that they have not solved this issue yet :(

diego200052 commented 2 years ago

@diego200052 your purpose is to make the inference faster, right? I can't believe that they have not solved this issue yet :(

Exactly. The only solution that I found is to use TF1 to train the model and then get the frozen graph. Here is a related issue.

AI-P-K commented 2 years ago

Or you could just dump TF and go to PyTorch, as I did. Easier, faster, better :)

IamSierraCharlie commented 2 years ago

I'm in the same boat. After hours of investigation, I've arrived at exactly the same point as others here. Historically in TF1, I used the frozen inference graph: I created a session and then pumped images through a placeholder with great speed, because my project demanded it. It seems that TF2 has no mechanism for this? Please correct / direct me if I am wrong. Either that or I am looking in the wrong place for info.

My issue is twofold. I've ported my Python code for doing the above to C# and it works, but it needs a frozen inference graph. The old models are not overly compatible (that's another story), and I've gone back to build new ones only to find that TF2 seems to be the only way forward. That is fine up to the point where I go through the process of creating the model and saving it, but there is no frozen inference graph and no well-supported way to create one (old info refers to deprecated functions).

So what's the way forward? I can't find a way to make TF2 do what I need, and I can't save the saved model to frozen_inference_graph.pb. I don't really want to change frameworks; that would mean a whole redesign of my project's back end. I'd much prefer to stick with TensorFlow, as I'm just starting to get a handle on it. Surely there must be heaps of people who still need this functionality? Any alternatives? I can't find a way around using the frozen_inference_graph.pb.

AI-P-K commented 2 years ago

You are not wrong... I investigated this problem for about 2 weeks and tried different approaches, but unsuccessfully. I was working on a real-life project, as you probably are, and I had to make the same decision... I changed to PyTorch, even though this meant I kind of threw away 4 months of work. If you ever find another alternative, just give us a shout... I am still very curious about this problem.

diego200052 commented 2 years ago

@AI-P-K thank you for the advice I'll try with PyTorch.

IamSierraCharlie commented 2 years ago

I actually think I could be mistaken. I spent hours last night trawling the internet and found this page: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/auto_examples/plot_object_detection_saved_model.html. At first glance, I thought I'd seen it all before and that it was not what I was looking for, because it doesn't use a frozen inference graph. On second glance, and with nothing to lose, I decided to download it and fire it up in Python.

At first, it looks like it is very slow, but then I removed the model download and substituted my own model, also using my own labels and my own images local to my PC.

Once the model is loaded (yes, this is a TF2 saved model without a frozen inference graph), there is no need for a placeholder, a frozen graph, or a session. TensorFlow will still pump your images through as quickly as you can throw them at it. I've done some basic performance testing on this.

Here is the code that iterates through the images after the model is loaded (I am using CV for the image display instead of PLT):

```python
import time

import cv2
import numpy as np
import tensorflow as tf
from object_detection.utils import visualization_utils as viz_utils

# Assumes detect_fn = tf.saved_model.load(...), IMAGE_PATHS, category_index
# and load_image_into_numpy_array() are set up as in the tutorial linked above.
for image_path in IMAGE_PATHS:
    start_time = time.time()
    print('Running inference for {}... '.format(image_path), end='')
    image_np = load_image_into_numpy_array(image_path)
    # Convert to a batched tensor of shape (1, height, width, 3).
    input_tensor = tf.convert_to_tensor(image_np)
    input_tensor = input_tensor[tf.newaxis, ...]
    detections = detect_fn(input_tensor)
    # Strip the batch dimension and keep only the valid detections.
    num_detections = int(detections.pop('num_detections'))
    detections = {key: value[0, :num_detections].numpy()
                  for key, value in detections.items()}
    detections['num_detections'] = num_detections
    detections['detection_classes'] = detections['detection_classes'].astype(np.int64)
    image_np_with_detections = image_np.copy()
    viz_utils.visualize_boxes_and_labels_on_image_array(
          image_np_with_detections,
          detections['detection_boxes'],
          detections['detection_classes'],
          detections['detection_scores'],
          category_index,
          use_normalized_coordinates=True,
          max_boxes_to_draw=200,
          min_score_thresh=.30,
          agnostic_mode=False)

    # Swap channel order for OpenCV's BGR display convention.
    converted = cv2.cvtColor(image_np_with_detections, cv2.COLOR_RGB2BGR)

    cv2.imshow("image", converted)
    cv2.waitKey(1)
    cv2.destroyWindow("image")
    #plt.figure()
    #plt.imshow(image_np_with_detections)
    #plt.show()
    print('Donite')
    end_time = time.time()
    elapsed_time = end_time - start_time
    print('Done! Took {} seconds'.format(elapsed_time))
```

Here is the output (a representative sample; the full run covers roughly 90 images, all in the 0.05-0.08 second range):

```
Running inference for D:\New folder\OutOfRange_41_IH0N.JPEG... Donite Done! Took 0.06303668022155762 seconds
Running inference for D:\New folder\OutOfRange_41_YF2R.JPEG... Donite Done! Took 0.060837745666503906 seconds
Running inference for D:\New folder\OutOfRange_43_9LT0.JPEG... Donite Done! Took 0.06283187866210938 seconds
Running inference for D:\New folder\OutOfRange_44_0USK.JPEG... Donite Done! Took 0.06283164024353027 seconds
Running inference for D:\New folder\OutOfRange_44_77RU.JPEG... Donite Done! Took 0.06238198280334473 seconds
...
Running inference for D:\New folder\OutOfRange_9_WAB2.JPEG... Donite Done! Took 0.060263633728027344 seconds
Running inference for D:\New folder\OutOfRange_9_XR0W.JPEG... Donite Done! Took 0.06504368782043457 seconds
```

This runs so fast that CV doesn't have time to render the images. The point being: not having a frozen inference graph is not the bottleneck I thought it was.

```python
detect_fn = tf.saved_model.load(PATH_TO_SAVED_MODEL)
```

and then do

```python
detections = detect_fn(input_tensor)
```

The detect function is fundamentally one line of code!

Clearly this is only my use case and it is probably different from yours. I accept that working with CV has more issues than the one I am pointing out here, but I hope it might help someone else on their way to object detection with TF2. I still need to get this working in C# :-S

Im-JimmyHu commented 2 years ago

@AI-P-K any progress? I have to get the frozen .pb to port to another architecture, and I hit the same error as above. I tried training with TF1, but my GPU is an RTX 30-series, which TF1 doesn't support. In addition, I tried converting the ckpt to h5 and then the h5 to a frozen .pb, but some nodes are unsupported. More importantly, I can only use TF. Any advice will be appreciated!

AI-P-K commented 2 years ago

@JimmyHu1107 it looks like @IamSierraCharlie solved the problem above. I have left TF, as I was not able to solve the issue myself.

IamSierraCharlie commented 2 years ago

In the end, my use case did not require me to use a frozen inference graph. The biggest issue for me was that I started with TensorFlow a long time back; between putting my project down and coming back to it, TensorFlow 2.x had become much more mature, and many functions in 1.x seem to be deprecated. I trawled the internet to try and find information on how to create a frozen inference graph from the new TF 2.x model. All of the things that were suggested either resulted in errors or produced a graph that simply did not work as expected.

I thought initially that not having a frozen inference graph would have massive implications for the time it took to process my images, but it turned out that was not the case. TF2 runs as fast as, if not faster than, TF1's frozen inference graph implementation. With that in mind, that is where I am now: I no longer use the frozen .pb. Unless you have a genuine reason to continue with the frozen .pb, you should move to TF2. This thread is testament to the fact that there still are use cases for it, but I've moved on to TF2 for my needs. After this long, I cannot see the developers circling around to address this issue.

Im-JimmyHu commented 2 years ago

@JimmyHu1107 it looks like @IamSierraCharlie solved the problem above. I have left TF, as I was not able to solve the issue myself.

Sad to hear that, but thanks for your kind reply @IamSierraCharlie.