tensorflow / models

Models and examples built with TensorFlow

Object Detection with AttributeError: module 'tensorflow' has no attribute 'GraphDef' in TF 2.x #7703

Open janardana-raj1901 opened 4 years ago

janardana-raj1901 commented 4 years ago

[screenshot attached]

Please do rectify it...

saberkun commented 4 years ago

Hi, what TF version do you use?

ImanTech commented 4 years ago

I have the same issue! TF version 2.

lixiang-robot commented 4 years ago

Same here! TF Version 2.0.0

saberkun commented 4 years ago

Hi all, I think the object detection notebooks and models have not yet been verified with TF 2.0. For example, sessions are implicit in TF 2. Could you please try tf.disable_v2_behavior() together with import tensorflow.compat.v1 as tf?
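
For reference, a minimal sketch of that compat-mode pattern (assuming TF 2.x is installed; the .pb path below is a placeholder, not part of the original comment):

import tensorflow.compat.v1 as tf  # all tf.* symbols resolve via the v1 compatibility namespace

tf.disable_v2_behavior()  # restore TF1 graph/session semantics

od_graph_def = tf.GraphDef()  # works again, since tf is compat.v1
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as fid:  # placeholder path
    od_graph_def.ParseFromString(fid.read())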

milanlanlan commented 4 years ago

python -m pip install tensorflow==1.14 will help. Of course, if you want to use TF 2, you need to update the code.

saberkun commented 4 years ago

Yes, some symbols may have disappeared in TF 2's default behavior. Using 1.15 should be fine. I am just curious whether using TF 2.x in compat mode (tf.disable_v2_behavior() + import tensorflow.compat.v1 as tf) still works. In that case, you don't need to install two versions.

hdmthao commented 4 years ago

Yes, some symbols may have disappeared in TF 2's default behavior. Using 1.15 should be fine. I am just curious whether using TF 2.x in compat mode (tf.disable_v2_behavior() + import tensorflow.compat.v1 as tf) still works. In that case, you don't need to install two versions.

I tested with import tensorflow.compat.v1 as tf, and it works.

saberkun commented 4 years ago

Thanks! I updated the title of this issue so that everyone notices how to handle the compatibility issue with TF 2.x. In theory, compat mode will be fine as long as there is no tf.contrib usage; TF 2 and TF 1.x share the same underlying TF runtime.

kaeskaeshaan commented 4 years ago

I got the same error.

ericjiang18 commented 4 years ago

same error :(

ghost commented 4 years ago

od_graph_def = tf.compat.v1.GraphDef() works for me, but tf.compat.v1.io.gfile.GFile() does not! gfile was moved to tf.io in TensorFlow 2.x, but how do I import it?

hdmthao commented 4 years ago

od_graph_def = tf.compat.v1.GraphDef() works for me, but tf.compat.v1.io.gfile.GFile() does not! gfile was moved to tf.io in TensorFlow 2.x, but how do I import it?

You just need tf.io.gfile.GFile().
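
For reference, a minimal sketch of loading a frozen graph with tf.io.gfile in TF 2.x (the PATH_TO_FROZEN_GRAPH value below is a placeholder, not taken from the original comments):

import tensorflow as tf

PATH_TO_FROZEN_GRAPH = 'frozen_inference_graph.pb'  # placeholder path to your own .pb file

od_graph_def = tf.compat.v1.GraphDef()
# tf.io.gfile.GFile is the TF 2.x location of the old tf.gfile.GFile
with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    od_graph_def.ParseFromString(fid.read())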

ghost commented 4 years ago

PATH_TO_FROZEN_GRAPH = 'D:/models/research/object_detection/' + MODEL_NAME + '/frozen_inference_graph.pb'
with tf.io.gfile.GFile()(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
TypeError: __init__() missing 1 required positional argument: 'name'

Edit: the solution is with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: (pass the arguments directly to GFile instead of calling it with empty parentheses first).

Asanka25 commented 4 years ago

First check your TensorFlow version by running the code below:

import tensorflow as tf
print(tf.__version__)

If it's 2 or above, then pip install tensorflow==1.14.0 will let the code run successfully.

abhi07sh commented 4 years ago

First check your TensorFlow version by running the code below:

import tensorflow as tf
print(tf.__version__)

If it's 2 or above, then pip install tensorflow==1.14.0 will let the code run successfully.

After installing TF 1.14.0 I am getting another error (FutureWarning), for which I need to install tf1.20.

bigswede74 commented 4 years ago

I have downgraded to tensorflow-gpu==1.14.0 and I still get this error when running the object_detection scripts:

from object_detection.utils import label_map_util
ModuleNotFoundError: No module named 'object_detection'

kbaxx commented 4 years ago

I have downgraded to tensorflow-gpu==1.14.0 and I still get this error when running the object_detection scripts:

from object_detection.utils import label_map_util
ModuleNotFoundError: No module named 'object_detection'

If you are using a TensorFlow version < 2, you have to rewrite the utils imports as: from utils import label_map_util

bigswede74 commented 4 years ago

Once I added the PYTHONPATH environment variable to my system and included all of the local tensorflow/models repo paths in it, the code was able to resolve the references. Evidently Python only looks up these modules via the PYTHONPATH env variable.
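
As an illustration of the same idea from inside a script (the clone location below is hypothetical; adjust it to wherever you checked out tensorflow/models), extending sys.path before the object_detection imports has the same effect as setting PYTHONPATH for that process:

import os
import sys

# Hypothetical clone location of tensorflow/models; adjust to your machine.
MODELS_RESEARCH = os.path.expanduser('~/models/research')

# Equivalent to adding these directories to PYTHONPATH for this process only.
sys.path.append(MODELS_RESEARCH)
sys.path.append(os.path.join(MODELS_RESEARCH, 'slim'))

from object_detection.utils import label_map_util  # should now resolve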

jenniferchiang commented 4 years ago

Hi all, I think the object detection notebooks and models have not yet been verified with TF 2.0. For example, sessions are implicit in TF 2. Could you please try tf.disable_v2_behavior() together with import tensorflow.compat.v1 as tf?

Thanks a lot! It works!

afvoskeuil commented 4 years ago

Hi, I'm having the same issue and nothing seems to work. The error that I get is:

2020-01-20 02:45:45.916084: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-01-20 02:45:45.916282: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "C:/Users/frevo/PycharmProjects/CameraTracking/models/object_detection/object_detection_tutorial_CONVERTED.py", line 75, in <module>
    od_graph_def = tf.GraphDef()
AttributeError: module 'tensorflow' has no attribute 'GraphDef'

This is my code:

import numpy as np

import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile

from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

import cv2
cap = cv2.VideoCapture(1)

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")

# ## Object detection imports
# Here are the imports from the object detection module.

# In[3]:

from utils import label_map_util

from utils import visualization_utils as vis_util

# # Model preparation

# ## Variables
#
# Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file.
#
# By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.

# In[4]:

# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

NUM_CLASSES = 90

# ## Download Model

# In[5]:

opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
  file_name = os.path.basename(file.name)
  if 'frozen_inference_graph.pb' in file_name:
    tar_file.extract(file, os.getcwd())

# ## Load a (frozen) Tensorflow model into memory.

# In[6]:

detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')

# ## Loading label map
# Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`.  Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine

# In[7]:

label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

# ## Helper code

# In[8]:

def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)

# # Detection

# In[9]:

# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'dollar.jpg'.format(i)) for i in range(1, 3) ]

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)

# In[10]:

with detection_graph.as_default():
  with tf.Session(graph=detection_graph) as sess:
    while True:
      ret, image_np = cap.read()
      # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
      image_np_expanded = np.expand_dims(image_np, axis=0)
      image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
      # Each box represents a part of the image where a particular object was detected.
      boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
      # Each score represent how level of confidence for each of the objects.
      # Score is shown on the result image, together with the class label.
      scores = detection_graph.get_tensor_by_name('detection_scores:0')
      classes = detection_graph.get_tensor_by_name('detection_classes:0')
      num_detections = detection_graph.get_tensor_by_name('num_detections:0')
      # Actual detection.
      (boxes, scores, classes, num_detections) = sess.run(
          [boxes, scores, classes, num_detections],
          feed_dict={image_tensor: image_np_expanded})
      # Visualization of the results of a detection.
      vis_util.visualize_boxes_and_labels_on_image_array(
          image_np,
          np.squeeze(boxes),
          np.squeeze(classes).astype(np.int32),
          np.squeeze(scores),
          category_index,
          use_normalized_coordinates=True,
          line_thickness=8)

      cv2.imshow('object detection', cv2.resize(image_np, (800,600)))
      if cv2.waitKey(25) & 0xFF == ord('q'):
        cv2.destroyAllWindows()
        break

Can someone help me? I've been struggling with this for almost 6 hours now :')

ayushmankumar7 commented 4 years ago

The Colab Notebook is working fine. Is this issue supposed to be closed?

AKIvan commented 4 years ago

Any solutions, other than downgrading versions, for how to rewrite the "# ## Load a (frozen) Tensorflow model into memory." section for TF 2?

ebron01 commented 4 years ago

Updating these two lines:

od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:

to these solved the issue for me:

od_graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:

The problem comes from TF1 vs. TF2 API differences; with these changes there is no need to downgrade TF.
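
Putting those two renames in context, here is a minimal sketch of the corrected graph-loading block under TF 2.x (PATH_TO_CKPT below is a placeholder for the tutorial's frozen-graph path; tf.compat.v1.import_graph_def is used since the top-level tf.import_graph_def is not guaranteed to exist in TF 2):

import tensorflow as tf

PATH_TO_CKPT = 'frozen_inference_graph.pb'  # placeholder .pb path

detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.compat.v1.GraphDef()  # TF 2.x spelling of tf.GraphDef
    with tf.io.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:  # TF 2.x spelling of tf.gfile.GFile
        od_graph_def.ParseFromString(fid.read())
        tf.compat.v1.import_graph_def(od_graph_def, name='')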

Naeemmariam7 commented 4 years ago

Updating these two lines:

od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:

to these solved the issue for me:

od_graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:

The problem comes from TF1 vs. TF2 API differences; with these changes there is no need to downgrade TF.

I tried this, but now I get the error:

AttributeError: module 'tensorflow' has no attribute 'gfile'

tanviagwl98 commented 4 years ago

The Colab Notebook is working fine. Is this issue supposed to be closed?

How do I upload files to Google Colab for the test and train data?

tanviagwl98 commented 4 years ago

od_graph_def = tf.compat.v1.GraphDef() works for me, but tf.compat.v1.io.gfile.GFile() does not! gfile was moved to tf.io in TensorFlow 2.x, but how do I import it?

You just need tf.io.gfile.GFile().

This is not working for me, can anybody help?

redsigma commented 4 years ago

After multiple tries, I got it working in this order:

  1. Install object_detection by running pip3 install . in the models/research folder
  2. In /utils/label_map_util.py, change from object_detection.protos import string_int_label_map_pb2 to from protos import string_int_label_map_pb2
  3. In /utils/label_map_util.py, change with tf.gfile.GFile(path, 'r') as fid: to with tf.io.gfile.GFile(path, 'r') as fid:
  4. Run jupyter notebook object_detection_tutorial.ipynb

Now inside Jupyter:

  1. Change %matplotlib inline to %matplotlib notebook so it shows the image at the end. For me plt.show() didn't work, but this method did
  2. Change od_graph_def = tf.GraphDef() and with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: to od_graph_def = tf.compat.v1.GraphDef() and with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
  3. Change tf.Session() to tf.compat.v1.Session()
  4. Change anything that contains tf.get_default_graph() to tf.compat.v1.get_default_graph()
  5. It should work now (a small sketch of renames 3 and 4 follows this list)
  6. If the image still doesn't show after the last step, run the second step with matplotlib again and then re-run the last step, and it should appear. Repeat this if it still doesn't.
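
As a minimal sketch of renames 3 and 4 above (assuming TF 2.x; the toy graph below is only an illustration and stands in for the tutorial's detection_graph):

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=(), name='x')  # tf.placeholder -> tf.compat.v1.placeholder
    y = tf.multiply(x, 2.0, name='y')

    # tf.Session() -> tf.compat.v1.Session(); tf.get_default_graph() -> tf.compat.v1.get_default_graph()
    with tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph()) as sess:
        print(sess.run(y, feed_dict={x: 21.0}))  # prints 42.0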

Tested on docker image rocm/tensorflow:latest, which uses:
tensorboard 2.1.1
tensorflow-estimator 2.1.0
tensorflow-rocm 2.1.0
protobuf 3.11.3
Pillow 7.1.1
pip 20.0.2
jupyter 1.0.0
jupyter-client 6.1.0
jupyter-console 6.1.0
jupyter-core 4.6.3
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
matplotlib 3.2.1
kiwisolver 1.2.0
numpy 1.18.2
...

Also, my CPU doesn't have AVX support, but it does have SSE 4.1 and SSE 4.2.

Output of dpkg -l | grep rocm:
ii comgr - 1.6.0.121-rocm-rel-3.1-44-cbb02f9
ii hip-base - 3.1.20086.4516-rocm-rel-3.1-44-8ef00e2d
ii hip-doc - 3.1.20086.4516-rocm-rel-3.1-44-8ef00e2d
ii hip-hcc - 3.1.20086.4516-rocm-rel-3.1-44-8ef00e2d
ii hip-samples - 3.1.20086.4516-rocm-rel-3.1-44-8ef00e2d
ii hipblas - 0.20.0.307-rocm-rel-3.1-44-ff35c32
ii hipcub - 2.9.0.92-rocm-rel-3.1-44-40e1d66
ii hsa-ext-rocr-dev - 1.1.30100.0-rocm-rel-3.1-44-ecafeba1
ii hsa-rocr-dev - 1.1.30100.0-rocm-rel-3.1-44-ecafeba1
ii miopen-hip - 2.2.1.7633-rocm-rel-3.1-44-9218683
ii miopengemm - 1.1.6.647-rocm-rel-3.1-44-b51a125
ii rccl - 2.10.0-254-g31648ec-rocm-rel-3.1-44
ii rocblas - 2.14.1.1861-rocm-rel-3.1-44-cc49425
ii rocfft - 0.9.10.783-rocm-rel-3.1-44-b7f9ebe
ii rocm-clang-ocl - 0.5.0.48-rocm-rel-3.1-44-fa039e7
ii rocm-cmake - 0.3.0.141-rocm-rel-3.1-44-1b9e698
ii rocm-debug-agent - 1.0.0
ii rocm-dev - 3.1.44
ii rocm-device-libs - 1.0.0.563-rocm-rel-3.1-44-8f441a8
ii rocm-opencl - 2.0.0-rocm-rel-3.1-44-8f28d95ad
ii rocm-opencl-dev - 2.0.0-rocm-rel-3.1-44-8f28d95ad
ii rocm-smi - 1.0.0-194-rocm-rel-3.1-44-g840011e
ii rocm-smi-lib64 - 2.3.0.3.rocm-rel-3.1-44-a246aac
ii rocm-utils - 3.1.44
ii rocminfo - 1.30100.0
ii rocprim - 2.9.0.952-rocm-rel-3.1-44-5fa0c79
ii rocrand - 2.10.0.657-rocm-rel-3.1-44-448c673
ii rocsparse - 1.8.4.726-rocm-rel-3.1-44-eb854f0

aniiketdongare07 commented 4 years ago

Hi all, I think the object detection notebooks and models have not yet been verified with TF 2.0. For example, sessions are implicit in TF 2. Could you please try tf.disable_v2_behavior() together with import tensorflow.compat.v1 as tf?

I'm trying to understand what you've said here. The line that errors for me is:

output_graph_def = tf.GraphDef()

Will you explain to me how and where tf.disable_v2_behavior() + import tensorflow.compat.v1 as tf should go?

saberkun commented 4 years ago

Use import tensorflow.compat.v1 as tf for all symbols.

Call tf.compat.v1.disable_v2_behavior() inside main: https://www.tensorflow.org/api_docs/python/tf/compat/v1/disable_v2_behavior
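
A minimal sketch of that placement (the main() body below is hypothetical; only the import line and the position of disable_v2_behavior() reflect the advice above):

import tensorflow.compat.v1 as tf  # all symbols come from the v1 compatibility namespace


def main():
    tf.disable_v2_behavior()  # call once, at the top of main, before building any graph

    output_graph_def = tf.GraphDef()  # now resolves, since tf is compat.v1
    # ... build/load the graph as in the TF1 tutorial ...


if __name__ == '__main__':
    main()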

aniiketdongare07 commented 4 years ago

@saberkun Thank you, it works.

abburimadhukar commented 4 years ago

Updating these two lines:

od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:

to these solved the issue for me:

od_graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:

The problem comes from TF1 vs. TF2 API differences; with these changes there is no need to downgrade TF.

I tried this, but now I get the error:

AttributeError: module 'tensorflow' has no attribute 'gfile'

I am getting the same error. What do I have to do?

mehmood14 commented 4 years ago

Updating these two lines:

od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:

to these solved the issue for me:

od_graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:

The problem comes from TF1 vs. TF2 API differences; with these changes there is no need to downgrade TF.

I tried it, but I have the same issue.

kaizen04 commented 4 years ago

Hey, can anyone tell me the final solution for this?

shashwat1225 commented 4 years ago

PATH_TO_FROZEN_GRAPH = 'D:/models/research/object_detection/' + MODEL_NAME + '/frozen_inference_graph.pb'
with tf.io.gfile.GFile()(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
TypeError: __init__() missing 1 required positional argument: 'name'

Edit: the solution is with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid: (pass the arguments directly to GFile instead of calling it with empty parentheses first).

Still getting this issue. Can someone please help out?

goms12 commented 4 years ago

[screenshot attached] Can anyone help me please? :( I can't understand this error.

mtreddy commented 4 years ago

(quoting redsigma's step-by-step instructions above)

Thanks, this works.

Murkor commented 4 years ago

(quoting redsigma's step-by-step instructions above)

That works properly, thank you.

Ravi-Nayak commented 4 years ago

First check your TensorFlow version by running the code below:

import tensorflow as tf
print(tf.__version__)

If it's 2 or above, then pip install tensorflow==1.14.0 will let the code run successfully.

I'm still getting the error after downgrading my TensorFlow from 1.5 to 1.4. Can anyone say what is wrong with the code in 1.5?

jitesh321 commented 3 years ago

I am using tensorflow==1.15 and I am still getting the missing GraphDef() error. Can someone help, please?

perymerdeka commented 3 years ago

I got the same error:

AttributeError: module 'tensorflow' has no attribute 'GraphDef'

Where can I import this from? I am using the latest TensorFlow version.

AditaSukmaW commented 3 years ago

(quoting afvoskeuil's full comment and code above)

Hi, your problem and mine are the same; how did you solve it?

aniiketdongare07 commented 3 years ago

Use import tensorflow.compat.v1 as tf for all symbols.

Call tf.compat.v1.disable_v2_behavior() inside main.

Try this? Aniket Dongare
