tensorflow / models

Models and examples built with TensorFlow

NotImplementedError: Cannot convert a symbolic Tensor (strided_slice:0) to a numpy array. #9706

Open aniketbote opened 3 years ago

aniketbote commented 3 years ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

1. The entire URL of the file you are using

https://github.com/tensorflow/models/blob/master/research/object_detection/model_main_tf2.py

2. Describe the bug

I am trying to train an object detection model on custom data following the linked tutorial. I tested the environment for faults using https://github.com/tensorflow/models/blob/master/research/object_detection/builders/model_builder_tf2_test.py. All tests passed. But when I start training, the model errors out. Logs:

2021-02-05 14:26:00.620416: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2021-02-05 14:26:03.557625: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2021-02-05 14:26:03.790381: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0002:00:00.0 name: Tesla M60 computeCapability: 5.2
coreClock: 1.1775GHz coreCount: 16 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 149.31GiB/s
2021-02-05 14:26:03.796912: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2021-02-05 14:26:03.805410: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2021-02-05 14:26:03.813387: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2021-02-05 14:26:03.817933: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2021-02-05 14:26:03.827291: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2021-02-05 14:26:03.834728: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2021-02-05 14:26:03.850086: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2021-02-05 14:26:03.857292: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2021-02-05 14:26:03.860948: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2021-02-05 14:26:03.875535: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x80ec5ff940 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-02-05 14:26:03.880482: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2021-02-05 14:26:03.885168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0002:00:00.0 name: Tesla M60 computeCapability: 5.2
coreClock: 1.1775GHz coreCount: 16 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 149.31GiB/s
2021-02-05 14:26:03.891972: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2021-02-05 14:26:03.895720: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2021-02-05 14:26:03.899501: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2021-02-05 14:26:03.904606: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2021-02-05 14:26:03.908422: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2021-02-05 14:26:03.912343: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2021-02-05 14:26:03.916413: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2021-02-05 14:26:03.924000: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2021-02-05 14:26:04.700603: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength
1 edge matrix:
2021-02-05 14:26:04.704509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2021-02-05 14:26:04.706737: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2021-02-05 14:26:04.726496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7048 MB memory) -> physical GPU (device: 0, name: Tesla M60, pci bus id: 0002:00:00.0, compute capability: 5.2)
2021-02-05 14:26:04.736932: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x810d9b6950 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-02-05 14:26:04.741982: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Tesla M60, Compute Capability 5.2
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0205 14:26:04.749317  8188 mirrored_strategy.py:500] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: None
I0205 14:26:04.755326  8188 config_util.py:552] Maybe overwriting train_steps: None
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0205 14:26:04.755326  8188 config_util.py:552] Maybe overwriting use_bfloat16: False
INFO:tensorflow:Reading unweighted datasets: ['E:/DS_2020_Wildlife/Multi_Class_Classification/Tensorflow/workspace/annotations/train.record']
I0205 14:26:04.923312  8188 dataset_builder.py:163] Reading unweighted datasets: ['E:/DS_2020_Wildlife/Multi_Class_Classification/Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Reading record datasets for input file: ['E:/DS_2020_Wildlife/Multi_Class_Classification/Tensorflow/workspace/annotations/train.record']
I0205 14:26:04.926311  8188 dataset_builder.py:80] Reading record datasets for input file: ['E:/DS_2020_Wildlife/Multi_Class_Classification/Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Number of filenames to read: 1
I0205 14:26:04.926311  8188 dataset_builder.py:81] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0205 14:26:04.926311  8188 dataset_builder.py:87] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py:101: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
W0205 14:26:04.928313  8188 deprecation.py:317] From E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py:101: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
WARNING:tensorflow:From E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
W0205 14:26:04.952313  8188 deprecation.py:317] From E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is
deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
Traceback (most recent call last):
  File "model_main_tf2.py", line 115, in <module>
    tf.compat.v1.app.run()
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\platform\app.py", line 40,
in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\absl\app.py", line 300, in run
    _run_main(main, args)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\absl\app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "model_main_tf2.py", line 106, in main
    model_lib_v2.train_loop(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\model_lib_v2.py", line 569,
in train_loop
    load_fine_tune_checkpoint(detection_model,
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\model_lib_v2.py", line 352,
in load_fine_tune_checkpoint
    features, labels = iter(input_dataset).next()
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 858, in __iter__
    iterators, element_spec = _create_iterators_per_worker_with_input_context(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 1401, in _create_iterators_per_worker_with_input_context
    dataset = dataset_fn(ctx)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\model_lib_v2.py", line 521,
in train_dataset_fn
    train_input = inputs.train_input(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\inputs.py", line 893, in train_input
    dataset = INPUT_BUILDER_UTIL_MAP['dataset_build'](
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py", line 251, in build
    dataset = dataset_map_fn(dataset, decoder.decode, batch_size,
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py", line 236, in dataset_map_fn
    dataset = dataset.map_with_legacy_function(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line
324, in new_func
    return func(*args, **kwargs)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 2402, in map_with_legacy_function
    ParallelMapDataset(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 4016, in __init__
    self._map_func = StructuredFunctionWrapper(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3196, in __init__
    self._function.add_to_graph(ops.get_default_graph())
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\function.py", line 544, in add_to_graph
    self._create_definition_if_needed()
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\function.py", line 376, in _create_definition_if_needed
    self._create_definition_if_needed_impl()
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\function.py", line 398, in _create_definition_if_needed_impl
    temp_graph = func_graph_from_py_func(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\function.py", line 969, in func_graph_from_py_func
    outputs = func(*func_graph.inputs)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3188, in wrapper_fn
    ret = _wrapper_helper(*args)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3156, in _wrapper_helper
    ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 265, in wrapper
    raise e.ag_error_metadata.to_exception(e)
NotImplementedError: in user code:

    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\data_decoders\tf_example_decoder.py:524 default_groundtruth_weights  *
        [tf.shape(tensor_dict[fields.InputDataFields.groundtruth_boxes])[0]],
    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\ops\array_ops.py:2967 ones  **
        output = _constant_if_small(one, shape, dtype, name)
    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\ops\array_ops.py:2662 _constant_if_small
        if np.prod(shape) < 1000:
    <__array_function__ internals>:5 prod

    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\numpy\core\fromnumeric.py:3030 prod
        return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\numpy\core\fromnumeric.py:87 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\ops.py:748 __array__
        raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy"

    NotImplementedError: Cannot convert a symbolic Tensor (strided_slice:0) to a numpy array.
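The last frames show NumPy's np.prod being handed a symbolic tensor shape inside _constant_if_small. For reference, here is a minimal sketch of my own (not from the Object Detection API) that hits the same class of error, assuming TF 2.x with numpy >= 1.20:

import numpy as np
import tensorflow as tf

@tf.function
def bad_prod(boxes):
    # tf.shape(boxes)[0] is a symbolic scalar tensor during tracing (a strided_slice op);
    # np.prod then tries to convert it to an ndarray, which graph tensors do not support.
    n = tf.shape(boxes)[0]
    return np.prod([n])

try:
    bad_prod(tf.zeros([3, 4]))
except NotImplementedError as e:
    print(e)  # expected: Cannot convert a symbolic Tensor (strided_slice:0) to a numpy array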

6. System information

td43 commented 3 years ago

Hi @aniketbote I posted this answer on Stack Overflow: https://stackoverflow.com/questions/66373169/tensorflow-2-object-detection-api-numpy-version-errors/66486051#66486051

I had this same issue:

NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

The problem was fixed by replacing np.prod with reduce_prod in this function: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/array_ops.py

def _constant_if_small(value, shape, dtype, name):
  try:
    if np.prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

Note that you need to import reduce_prod at the top of the file:

from tensorflow.math import reduce_prod

Wonderful.

In my case, this is what worked:

I changed the import from `from tensorflow.math import reduce_prod` to `import tensorflow as tf`, and inside the `_constant_if_small` function I used `tf.math.reduce_prod`, following the tf.math documentation for TF 2.5: https://www.tensorflow.org/api_docs/python/tf/math/reduce_prod?hl=ar
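For reference, here is a standalone sketch of what the locally patched helper ends up looking like (my own sketch, not the upstream TensorFlow code; inside array_ops.py itself, constant is already in scope, so only the reduce_prod line actually changes):

# Standalone sketch of the locally patched helper (not upstream TensorFlow code).
# tf.math.reduce_prod accepts a symbolic shape; when the comparison below cannot
# be evaluated as a Python bool (graph mode), the except branch fires and the
# caller falls back to building the op without constant folding.
import tensorflow as tf

def _constant_if_small(value, shape, dtype, name):
  try:
    if tf.math.reduce_prod(shape) < 1000:
      return tf.constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, a list with Tensor elements, etc.
    # In TF 2.x the graph-mode "Tensor as a Python bool" error subclasses
    # TypeError, so it is caught here as well.
    pass
  return None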

MeghanshBansal commented 3 years ago

I am having the same error. I am using the latest version of Anaconda with TensorFlow 2.3.0.

The program was working with a plain pip installation of TensorFlow; it is not working with Anaconda.

td43 commented 3 years ago

@MeghanshBansal try replacing the lines in C:\Users\USERNAME\anaconda3\Lib\site-packages\tensorflow\python\ops\array_ops.py
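If you are not sure which copy of the file your environment actually loads (conda and pip installs can differ), a quick check of my own, not part of the fix itself, prints the path to patch:

# Print the array_ops.py module that the active interpreter really imports,
# so the edit goes into the right environment.
from tensorflow.python.ops import array_ops
print(array_ops.__file__)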

Matthew1309 commented 3 years ago

Idk if this will help anyone, but I also got the NotImplementedError: Cannot convert a symbolic Tensor (strided_slice:0) to a numpy array. error. I don't know if all the previous error messages were the same, and I am kicking myself for not saving them, but I was very baffled by the problem.

I run my Jupyter notebook in a conda environment I have called tensorflow. Here is the .yml file I built it from:

name: tensorflow

channels:
    - conda-forge
    - anaconda
dependencies:
    - python=3.8
    - pip>=19.0
    - ipykernel
    - jupyter
    - jupyterlab
    - scikit-learn
    - scipy
    - pandas
    - pandas-datareader
    - matplotlib
    - pillow
    - tqdm
    - requests
    - h5py
    - pyyaml
    - flask
    - boto3
    - pip:
        - tensorflow==2.4
        - bayesian-optimization
        - gym
        - kaggle

It ran perfectly fine a few days ago. I took a break, tried to run the exact same code today, and got the error above! I changed nothing, and it was very confusing. I just restarted my computer and suddenly the issue is gone. Does anyone have an explanation? I'm not sure how to recreate it, but here is what I had:

from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.utils import get_file
from tensorflow.python.ops.math_ops import reduce_prod
import numpy as np
import pandas as pd
import random
import sys
import io
import requests
import re
print('Build model')
model = Sequential()
model.add(LSTM(128, input_shape=(14, 1)))
model.add( Dense(9, activation='softmax') )

optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)

Edit: I had read about and tried the reduce_prod import, which wasn't working, and forgot it was still in my list of imports. After commenting it out, the code still runs without that error.

yangleir commented 3 years ago

After editing array_ops.py as mentioned above, I still get errors:

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
/tmp/ipykernel_6878/1920925617.py in <module>
     34     # fit model
     35     es = EarlyStopping(monitor='val_loss', mode='min', verbose=1,patience=pat)
---> 36     history=model.fit(X, y, batch_size=batch_size, epochs=n_epochs, verbose=1, shuffle=False, validation_split=val_split, callbacks=[es])
     37 
     38     model.save(model_name)

~/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1156                 _r=1):
   1157               callbacks.on_train_batch_begin(step)
-> 1158               tmp_logs = self.train_function(iterator)
   1159               if data_handler.should_sync:
   1160                 context.async_wait()

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    887 
    888       with OptionalXlaContext(self._jit_compile):
--> 889         result = self._call(*args, **kwds)
    890 
    891       new_tracing_count = self.experimental_get_tracing_count()

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    931       # This is the first call of __call__, so we have to initialize.
    932       initializers = []
--> 933       self._initialize(args, kwds, add_initializers_to=initializers)
    934     finally:
    935       # At this point we know that the initialization is complete (or less

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    761     self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
    762     self._concrete_stateful_fn = (
--> 763         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
    764             *args, **kwds))
    765 

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   3048       args, kwargs = None, None
   3049     with self._lock:
-> 3050       graph_function, _ = self._maybe_define_function(args, kwargs)
   3051     return graph_function
   3052 

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   3442 
   3443           self._function_cache.missed.add(call_context_key)
-> 3444           graph_function = self._create_graph_function(args, kwargs)
   3445           self._function_cache.primary[cache_key] = graph_function
   3446 

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   3277     arg_names = base_arg_names + missing_arg_names
   3278     graph_function = ConcreteFunction(
-> 3279         func_graph_module.func_graph_from_py_func(
   3280             self._name,
   3281             self._python_function,

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    997         _, original_func = tf_decorator.unwrap(python_func)
    998 
--> 999       func_outputs = python_func(*func_args, **func_kwargs)
   1000 
   1001       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    670         # the function a weak reference to itself to avoid a reference cycle.
    671         with OptionalXlaContext(compile_with_xla):
--> 672           out = weak_wrapped_fn().__wrapped__(*args, **kwds)
    673         return out
    674 

~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    984           except Exception as e:  # pylint:disable=broad-except
    985             if hasattr(e, "ag_error_metadata"):
--> 986               raise e.ag_error_metadata.to_exception(e)
    987             else:
    988               raise

NotImplementedError: in user code:

    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/training.py:830 train_function  *
        return step_function(self, iterator)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/training.py:813 run_step  *
        outputs = model.train_step(data)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/training.py:770 train_step  *
        y_pred = self(x, training=True)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/base_layer.py:1006 __call__  *
        outputs = call_fn(inputs, *args, **kwargs)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/sequential.py:389 call  *
        outputs = layer(inputs, **kwargs)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:660 __call__  *
        return super(RNN, self).__call__(inputs, **kwargs)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/base_layer.py:1006 __call__  *
        outputs = call_fn(inputs, *args, **kwargs)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent_v2.py:1139 call  *
        inputs, initial_state, _ = self._process_inputs(inputs, initial_state, None)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:860 _process_inputs  *
        initial_state = self.get_initial_state(inputs)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:642 get_initial_state  *
        init_state = get_initial_state_fn(
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:2509 get_initial_state  *
        self, inputs, batch_size, dtype))
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:2990 _generate_zero_filled_state_for_cell  *
        return _generate_zero_filled_state(batch_size, cell.state_size, dtype)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:3003 create_zeros  *
        return tf.zeros(init_state_size, dtype=dtype)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:206 wrapper  **
        return target(*args, **kwargs)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2911 wrapped
        def wrapped(*args, **kwargs):
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2960 zeros
        # op to prevent serialized GraphDefs from becoming too large.
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2896 _constant_if_small
        try:
    <__array_function__ internals>:5 prod

    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3051 prod
        return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/numpy/core/fromnumeric.py:86 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    /home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:867 __array__
        raise NotImplementedError(

    NotImplementedError: Cannot convert a symbolic Tensor (sequential_5/lstm_10/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

I am using:

Tensorflow 2.5.0
numpy 1.19.5
python 3.8.12
td43 commented 3 years ago

@yangleir Update your numpy version to the latest one.
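To see what the notebook kernel is actually picking up (just a quick check, nothing specific to this fix):

# Print the versions the running kernel really imports; conda environments and
# Jupyter kernels can easily get out of sync.
import numpy as np
import tensorflow as tf
print("tensorflow:", tf.__version__)
print("numpy:", np.__version__)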

Kimxbzheng commented 2 years ago

@glemarivero

I was able to fix the issue as you described, but by importing reduce_prod as:

from tensorflow.python.ops.math_ops import reduce_prod
...

This seems like a bug in TensorFlow.

For the CPU version, I found that from tensorflow.math import reduce_prod works. For the GPU version, from tensorflow.python.ops.math_ops import reduce_prod works.
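A small sketch of how one might write the import so either install works (my own workaround; note that the tensorflow.python.* path is internal and may move between releases):

# Try the public tf.math import first, then fall back to the internal module
# path that the GPU install needed in my case. Internal paths are not a stable API.
try:
    from tensorflow.math import reduce_prod
except ImportError:
    from tensorflow.python.ops.math_ops import reduce_prod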

MarcelRobitaille commented 2 years ago

Is this change proposed by @athenasaurav expected to be released anytime soon? Modifying a file from the package after installing does not seem like a good solution to me. Neither does downgrading numpy, especially if the rest of my code uses features only available in numpy 1.20.

john-maidbot commented 2 years ago

Suddenly experiencing this problem in Google Colab with the Object Detection API and TF1, with code that worked perfectly fine last week :thinking: I tried the suggested solution of downgrading to an older version of numpy, but it seems that something in the Object Detection API is built against numpy >= 1.20, because I get this error: RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd

mukammilbasha commented 2 years ago

NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

from tensorflow.python.ops.math_ops import reduce_prod

The above import works fine for me.

Location: -\Lib\site-packages\tensorflow_core\python\ops\array_ops.py


MdJafirAshraf commented 2 years ago

[Solved]

I had the same error when training. I also tried downgrading the NumPy version, but that did not fix it. Finally, I solved the issue. Try the commands below:

!pip install numpy==1.17.4
!pip install pycocotools==2.0.0

ahmedbr commented 2 years ago

This is the result I get: [screenshot]

I have the same error as you are getting. I'm using numpy 1.20.0 with TensorFlow 2.4.1. I'm convinced that numpy is the problem, but I'm honestly too new to training models and using TF. This is the tutorial I've been following: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html Have you had any luck solving this issue?

Try downgrading numpy to 1.19.5:

pip install numpy==1.19.5

After that, check with conda list which numpy version you have installed.

This worked for me too.

mofagoulaopi commented 2 years ago

I had the same problem when using Colab and fixed it by reinstalling numpy==1.19.0. First I tried to switch to Python 3.6 as dademiller360 said, but somehow I failed because I don't know how to change the Python environment of TF. Then I simply ran the following command in Colab: !python -m pip install -U numpy==1.19.0 and it worked!

firststepdev commented 2 years ago

This worked for me:

OS: Windows 10
Python: 3.7.4
Virtualization: virtualenv

call python -m pip install tensorflow==2.4.1 --force-reinstall
call python -m pip install numpy==1.19.5 --force-reinstall
call python -m pip install h5py==2.10.0 --force-reinstall

lghasemzadeh commented 1 year ago

Simply downgrade tensorflow or tensorflow-gpu to an older version: $ pip install tensorflow-gpu==2.5.0

mattyred commented 1 year ago

This works fine for me:

OS: macOS Monterey
Python: 3.8.16 (with Anaconda)
tensorflow: 2.11.0
numpy: 1.22.4

These were the versions of the packages on Google Colab where things were working (in particular, I found the bug when working with the GPflow library).

JayKumarr commented 1 year ago

Downgrading numpy worked for me: pip install numpy==1.19.5