aniketbote opened this issue 3 years ago (status: Open)
Hi @aniketbote I posted this answer in Stack Overflow: https://stackoverflow.com/questions/66373169/tensorflow-2-object-detection-api-numpy-version-errors/66486051#66486051
I had this same issue:
NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
The problem was fixed by changing np.prod for reduce_prod in this function in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/array_ops.py:

def _constant_if_small(value, shape, dtype, name):
  try:
    if np.prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

Note that you need to import reduce_prod at the top of the file:
from tensorflow.math import reduce_prod
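For reference, a minimal sketch of what the patched helper looks like after that change, assuming the import is added at the top of array_ops.py (constant is already in scope in that module):

# Patched _constant_if_small in tensorflow/python/ops/array_ops.py (sketch).
# Only the added import and the former np.prod call change.
from tensorflow.math import reduce_prod

def _constant_if_small(value, shape, dtype, name):
  try:
    if reduce_prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None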
Wonderful.
In my case, it worked as follows: I changed the import from tensorflow.math import reduce_prod to import tensorflow as tf, and in the _constant_if_small function I used tf.math.reduce_prod, per the tf.math documentation for TF 2.5: https://www.tensorflow.org/api_docs/python/tf/math/reduce_prod?hl=ar
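That variant amounts to the same patched function, just with the fully-qualified name; a sketch of the edited helper under that approach:

# Same patch as above, using the top-level tensorflow import instead.
# `constant` is already defined in array_ops.py.
import tensorflow as tf

def _constant_if_small(value, shape, dtype, name):
  try:
    if tf.math.reduce_prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None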
I am having the same error. I am also using the latest version of Anaconda, with TensorFlow 2.3.0.
The program was working with a regular pip installation of TensorFlow; it is not working with Anaconda.
@MeghanshBansal try replacing those lines in C:\Users\USERNAME\anaconda3\Lib\site-packages\tensorflow\python\ops\array_ops.py
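If you are unsure which copy of array_ops.py your environment actually loads (for example pip vs. Anaconda installs), a quick diagnostic, not part of the fix itself, is to print the module's file path:

# Print the location of the array_ops.py that the active environment imports,
# so you know which file to patch.
from tensorflow.python.ops import array_ops
print(array_ops.__file__)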
I don't know if this will help anyone, but I also got the NotImplementedError: Cannot convert a symbolic Tensor (strided_slice:0) to a numpy array.
error. I don't know if all the previous error messages were the same, and I am kicking myself for not saving them, but I was very baffled by the problem.
I run my Jupyter notebook in a conda environment I call tensorflow. Here is the .yml file I built it from:
name: tensorflow
channels:
- conda-forge
- anaconda
dependencies:
- python=3.8
- pip>=19.0
- ipykernel
- jupyter
- jupyterlab
- scikit-learn
- scipy
- pandas
- pandas-datareader
- matplotlib
- pillow
- tqdm
- requests
- h5py
- pyyaml
- flask
- boto3
- pip:
- tensorflow==2.4
- bayesian-optimization
- gym
- kaggle
It ran perfectly fine a few days ago. I took a break, tried to run the exact same code today, and got the error above! I changed nothing, and it was very confusing. I just restarted my computer and suddenly the issue is gone. Does anyone have an explanation? I'm not sure how to recreate it, but here is what I had:
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.utils import get_file
from tensorflow.python.ops.math_ops import reduce_prod
import numpy as np
import pandas as pd
import random
import sys
import io
import requests
import re
print('Build model')
model = Sequential()
model.add(LSTM(128, input_shape=(14, 1)))
model.add( Dense(9, activation='softmax') )
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
Edit: while reading this thread I had tried the reduce_prod import, which wasn't working, and forgot it was still in my list of imports. With it commented out, the code still runs without that error.
After editing array_ops.py as mentioned above, I still get errors:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
/tmp/ipykernel_6878/1920925617.py in <module>
34 # fit model
35 es = EarlyStopping(monitor='val_loss', mode='min', verbose=1,patience=pat)
---> 36 history=model.fit(X, y, batch_size=batch_size, epochs=n_epochs, verbose=1, shuffle=False, validation_split=val_split, callbacks=[es])
37
38 model.save(model_name)
~/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1156 _r=1):
1157 callbacks.on_train_batch_begin(step)
-> 1158 tmp_logs = self.train_function(iterator)
1159 if data_handler.should_sync:
1160 context.async_wait()
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
887
888 with OptionalXlaContext(self._jit_compile):
--> 889 result = self._call(*args, **kwds)
890
891 new_tracing_count = self.experimental_get_tracing_count()
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
931 # This is the first call of __call__, so we have to initialize.
932 initializers = []
--> 933 self._initialize(args, kwds, add_initializers_to=initializers)
934 finally:
935 # At this point we know that the initialization is complete (or less
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
761 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
762 self._concrete_stateful_fn = (
--> 763 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
764 *args, **kwds))
765
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
3048 args, kwargs = None, None
3049 with self._lock:
-> 3050 graph_function, _ = self._maybe_define_function(args, kwargs)
3051 return graph_function
3052
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
3442
3443 self._function_cache.missed.add(call_context_key)
-> 3444 graph_function = self._create_graph_function(args, kwargs)
3445 self._function_cache.primary[cache_key] = graph_function
3446
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3277 arg_names = base_arg_names + missing_arg_names
3278 graph_function = ConcreteFunction(
-> 3279 func_graph_module.func_graph_from_py_func(
3280 self._name,
3281 self._python_function,
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
997 _, original_func = tf_decorator.unwrap(python_func)
998
--> 999 func_outputs = python_func(*func_args, **func_kwargs)
1000
1001 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
670 # the function a weak reference to itself to avoid a reference cycle.
671 with OptionalXlaContext(compile_with_xla):
--> 672 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
673 return out
674
~/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
984 except Exception as e: # pylint:disable=broad-except
985 if hasattr(e, "ag_error_metadata"):
--> 986 raise e.ag_error_metadata.to_exception(e)
987 else:
988 raise
NotImplementedError: in user code:
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/training.py:830 train_function *
return step_function(self, iterator)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/training.py:813 run_step *
outputs = model.train_step(data)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/training.py:770 train_step *
y_pred = self(x, training=True)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/base_layer.py:1006 __call__ *
outputs = call_fn(inputs, *args, **kwargs)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/sequential.py:389 call *
outputs = layer(inputs, **kwargs)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:660 __call__ *
return super(RNN, self).__call__(inputs, **kwargs)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/engine/base_layer.py:1006 __call__ *
outputs = call_fn(inputs, *args, **kwargs)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent_v2.py:1139 call *
inputs, initial_state, _ = self._process_inputs(inputs, initial_state, None)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:860 _process_inputs *
initial_state = self.get_initial_state(inputs)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:642 get_initial_state *
init_state = get_initial_state_fn(
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:2509 get_initial_state *
self, inputs, batch_size, dtype))
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:2990 _generate_zero_filled_state_for_cell *
return _generate_zero_filled_state(batch_size, cell.state_size, dtype)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/keras/layers/recurrent.py:3003 create_zeros *
return tf.zeros(init_state_size, dtype=dtype)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py:206 wrapper **
return target(*args, **kwargs)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2911 wrapped
def wrapped(*args, **kwargs):
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2960 zeros
# op to prevent serialized GraphDefs from becoming too large.
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2896 _constant_if_small
try:
<__array_function__ internals>:5 prod
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3051 prod
return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/numpy/core/fromnumeric.py:86 _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
/home/yl/miniconda3/envs/tf0/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:867 __array__
raise NotImplementedError(
NotImplementedError: Cannot convert a symbolic Tensor (sequential_5/lstm_10/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
I use:
TensorFlow 2.5.0
numpy 1.19.5
Python 3.8.12
@yangleir Update your numpy version to the latest one.
@glemarivero
I was able to fix the issue like you described, but by importing reduce_prod as:
from tensorflow.python.ops.math_ops import reduce_prod ...
Seems like it is a bug in tensorflow.
For the CPU version, I found that from tensorflow.math import reduce_prod works.
For the GPU version, I found that from tensorflow.python.ops.math_ops import reduce_prod works.
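If you want the patch to hold for either build, one hedged option is to try the public path and fall back to the internal one; this is just a convenience wrapper around the two imports reported above, not an officially recommended approach:

# Fall back to the internal module path if the public one is not importable
# (both names resolve to the same reduction op).
try:
    from tensorflow.math import reduce_prod
except ImportError:
    from tensorflow.python.ops.math_ops import reduce_prod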
Is this change proposed by @athenasaurav expected to be released anytime soon? Modifying a file from the package after installing does not seem like a good solution to me. Neither does downgrading numpy, especially if the rest of my code uses features only available in numpy 1.20.
Suddenly experiencing this problem in google colab with object detection API and TF1 with code that worked perfectly fine last week :thinking:
I tried the suggested solution of downgrading to an older version of numpy, but it seems that something in the object detection api is built using numpy >= 1.20 because I get this error:
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
from tensorflow.python.ops.math_ops import reduce_prod
The above import works fine for me.
Location: -\Lib\site-packages\tensorflow_core\python\ops\array_ops.py
I have the same error when training on my data. I also tried downgrading the NumPy version, but that did not fix it. Finally, I solved the issue. Try the commands below:
!pip install numpy==1.17.4
!pip install pycocotools==2.0.0
These are the results I get:
I have the same error as what you are getting. I'm using numpy 1.20.0 with TensorFlow 2.4.1. I'm convinced that numpy is the problem, but I'm honestly too new at training models and using TF. This is the tutorial I've been following: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html Have you had any luck with solving this issue?
Try downgrading numpy to 1.19.5 (pypi_0 pypi):
pip install numpy==1.19.5
After that, check with conda list which numpy version you have installed.
this worked for me too.
I had the same problem when using Colab and fixed it by reinstalling numpy==1.19.0. First I tried switching to Python 3.6 as dademiller360 said, but somehow I failed because I don't know how to change the Python environment of TF. Then I simply ran the following command in Colab: !python -m pip install -U numpy==1.19.0 and it worked!
This worked for me:
OS: Windows 10 Python: 3.7.4 Virtualization: virtualenv
call python -m pip install tensorflow==2.4.1 --force-reinstall
call python -m pip install numpy==1.19.5 --force-reinstall
call python -m pip install h5py==2.10.0 --force-reinstall
Simply change tensorflow or tensorflow-gpu to an older version: $ pip install tensorflow-gpu==2.5.0
This works fine for me:
OS: MacOS Monterey Python: 3.8.16 (with Anaconda)
tensorflow: 2.11.0 numpy: 1.22.4
These were the versions of the packages on Google Colab where things were working (in particular, I found the bug when working with the GPflow library).
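For anyone comparing their environment against the combinations reported in this thread, a small diagnostic snippet (not a fix) to print the relevant versions:

# Print the Python, TensorFlow and NumPy versions of the active environment.
import sys
import numpy as np
import tensorflow as tf

print("python:", sys.version.split()[0])
print("tensorflow:", tf.__version__)
print("numpy:", np.__version__)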
Downgrading numpy worked for me: pip install numpy==1.19.5
Prerequisites
Please answer the following questions for yourself before submitting an issue.
1. The entire URL of the file you are using
https://github.com/tensorflow/models/blob/master/research/object_detection/model_main_tf2.py
2. Describe the bug
I am trying to train an object detection model on custom data using the tutorial at the link. I tested for environment faults using https://github.com/tensorflow/models/blob/master/research/object_detection/builders/model_builder_tf2_test.py. All tests passed. But when I start training, the model gives this error. Logs:
6. System information