tensorflow / models

Models and examples built with TensorFlow

NotImplementedError: Cannot convert a symbolic Tensor (strided_slice:0) to a numpy array. #9706

Open aniketbote opened 3 years ago

aniketbote commented 3 years ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

1. The entire URL of the file you are using

https://github.com/tensorflow/models/blob/master/research/object_detection/model_main_tf2.py

2. Describe the bug

I am trying to train an object detection model on custom data following the tutorial at this link. I tested for environment faults using https://github.com/tensorflow/models/blob/master/research/object_detection/builders/model_builder_tf2_test.py; all tests passed. But when I start training, the model errors out. Logs:

2021-02-05 14:26:00.620416: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2021-02-05 14:26:03.557625: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2021-02-05 14:26:03.790381: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0002:00:00.0 name: Tesla M60 computeCapability: 5.2
coreClock: 1.1775GHz coreCount: 16 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 149.31GiB/s
2021-02-05 14:26:03.796912: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2021-02-05 14:26:03.805410: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2021-02-05 14:26:03.813387: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2021-02-05 14:26:03.817933: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2021-02-05 14:26:03.827291: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2021-02-05 14:26:03.834728: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2021-02-05 14:26:03.850086: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2021-02-05 14:26:03.857292: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2021-02-05 14:26:03.860948: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2021-02-05 14:26:03.875535: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x80ec5ff940 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-02-05 14:26:03.880482: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2021-02-05 14:26:03.885168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0002:00:00.0 name: Tesla M60 computeCapability: 5.2
coreClock: 1.1775GHz coreCount: 16 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 149.31GiB/s
2021-02-05 14:26:03.891972: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
2021-02-05 14:26:03.895720: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
2021-02-05 14:26:03.899501: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
2021-02-05 14:26:03.904606: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
2021-02-05 14:26:03.908422: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
2021-02-05 14:26:03.912343: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
2021-02-05 14:26:03.916413: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2021-02-05 14:26:03.924000: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
2021-02-05 14:26:04.700603: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength
1 edge matrix:
2021-02-05 14:26:04.704509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0
2021-02-05 14:26:04.706737: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N
2021-02-05 14:26:04.726496: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7048 MB memory) -> physical GPU (device: 0, name: Tesla M60, pci bus id: 0002:00:00.0, compute capability: 5.2)
2021-02-05 14:26:04.736932: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x810d9b6950 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-02-05 14:26:04.741982: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Tesla M60, Compute Capability 5.2
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
I0205 14:26:04.749317  8188 mirrored_strategy.py:500] Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0',)
INFO:tensorflow:Maybe overwriting train_steps: None
I0205 14:26:04.755326  8188 config_util.py:552] Maybe overwriting train_steps: None
INFO:tensorflow:Maybe overwriting use_bfloat16: False
I0205 14:26:04.755326  8188 config_util.py:552] Maybe overwriting use_bfloat16: False
INFO:tensorflow:Reading unweighted datasets: ['E:/DS_2020_Wildlife/Multi_Class_Classification/Tensorflow/workspace/annotations/train.record']
I0205 14:26:04.923312  8188 dataset_builder.py:163] Reading unweighted datasets: ['E:/DS_2020_Wildlife/Multi_Class_Classification/Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Reading record datasets for input file: ['E:/DS_2020_Wildlife/Multi_Class_Classification/Tensorflow/workspace/annotations/train.record']
I0205 14:26:04.926311  8188 dataset_builder.py:80] Reading record datasets for input file: ['E:/DS_2020_Wildlife/Multi_Class_Classification/Tensorflow/workspace/annotations/train.record']
INFO:tensorflow:Number of filenames to read: 1
I0205 14:26:04.926311  8188 dataset_builder.py:81] Number of filenames to read: 1
WARNING:tensorflow:num_readers has been reduced to 1 to match input file shards.
W0205 14:26:04.926311  8188 dataset_builder.py:87] num_readers has been reduced to 1 to match input file shards.
WARNING:tensorflow:From E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py:101: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
W0205 14:26:04.928313  8188 deprecation.py:317] From E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py:101: parallel_interleave (from tensorflow.python.data.experimental.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.interleave(map_func, cycle_length, block_length, num_parallel_calls=tf.data.experimental.AUTOTUNE)` instead. If sloppy execution is desired, use `tf.data.Options.experimental_deterministic`.
WARNING:tensorflow:From E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
W0205 14:26:04.952313  8188 deprecation.py:317] From E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py:236: DatasetV1.map_with_legacy_function (from tensorflow.python.data.ops.dataset_ops) is
deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Dataset.map()
Traceback (most recent call last):
  File "model_main_tf2.py", line 115, in <module>
    tf.compat.v1.app.run()
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\platform\app.py", line 40,
in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\absl\app.py", line 300, in run
    _run_main(main, args)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\absl\app.py", line 251, in _run_main
    sys.exit(main(argv))
  File "model_main_tf2.py", line 106, in main
    model_lib_v2.train_loop(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\model_lib_v2.py", line 569,
in train_loop
    load_fine_tune_checkpoint(detection_model,
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\model_lib_v2.py", line 352,
in load_fine_tune_checkpoint
    features, labels = iter(input_dataset).next()
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 858, in __iter__
    iterators, element_spec = _create_iterators_per_worker_with_input_context(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 1401, in _create_iterators_per_worker_with_input_context
    dataset = dataset_fn(ctx)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\model_lib_v2.py", line 521,
in train_dataset_fn
    train_input = inputs.train_input(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\inputs.py", line 893, in train_input
    dataset = INPUT_BUILDER_UTIL_MAP['dataset_build'](
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py", line 251, in build
    dataset = dataset_map_fn(dataset, decoder.decode, batch_size,
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\builders\dataset_builder.py", line 236, in dataset_map_fn
    dataset = dataset.map_with_legacy_function(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line
324, in new_func
    return func(*args, **kwargs)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 2402, in map_with_legacy_function
    ParallelMapDataset(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 4016, in __init__
    self._map_func = StructuredFunctionWrapper(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3196, in __init__
    self._function.add_to_graph(ops.get_default_graph())
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\function.py", line 544, in add_to_graph
    self._create_definition_if_needed()
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\function.py", line 376, in _create_definition_if_needed
    self._create_definition_if_needed_impl()
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\function.py", line 398, in _create_definition_if_needed_impl
    temp_graph = func_graph_from_py_func(
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\function.py", line 969, in func_graph_from_py_func
    outputs = func(*func_graph.inputs)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3188, in wrapper_fn
    ret = _wrapper_helper(*args)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 3156, in _wrapper_helper
    ret = autograph.tf_convert(func, ag_ctx)(*nested_args)
  File "E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 265, in wrapper
    raise e.ag_error_metadata.to_exception(e)
NotImplementedError: in user code:

    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\object_detection\data_decoders\tf_example_decoder.py:524 default_groundtruth_weights  *
        [tf.shape(tensor_dict[fields.InputDataFields.groundtruth_boxes])[0]],
    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\ops\array_ops.py:2967 ones  **
        output = _constant_if_small(one, shape, dtype, name)
    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\ops\array_ops.py:2662 _constant_if_small
        if np.prod(shape) < 1000:
    <__array_function__ internals>:5 prod

    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\numpy\core\fromnumeric.py:3030 prod
        return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\numpy\core\fromnumeric.py:87 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    E:\DS_2020_Wildlife\Multi_Class_Classification\Tensorflow\venv\lib\site-packages\tensorflow\python\framework\ops.py:748 __array__
        raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy"

    NotImplementedError: Cannot convert a symbolic Tensor (strided_slice:0) to a numpy array.
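
For reference, the failure bottoms out in np.prod() being called on a shape list that contains a symbolic Tensor (tf.shape(...)[0]). A minimal sketch that reproduces the same error outside the Object Detection API (illustrative only; assumes TF 2.4/2.5 with NumPy >= 1.20):

import tensorflow as tf

@tf.function
def default_weights(boxes):
    # Mirrors tf_example_decoder.default_groundtruth_weights: tf.ones() is given
    # a shape list containing a symbolic Tensor. Inside tf.ones(),
    # _constant_if_small() calls np.prod(shape), which tries to convert the
    # symbolic Tensor to a NumPy array and raises NotImplementedError.
    return tf.ones([tf.shape(boxes)[0]], dtype=tf.float32)

default_weights(tf.zeros([3, 4]))  # NotImplementedError with NumPy >= 1.20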

6. System information

From the logs and the follow-up below: Windows, Python 3.8, CUDA 10.1 / cuDNN 7, GPU: Tesla M60 (7.92 GiB).

dademiller360 commented 3 years ago

System information
• OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux
• TensorFlow installed from (source or binary): pip
• TensorFlow version (use command below): 2.4.1
• Python version: 3.8
• CUDA/cuDNN version: CUDA Toolkit 10.1, cuDNN 7.6.5
• GPU model and memory: Quadro M2000

I have the same error if I use numpy 1.20.0
NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array.

If I use numpy 1.19.5 I get: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

Tried with TF 2.2.2 as well; same errors in both cases.

dademiller360 commented 3 years ago

fixed using python 3.6

aniketbote commented 3 years ago

fixed using python 3.6

Thank you @dademiller360, changing the Python version from 3.8 to 3.6 fixed the issue. @saikumarchalla, can you clarify whether this is intended behavior or a bug? If it's not a bug, you can close the issue.

CloneHub94 commented 3 years ago

I have the same problem, but when I switch from Python 3.8.5 to 3.6 I get the following error:

Traceback (most recent call last):
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
    from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "model_main_tf2.py", line 31, in <module>
    import tensorflow.compat.v2 as tf
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\__init__.py", line 39, in <module>
    from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
    from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The specified module could not be found.

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

Anybody know how to fix this issue?

AKuperus7 commented 3 years ago

Same issue, but when I switch to Python 3.6 and try to install the Tensorflow Object Detection API using the research/object_detection/packages/tf2/setup.py file, I get this error:

Traceback (most recent call last):
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 152, in save_modules
    yield saved
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 193, in setup_context
    yield
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 254, in run_setup
    _execfile(setup_script, ns)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 43, in _execfile
    exec(code, globals, locals)
  File "C:\Users\tinyr\AppData\Local\Temp\easy_install-3subbki0\pandas-1.2.2\setup.py", line 761, in <module>
  File "C:\Users\tinyr\AppData\Local\Temp\easy_install-3subbki0\pandas-1.2.2\setup.py", line 731, in setup_package
  File "C:\Users\tinyr\AppData\Local\Temp\easy_install-3subbki0\pandas-1.2.2\setup.py", line 505, in maybe_cythonize
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\Cython\Build\Dependencies.py", line 1079, in cythonize
    nthreads, initializer=_init_multiprocessing_helper)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\pool.py", line 174, in __init__
    self._repopulate_pool()
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
    w.start()
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\popen_spawn_win32.py", line 43, in __init__
    with open(wfd, 'wb', closefd=True) as to_child:
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 421, in _open
    if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path):
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 432, in _ok
    realpath = os.path.normcase(os.path.realpath(path))
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\ntpath.py", line 548, in abspath
    return normpath(_getfullpathname(path))
TypeError: _getfullpathname: path should be string, bytes or os.PathLike, not int

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "object_detection/packages/tf2/setup.py", line 43, in <module>
    python_requires='>3.6',
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\__init__.py", line 153, in setup
    return distutils.core.setup(**attrs)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\distutils\dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\distutils\dist.py", line 974, in run_command
    cmd_obj.run()
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\install.py", line 67, in run
    self.do_egg_install()
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\install.py", line 117, in do_egg_install
    cmd.run(show_deprecation=False)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 408, in run
    self.easy_install(spec, not self.no_deps)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 650, in easy_install
    return self.install_item(None, spec, tmpdir, deps, True)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 697, in install_item
    self.process_distribution(spec, dist, deps)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 745, in process_distribution
    [requirement], self.local_index, self.easy_install
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\pkg_resources\__init__.py", line 768, in resolve
    replace_conflicting=replace_conflicting
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\pkg_resources\__init__.py", line 1051, in best_match
    return self.obtain(req, installer)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\pkg_resources\__init__.py", line 1063, in obtain
    return installer(requirement)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 669, in easy_install
    return self.install_item(spec, dist.location, tmpdir, deps)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 695, in install_item
    dists = self.install_eggs(spec, download, tmpdir)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 890, in install_eggs
    return self.build_and_install(setup_script, setup_base)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 1162, in build_and_install
    self.run_setup(setup_script, setup_base, args)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\command\easy_install.py", line 1146, in run_setup
    run_setup(setup_script, args)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 257, in run_setup
    raise
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 193, in setup_context
    yield
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 164, in save_modules
    saved_exc.resume()
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 139, in resume
    raise exc.with_traceback(self._tb)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 152, in save_modules
    yield saved
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 193, in setup_context
    yield
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 254, in run_setup
    _execfile(setup_script, ns)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 43, in _execfile
    exec(code, globals, locals)
  File "C:\Users\tinyr\AppData\Local\Temp\easy_install-3subbki0\pandas-1.2.2\setup.py", line 761, in <module>
  File "C:\Users\tinyr\AppData\Local\Temp\easy_install-3subbki0\pandas-1.2.2\setup.py", line 731, in setup_package
  File "C:\Users\tinyr\AppData\Local\Temp\easy_install-3subbki0\pandas-1.2.2\setup.py", line 505, in maybe_cythonize
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\Cython\Build\Dependencies.py", line 1079, in cythonize
    nthreads, initializer=_init_multiprocessing_helper)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\context.py", line 119, in Pool
    context=self.get_context())
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\pool.py", line 174, in __init__
    self._repopulate_pool()
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\pool.py", line 239, in _repopulate_pool
    w.start()
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\multiprocessing\popen_spawn_win32.py", line 43, in __init__
    with open(wfd, 'wb', closefd=True) as to_child:
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 421, in _open
    if mode not in ('r', 'rt', 'rb', 'rU', 'U') and not self._ok(path):
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\site-packages\setuptools\sandbox.py", line 432, in _ok
    realpath = os.path.normcase(os.path.realpath(path))
  File "C:\Users\tinyr\anaconda3\envs\squirrel\lib\ntpath.py", line 548, in abspath
    return normpath(_getfullpathname(path))
TypeError: _getfullpathname: path should be string, bytes or os.PathLike, not int
CloneHub94 commented 3 years ago

In the meantime I solved it by using Colab for my object detection, but I would like to be able to use my PC as well.

aniketbote commented 3 years ago

I have the same problem but when I modify from Python 3.8.5 to 3.6 I get the following error:

Traceback (most recent call last): File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "model_main_tf2.py", line 31, in import tensorflow.compat.v2 as tf File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflowinit.py", line 41, in from tensorflow.python.tools import module_util as _module_util File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\pythoninit.py", line 39, in from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in raise ImportError(msg) ImportError: Traceback (most recent call last): File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: The specified module could not be found.

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

Anybody know how to fix this issue?

Have you installed tensorflow correctly? This may be because the required C++ runtime (e.g., the Microsoft Visual C++ Redistributable) is not present on your system.

CloneHub94 commented 3 years ago

These are the results I get when I look up my tensorflow installation. Is there something missing? (screenshot)

aniketbote commented 3 years ago

These are the results I get when I look up my tensorflow installation. Is there something missing? (screenshot)

Can you check if this works?

python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

If it works, tensorflow is installed correctly. Otherwise, there is something wrong with the tensorflow installation rather than with the Object Detection API.
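
(For reference, on a working install that command prints a scalar such as tf.Tensor(-356.75, shape=(), dtype=float32); the value itself is random, and it may be preceded by the usual GPU library loading messages.)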

CloneHub94 commented 3 years ago

These are the results I get:

(screenshot)

rav-en commented 3 years ago

This is the results I get:

image

I have the same error as you are getting. I'm using numpy 1.20.0 with TensorFlow 2.4.1.

I'm convinced that numpy is the problem, but I'm honestly too new to training models and using TF. This is the tutorial I've been following: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html

Have you had any luck solving this issue?

CloneHub94 commented 3 years ago

No, unfortunately. I'm just using Colab for the moment and hoping that the inference part of TensorFlow object detection will work on my computer.

dademiller360 commented 3 years ago

This is the results I get: image

I have the same error as what you are getting. im using numpy 1.20.0 with Tensorflow 2.4.1

I'm convinced that numpy is the problem but im honestly too new at training models and using TF etc. This is the tutorial ive been following https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html

Have you had any luck with solving this issue?

Try downgrading numpy to 1.19.5 (it shows up as numpy 1.19.5 pypi_0 pypi in conda list):

pip install numpy==1.19.5

After that, check which numpy version you have installed with conda list.

CloneHub94 commented 3 years ago

@dademiller360 The problem is that if we do that, we get the error mentioned earlier: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject. Some people are able to solve this by downgrading to Python 3.6, but if I do that I get the error from my first post:

Traceback (most recent call last): File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: The specified module could not be found.

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "model_main_tf2.py", line 31, in import tensorflow.compat.v2 as tf File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflowinit.py", line 41, in from tensorflow.python.tools import module_util as module_util File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python_init.py", line 39, in from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in raise ImportError(msg) ImportError: Traceback (most recent call last): File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in from tensorflow.python._pywrap_tensorflow_internal import * ImportError: DLL load failed: The specified module could not be found.

Failed to load the native TensorFlow runtime.

See https://www.tensorflow.org/install/errors

for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.

glemarivero commented 3 years ago

Hi @aniketbote I posted this answer in Stack Overflow: https://stackoverflow.com/questions/66373169/tensorflow-2-object-detection-api-numpy-version-errors/66486051#66486051

I had this same issue:

NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

The problem was fixed by changing np.prod to reduce_prod in this function from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/array_ops.py:

def _constant_if_small(value, shape, dtype, name):
  try:
    if np.prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

Note that you need to import reduce_prod at the top of the file:

from tensorflow.math import reduce_prod
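
To see why the swap works, here is a quick illustrative check (a sketch, separate from the patch itself): tf.math.reduce_prod handles a shape list that contains symbolic Tensors, whereas np.prod tries to convert the Tensor to a NumPy array and fails.

import tensorflow as tf

@tf.function
def prod_of_shape(x):
    shape = [tf.shape(x)[0], 4]          # list containing a symbolic Tensor
    return tf.math.reduce_prod(shape)    # stays symbolic; no NumPy conversion

print(prod_of_shape(tf.zeros([3, 4])))   # tf.Tensor(12, shape=(), dtype=int32)
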
redradist commented 3 years ago

@glemarivero

Hi @aniketbote I posted this answer in Stack Overflow: https://stackoverflow.com/questions/66373169/tensorflow-2-object-detection-api-numpy-version-errors/66486051#66486051

I had this same issue:

NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

The problem was fixed by changing np.prod for reduce_prod in this function https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/array_ops.py

def _constant_if_small(value, shape, dtype, name):
  try:
    if np.prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

Note that you need to import reduce_prod at the top of the file:

from tensorflow.math import reduce_prod

I was able to fix the issue as you described, but by importing reduce_prod as:

from tensorflow.python.ops.math_ops import reduce_prod
...

Seems like it is a bug in tensorflow
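
(The import path matters here because array_ops.py is imported while the top-level tensorflow package is still being initialized, so importing from tensorflow.math inside that file can fail with a circular-import error, while tensorflow.python.ops.math_ops is already available at that point. That is presumably why only the second form worked in some setups.)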

Pipickin commented 3 years ago

@glemarivero

Hi @aniketbote I posted this answer in Stack Overflow: https://stackoverflow.com/questions/66373169/tensorflow-2-object-detection-api-numpy-version-errors/66486051#66486051 I had this same issue:

NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

The problem was fixed by changing np.prod for reduce_prod in this function https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/array_ops.py

def _constant_if_small(value, shape, dtype, name):
  try:
    if np.prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

Note that you need to import reduce_prod at the top of the file:

from tensorflow.math import reduce_prod

I was able to fix issue like you described but by importing reduc_prod as:

from tensorflow.python.ops.math_ops import reduce_prod
...

Seems like it is a bug in tensorflow

Hello, I'm new to TF object detection. I had the same error, but after I changed the import the error went away. However, I got a new error:

Fatal Python error: Aborted

Thread 0x00007f505b7fe700 (most recent call first):
  File "/usr/lib/python3.7/threading.py", line 296 in wait
  File "/usr/lib/python3.7/queue.py", line 170 in get
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/summary/writer/event_file_writer.py", line 159 in run
  File "/usr/lib/python3.7/threading.py", line 926 in _bootstrap_inner
  File "/usr/lib/python3.7/threading.py", line 890 in _bootstrap

Current thread 0x00007f50f699f740 (most recent call first):
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 699 in __init__
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/client/session.py", line 1585 in __init__
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/training/session_manager.py", line 194 in _restore_checkpoint
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/training/session_manager.py", line 290 in prepare_session
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/training/supervisor.py", line 734 in prepare_or_wait_for_session
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/training/supervisor.py", line 1003 in managed_session
  File "/usr/lib/python3.7/contextlib.py", line 112 in __enter__
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tf_slim/learning.py", line 745 in train
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/object_detection-0.1-py3.7.egg/object_detection/legacy/trainer.py", line 415 in train
  File "train.py", line 182 in main
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/util/deprecation.py", line 324 in new_func
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/absl/app.py", line 251 in _run_main
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/absl/app.py", line 303 in run
  File "/home/vlad/.virtualenvs/tf1_obj_det_p37/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40 in run
  File "train.py", line 186 in <module>
Aborted (core dumped)

Did you face this problem before? Or do you have any idea about this error?

Pipickin commented 3 years ago


I fixed this by adding CUDA_VISIBLE_DEVICES=""
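
In other words, hiding the GPU from TensorFlow. A minimal sketch, assuming the variable is set before TensorFlow is imported (it can also be exported in the shell instead):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # hide all GPUs from CUDA/TensorFlow

import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))  # expected: []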

glemarivero commented 3 years ago

@Pipickin But then you are running the training on CPU and not GPU. Were you able to train anything else? I just want to know if CUDA is set up correctly.

hayk314 commented 3 years ago

I had the same issue with newly installed tensorflow 2.2.0 and python 3.8.5. Installing tensorflow with pip will install numpy version 1.20.2. You can then downgrade numpy to version 1.18.4 (just uninstall with pip and install that particular version). Then everything works perfectly fine.

Pipickin commented 3 years ago

@glemarivero Hello. I trained my model via Google Colab, because my GPU does not have enough compute capability (I didn't even try it locally). I was just wondering why I got the error above.

BadMachine commented 3 years ago

It helps to uninstall pycocotools and reinstall it right after
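
(That is, roughly: pip uninstall pycocotools followed by pip install pycocotools, so its compiled extension gets rebuilt against the NumPy version that is actually installed, which is presumably why reinstalling helps with the numpy.ndarray size changed error.)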

bergen288 commented 3 years ago

I had the same error. It is fixed per glemarivero's workaround. array_ops.py is inside C:\Users\xxxxxx\AppData\Local\Programs\Python\Python38\Lib\site-packages\tensorflow\python\ops. Somehow, I had to do it this way:

import tensorflow as tf
# ... rest of array_ops.py unchanged ...
# inside _constant_if_small:
    if tf.math.reduce_prod(shape) < 1000:
Xinxi-Zhang commented 3 years ago

Try to downgrade numpy to numpy 1.19.5 pypi_0 pypi

pip install numpy==1.19.5

This one actually worked.

R4j4n commented 3 years ago

Just installed python==3.6.5 and it worked for me.

Blboun3 commented 3 years ago

I have the same error with Python 3.8.8, NumPy 1.20.2, TensorFlow 2.5.0.

I have the newest Arch Linux with the i3wm window manager, my processor is an i5 8400, my GPU is a GTX 1070 8GB, and I have 16GB of RAM. I am trying to follow this tutorial, and my .ipynb file is uploaded (renamed to .txt, just rename it back to .ipynb): Untitled.txt

My error:


NotImplementedError                       Traceback (most recent call last)
<ipython-input-16-29eef80a8637> in <module>
      1 model = Sequential()
----> 2 model.add(LSTM(units=96, return_sequences=True, input_shape=(x_train.shape[1], 1)))
      3 model.add(Dropout(0.2))
      4 model.add(LSTM(units=96,return_sequences=True))
      5 model.add(Dropout(0.2))

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
    520     self._self_setattr_tracking = False  # pylint: disable=protected-access
    521     try:
--> 522       result = method(self, *args, **kwargs)
    523     finally:
    524       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

~/anaconda3/lib/python3.8/site-packages/keras/engine/sequential.py in add(self, layer)
    206           # and create the node connecting the current layer
    207           # to the input layer we just created.
--> 208           layer(x)
    209           set_inputs = True
    210 

~/anaconda3/lib/python3.8/site-packages/keras/layers/recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
    658 
    659     if initial_state is None and constants is None:
--> 660       return super(RNN, self).__call__(inputs, **kwargs)
    661 
    662     # If any of `initial_state` or `constants` are specified and are Keras

~/anaconda3/lib/python3.8/site-packages/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
    943     # >> model = tf.keras.Model(inputs, outputs)
    944     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
--> 945       return self._functional_construction_call(inputs, args, kwargs,
    946                                                 input_list)
    947 

~/anaconda3/lib/python3.8/site-packages/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1081         layer=self, inputs=inputs, build_graph=True, training=training_value):
   1082       # Check input assumptions set after layer building, e.g. input shape.
-> 1083       outputs = self._keras_tensor_symbolic_call(
   1084           inputs, input_masks, args, kwargs)
   1085 

~/anaconda3/lib/python3.8/site-packages/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)
    814       return tf.nest.map_structure(keras_tensor.KerasTensor, output_signature)
    815     else:
--> 816       return self._infer_output_signature(inputs, args, kwargs, input_masks)
    817 
    818   def _infer_output_signature(self, inputs, args, kwargs, input_masks):

~/anaconda3/lib/python3.8/site-packages/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
    854           self._maybe_build(inputs)
    855           inputs = self._maybe_cast_inputs(inputs)
--> 856           outputs = call_fn(inputs, *args, **kwargs)
    857 
    858         self._handle_activity_regularization(inputs, outputs)

~/anaconda3/lib/python3.8/site-packages/keras/layers/recurrent_v2.py in call(self, inputs, mask, training, initial_state)
   1137 
   1138     # LSTM does not support constants. Ignore it during process.
-> 1139     inputs, initial_state, _ = self._process_inputs(inputs, initial_state, None)
   1140 
   1141     if isinstance(mask, list):

~/anaconda3/lib/python3.8/site-packages/keras/layers/recurrent.py in _process_inputs(self, inputs, initial_state, constants)
    858         initial_state = self.states
    859     elif initial_state is None:
--> 860       initial_state = self.get_initial_state(inputs)
    861 
    862     if len(initial_state) != len(self.states):

~/anaconda3/lib/python3.8/site-packages/keras/layers/recurrent.py in get_initial_state(self, inputs)
    640     dtype = inputs.dtype
    641     if get_initial_state_fn:
--> 642       init_state = get_initial_state_fn(
    643           inputs=None, batch_size=batch_size, dtype=dtype)
    644     else:

~/anaconda3/lib/python3.8/site-packages/keras/layers/recurrent.py in get_initial_state(self, inputs, batch_size, dtype)
   2506 
   2507   def get_initial_state(self, inputs=None, batch_size=None, dtype=None):
-> 2508     return list(_generate_zero_filled_state_for_cell(
   2509         self, inputs, batch_size, dtype))
   2510 

~/anaconda3/lib/python3.8/site-packages/keras/layers/recurrent.py in _generate_zero_filled_state_for_cell(cell, inputs, batch_size, dtype)
   2988     batch_size = tf.compat.v1.shape(inputs)[0]
   2989     dtype = inputs.dtype
-> 2990   return _generate_zero_filled_state(batch_size, cell.state_size, dtype)
   2991 
   2992 

~/anaconda3/lib/python3.8/site-packages/keras/layers/recurrent.py in _generate_zero_filled_state(batch_size_tensor, state_size, dtype)
   3004 
   3005   if tf.nest.is_nested(state_size):
-> 3006     return tf.nest.map_structure(create_zeros, state_size)
   3007   else:
   3008     return create_zeros(state_size)

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/util/nest.py in map_structure(func, *structure, **kwargs)
    865 
    866   return pack_sequence_as(
--> 867       structure[0], [func(*x) for x in entries],
    868       expand_composites=expand_composites)
    869 

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/util/nest.py in <listcomp>(.0)
    865 
    866   return pack_sequence_as(
--> 867       structure[0], [func(*x) for x in entries],
    868       expand_composites=expand_composites)
    869 

~/anaconda3/lib/python3.8/site-packages/keras/layers/recurrent.py in create_zeros(unnested_state_size)
   3001     flat_dims = tf.TensorShape(unnested_state_size).as_list()
   3002     init_state_size = [batch_size_tensor] + flat_dims
-> 3003     return tf.zeros(init_state_size, dtype=dtype)
   3004 
   3005   if tf.nest.is_nested(state_size):

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
    204     """Call target, and fall back on dispatchers if there is a TypeError."""
    205     try:
--> 206       return target(*args, **kwargs)
    207     except (TypeError, ValueError):
    208       # Note: convert_to_eager_tensor currently raises a ValueError, not a

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py in wrapped(*args, **kwargs)
   2909 
   2910   def wrapped(*args, **kwargs):
-> 2911     tensor = fun(*args, **kwargs)
   2912     tensor._is_zeros_tensor = True
   2913     return tensor

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py in zeros(shape, dtype, name)
   2958           # Create a constant if it won't be very big. Otherwise create a fill
   2959           # op to prevent serialized GraphDefs from becoming too large.
-> 2960           output = _constant_if_small(zero, shape, dtype, name)
   2961           if output is not None:
   2962             return output

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py in _constant_if_small(value, shape, dtype, name)
   2894 def _constant_if_small(value, shape, dtype, name):
   2895   try:
-> 2896     if np.prod(shape) < 1000:
   2897       return constant(value, shape=shape, dtype=dtype, name=name)
   2898   except TypeError:

<__array_function__ internals> in prod(*args, **kwargs)

~/anaconda3/lib/python3.8/site-packages/numpy/core/fromnumeric.py in prod(a, axis, dtype, out, keepdims, initial, where)
   3028     10
   3029     """
-> 3030     return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
   3031                           keepdims=keepdims, initial=initial, where=where)
   3032 

~/anaconda3/lib/python3.8/site-packages/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
     85                 return reduction(axis=axis, out=out, **passkwargs)
     86 
---> 87     return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
     88 
     89 

~/anaconda3/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in __array__(self)
    865 
    866   def __array__(self):
--> 867     raise NotImplementedError(
    868         "Cannot convert a symbolic Tensor ({}) to a numpy array."
    869         " This error may indicate that you're trying to pass a Tensor to"

NotImplementedError: Cannot convert a symbolic Tensor (lstm_1/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
ankita0204 commented 3 years ago

It can easily be fixed by reinstalling pycocotools.

Blboun3 commented 3 years ago

I fixed it by downgrading NumPy to 1.16 (I guess .4 ?)

no-trick-pony commented 3 years ago

This is not "fixed" by downgrading your whole Python or numpy installation - other libraries depend on having an up-to-date version of numpy installed (pandas-ta being one of them). numpy 1.20 is now half a year old and Tensorflow should start supporting it.

Amir22010 commented 3 years ago

I was able to fix the issue. Go to C:\Users\khana\miniconda3\envs\tutorialenv\Lib\site-packages\tensorflow\python\ops\array_ops.py

Add:

from tensorflow.python.ops.math_ops import reduce_prod

Change _constant_if_small to this:

def _constant_if_small(value, shape, dtype, name):
  try:
    if reduce_prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

tensorflow - 2.5.0, python - 3.7.0

btburton42 commented 3 years ago

Try to downgrade numpy to numpy 1.19.5 pypi_0 pypi

pip install numpy==1.19.5

This one actually worked.

100% this. Didn't have to touch Python or TF.

YASHGUPTA2611 commented 3 years ago

Just downgrade the Numpy version to 1.19.5 and you are good to go.

VirajDeshwal commented 3 years ago

Downgrading Numpy won't fix it for some users. Writing down the errors with Numpy 1.21 and 1.19 below, and my fix.

Error with Python 3.8 and Numpy 1.21: NotImplementedError: Cannot convert a symbolic Tensor (cond_3/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

Error with Python 3.8 and Numpy 1.19.5: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

My solution: downgrade Python to 3.6 and install all your packages with an upgraded pip. I made a new conda environment and installed tensorflow and numpy using pip, and it worked for me:

conda create --name tf2 python=3.6
pip install --upgrade pip
pip install tensorflow

Hope it helps.

YASHGUPTA2611 commented 3 years ago

Hi @VirajDeshwal, yes, you are right, not working in a new environment can also cause this error. But anyone working on a new project has to create a new environment, otherwise these errors will occur.

VirajDeshwal commented 3 years ago

@YASHGUPTA2611 Try making a new conda env with python=3.6 and install TensorFlow. Hope it will work for you.

tomrh16 commented 3 years ago

It's now basically impossible to get numpy 1.19.5 to work with any other Python package. PLEASE can this be fixed so we can use numpy 1.20.2 or greater?

ankita0204 commented 3 years ago

@tomrh16 it can be easily fixed by reinstalling pycocotools

YASHGUPTA2611 commented 3 years ago

Hi @tomrh16, you can easily install any version of NumPy using Anaconda Navigator (screenshot).

joshua-cogliati-inl commented 3 years ago

i was able to fix issue Go to C:\Users\khana\miniconda3\envs\tutorialenv\Lib\site-packages\tensorflow\python\ops\array_ops.py Add - from tensorflow.python.ops.math_ops import reduce_prod Change to this def _constant_if_small(value, shape, dtype, name): try: if reduce_prod(shape) < 1000: return constant(value, shape=shape, dtype=dtype, name=name) except TypeError: # Happens when shape is a Tensor, list with Tensor elements, etc. pass return None

tensorflow - 2.5.0 python - 3.7.0

This also fixes it for me (Python 3.7.10, tensorflow 2.4.1, numpy 1.20). @Amir22010's fix as a patch:

--- tensorflow/python/ops/array_ops.py.orig 2021-07-06 16:14:00.000000000 -0600
+++ tensorflow/python/ops/array_ops.py  2021-07-06 16:15:07.000000000 -0600
@@ -39,6 +39,7 @@
 # pylint: disable=wildcard-import
 from tensorflow.python.ops.gen_array_ops import *
 from tensorflow.python.ops.gen_array_ops import reverse_v2 as reverse  # pylint: disable=unused-import
+from tensorflow.python.ops.math_ops import reduce_prod
 from tensorflow.python.types import core
 from tensorflow.python.util import deprecation
 from tensorflow.python.util import dispatch
@@ -2801,7 +2802,7 @@

 def _constant_if_small(value, shape, dtype, name):
   try:
-    if np.prod(shape) < 1000:
+    if reduce_prod(shape) < 1000:
       return constant(value, shape=shape, dtype=dtype, name=name)
   except TypeError:
     # Happens when shape is a Tensor, list with Tensor elements, etc.
joshua-cogliati-inl commented 3 years ago

There is an open pull request to fix this: https://github.com/tensorflow/tensorflow/pull/48935

athenasaurav commented 3 years ago

i was able to fix issue Go to C:\Users\khana\miniconda3\envs\tutorialenv\Lib\site-packages\tensorflow\python\ops\array_ops.py Add - from tensorflow.python.ops.math_ops import reduce_prod Change to this def _constant_if_small(value, shape, dtype, name): try: if reduce_prod(shape) < 1000: return constant(value, shape=shape, dtype=dtype, name=name) except TypeError: # Happens when shape is a Tensor, list with Tensor elements, etc. pass return None

tensorflow - 2.5.0 python - 3.7.0

This is the perfect solution without downgrading numpy or anything else.

In case you are in the base conda env, the path will be like this:

C:\Users\USERNAME\anaconda3\Lib\site-packages\tensorflow\python\ops\array_ops.py

Just open the file in any editor and add the following import near the top:

from tensorflow.python.ops.math_ops import reduce_prod

Then search for def _constant_if_small and replace the entire function with this:

def _constant_if_small(value, shape, dtype, name):
  try:
    if reduce_prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

P.S.: My Python version: 3.8.5, NumPy version: 1.21.0, TensorFlow version: 2.4.0

mikeperalta1 commented 3 years ago

+1 Editing array_ops.py as shown above worked for me.

Genius!

jessicametzger commented 3 years ago

I got the same NotImplementedError after making various minor changes to my conda environment. It was resolved when I force-reinstalled the TensorFlow models research package:

$ pwd
/path/to/my/dir/Tensorflow/models/research/
$ python -m pip install . --force-reinstall
claytonfk commented 3 years ago

I got the same problem with TensorFlow 2.5 and different versions of Python.
I could only solve it by editing array_ops.py as proposed by joshua-cogliati-inl above.

mijkami commented 3 years ago

With:

  • Tensorflow 2.6.0
  • Numpy 1.21.1

I had

NotImplementedError: Cannot convert a symbolic Tensor (simple_rnn/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

When I tried to change the NumPy version (downgrading to 1.19.5), this error was quickly followed by: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

This was solved by uninstalling and reinstalling tensorflow AND then reinstalling numpy (installing tensorflow had downgraded numpy to 1.19.5), ending with fresh installs of TF 2.6.0 and NumPy 1.21.1
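For reference, a minimal sketch of the kind of Keras model that triggers the simple_rnn/strided_slice error quoted above on affected TF/NumPy combinations (the layer sizes and input shape here are arbitrary assumptions):

import tensorflow as tf

# Building a SimpleRNN layer is enough to reach _constant_if_small in
# array_ops.py on affected versions (e.g. TF 2.4 with NumPy >= 1.20).
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(10, 8)),
    tf.keras.layers.Dense(1),
])
# On affected versions this raises:
# NotImplementedError: Cannot convert a symbolic Tensor
# (simple_rnn/strided_slice:0) to a numpy array.
model.summary()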

dh7hong commented 3 years ago

Try making a new Python environment and installing tensorflow==2.4.0 (a sketch is below).
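A minimal sketch of that suggestion, using venv (the environment name is an assumption):

python -m venv tf24-env
source tf24-env/bin/activate    # on Windows: tf24-env\Scripts\activate
pip install --upgrade pip
pip install tensorflow==2.4.0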

sowmyyav commented 3 years ago

With:

  • Tensorflow 2.6.0
  • Numpy 1.21.1

I had

NotImplementedError: Cannot convert a symbolic Tensor (simple_rnn/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported

When I tried to change the NumPy version (downgrading to 1.19.5), this error was quickly followed by: ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject

This was solved by uninstalling and reinstalling tensorflow AND then reinstalling numpy (installing tensorflow had downgraded numpy to 1.19.5), ending with fresh installs of TF 2.6.0 and NumPy 1.21.1:

  • pip uninstall tensorflow
  • pip install tensorflow
  • pip uninstall numpy
  • pip install numpy

Thanks mijkami,

This solution worked for me: uninstalling and reinstalling tensorflow and numpy.

My error with a GTX 1080 GPU, CUDA 11.2, Python 3.8, tensorflow 2.5, and numpy 1.19.5 (after following the suggestions above to downgrade numpy with pip install --user numpy==1.19.5) was

NotImplementedError: Cannot convert a symbolic Tensor (bidirectional/forward_lstm/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.

After re-installation using

pip uninstall tensorflow
pip install --upgrade tensorflow    (source: https://www.tensorflow.org/install/pip#virtual-environment-install)
pip uninstall numpy
pip install numpy

I got TensorFlow version 2.6.0, Keras version 2.6.0, Python 3.8.11, NumPy 1.21.2, and the GPU is available. This solved the error and the model (a bidirectional LSTM) is fitting.

Reflectioner commented 3 years ago

I was able to fix the issue. Go to C:\Users\khana\miniconda3\envs\tutorialenv\Lib\site-packages\tensorflow\python\ops\array_ops.py, add from tensorflow.python.ops.math_ops import reduce_prod, and change _constant_if_small so that it uses reduce_prod(shape) < 1000 (the full function is shown below). tensorflow - 2.5.0, python - 3.7.0

This is the perfect solution without downgrading numpy or anything else.

If you are in the base conda env, the path will look like this:

C:\Users\USERNAME\anaconda3\Lib\site-packages\tensorflow\python\ops\array_ops.py

Just open the file in any editor and add the following import near the top:

from tensorflow.python.ops.math_ops import reduce_prod

then search for def _constant_if_small and replace the entire function with this:

def _constant_if_small(value, shape, dtype, name):
  try:
    if reduce_prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

P.S.: My Python version: 3.8.5, NumPy version: 1.21.0, TensorFlow version: 2.4.0

Perfect, thanks a lot man. Just one little touch I had to make: instead of importing reduce_prod, I added import tensorflow as tf and used tf.math.reduce_prod in place of reduce_prod (a sketch of this variant follows). Other than that everything is perfect; I didn't need to downgrade or install anything.
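A sketch of the variant described above. Placing the import inside the function (rather than at the top of the file) is an assumption here, to avoid a circular import while TensorFlow itself is loading; constant is already defined in array_ops.py, so only the product call changes:

def _constant_if_small(value, shape, dtype, name):
  try:
    # Imported lazily: array_ops.py is loaded while the tensorflow package
    # itself is still being imported, so a top-level import could be circular.
    import tensorflow as tf
    if tf.math.reduce_prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None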

fudingyu commented 3 years ago

I was able to fix the issue. Go to C:\Users\khana\miniconda3\envs\tutorialenv\Lib\site-packages\tensorflow\python\ops\array_ops.py, add from tensorflow.python.ops.math_ops import reduce_prod, and change _constant_if_small so that it uses reduce_prod(shape) < 1000 (the full function is shown below). tensorflow - 2.5.0, python - 3.7.0

This is the perfect solution without downgrading numpy or anything else.

If you are in the base conda env, the path will look like this:

C:\Users\USERNAME\anaconda3\Lib\site-packages\tensorflow\python\ops\array_ops.py

Just open the file in any editor and add the following import near the top:

from tensorflow.python.ops.math_ops import reduce_prod

then search for def _constant_if_small and replace the entire function with this:

def _constant_if_small(value, shape, dtype, name):
  try:
    if reduce_prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

P.S.: My Python version: 3.8.5, NumPy version: 1.21.0, TensorFlow version: 2.4.0

Thank you, this method has worked.

@Reflectioner your solution also worked.

ljubantomic01 commented 3 years ago

I was able to fix the issue. Go to C:\Users\khana\miniconda3\envs\tutorialenv\Lib\site-packages\tensorflow\python\ops\array_ops.py, add from tensorflow.python.ops.math_ops import reduce_prod, and change _constant_if_small so that it uses reduce_prod(shape) < 1000 (the full function is shown below). tensorflow - 2.5.0, python - 3.7.0

This is the perfect solution without downgrading numpy or anything else. If you are in the base conda env, the path will look like this:

C:\Users\USERNAME\anaconda3\Lib\site-packages\tensorflow\python\ops\array_ops.py

Just open the file in any editor and add the following import near the top:

from tensorflow.python.ops.math_ops import reduce_prod

then search for def _constant_if_small and replace the entire function with this:

def _constant_if_small(value, shape, dtype, name):
  try:
    if reduce_prod(shape) < 1000:
      return constant(value, shape=shape, dtype=dtype, name=name)
  except TypeError:
    # Happens when shape is a Tensor, list with Tensor elements, etc.
    pass
  return None

P.S.: My Python version: 3.8.5, NumPy version: 1.21.0, TensorFlow version: 2.4.0

Perfect, thanks a lot man. Just one little touch I had to make: instead of importing reduce_prod, I added import tensorflow as tf and used tf.math.reduce_prod in place of reduce_prod. Other than that everything is perfect; I didn't need to downgrade or install anything.

It worked 👍! In my case, array_ops.py was in .../site-packages/tensorflow-core/python/