rockchip-linux / rknn-toolkit


1.7.0 ONNX quantization fails #84

Open · wswday opened this issue 3 years ago

wswday commented 3 years ago

Quantization works when the dataset contains a single image, but fails as soon as it contains multiple images.
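For context, a minimal sketch of the RKNN Toolkit 1.x quantization flow this report is about; the model path, preprocessing values, and target platform below are placeholders, not taken from the reporter's actual script:

```python
from rknn.api import RKNN

rknn = RKNN()

# Placeholder preprocessing and platform values; adjust to the real model.
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            target_platform=['rv1126'])

rknn.load_onnx(model='./model.onnx')          # placeholder ONNX path

# dataset.txt lists one calibration image path per line; per this report,
# a single-line file works while a multi-line file triggers the error below.
rknn.build(do_quantization=True, dataset='./dataset.txt')

rknn.export_rknn('./model.rknn')
rknn.release()
```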

done
--> Building model
W:tensorflow:From /home/xyz/miniconda3/envs/rknn/lib/python3.6/site-packages/rknn/api/rknn.py:257: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.

W:tensorflow:From /home/xyz/miniconda3/envs/rknn/lib/python3.6/site-packages/rknn/api/rknn.py:257: The name tf.FIFOQueue is deprecated. Please use tf.queue.FIFOQueue instead.

W:tensorflow:From /home/xyz/miniconda3/envs/rknn/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py:1814: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating: tf.py_func is deprecated in TF V2. Instead, there are two options available in V2.

  • tf.py_function takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape.
  • tf.numpy_function maintains the semantics of the deprecated tf.py_func (it is not differentiable, and manipulates numpy arrays). It drops the stateful argument making all functions stateful.
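As an aside on that warning, a minimal sketch of the two replacements TensorFlow suggests, assuming a TF build that provides both APIs:

```python
import numpy as np
import tensorflow as tf

def square_np(a):
    return np.square(a)                      # plain numpy logic, old tf.py_func style

x = tf.constant([1.0, 2.0, 3.0])

# Option 1: tf.py_function receives eager tensors (differentiable, GPU-capable)
y1 = tf.py_function(func=lambda t: t * t, inp=[x], Tout=tf.float32)

# Option 2: tf.numpy_function keeps tf.py_func's numpy-array semantics
y2 = tf.numpy_function(func=square_np, inp=[x], Tout=tf.float32)
```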

W:tensorflow:From /home/xyz/miniconda3/envs/rknn/lib/python3.6/site-packages/rknn/api/rknn.py:257: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

2021-08-17 16:03:34.209042: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.

Exception in thread Thread-1:
Traceback (most recent call last):
  File "rknn/base/acuitylib/provider/queue_provider.py", line 105, in rknn.base.acuitylib.provider.queue_provider.QueueProvider.run
  File "rknn/base/acuitylib/provider/text_provider.py", line 47, in rknn.base.acuitylib.provider.text_provider.TextProvider.get_batch
ValueError: I/O operation on closed file.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/xyz/miniconda3/envs/rknn/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "rknn/base/acuitylib/provider/queue_provider.py", line 99, in rknn.base.acuitylib.provider.queue_provider.QueueProvider.run
  File "rknn/base/acuitylib/provider/queue_provider.py", line 119, in rknn.base.acuitylib.provider.queue_provider.QueueProvider.run
  File "rknn/api/rknn_log.py", line 296, in rknn.api.rknn_log.RKNNLog.w
TypeError: w() takes exactly 1 positional argument (2 given)
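Reading the traceback: the real failure is the ValueError (the calibration file is read after being closed), and the secondary TypeError only means the logger's w() method was called with an argument count it does not accept, so the attempt to report the ValueError crashes instead. A hypothetical minimal reproduction of that masking pattern, not the actual rknn_log.py code:

```python
import io

def w():                                   # hypothetical logger: takes no message
    print("W: something went wrong")

def get_batch(fp):
    fp.close()
    return fp.readline()                   # ValueError: I/O operation on closed file

try:
    get_batch(io.StringIO("img1.jpg\nimg2.jpg\n"))
except ValueError as exc:
    w(str(exc))                            # TypeError: w() takes 0 positional arguments but 1 was given
```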

done
--> Export RKNN model
done

eRaul commented 3 years ago

> Quantization works when the dataset contains a single image, but fails as soon as it contains multiple images. [full log quoted above]

Thanks for the feedback; this error will be fixed in the next release. PS: this error should not affect quantization or saving of the model. If you run into other problems, please provide the relevant details.
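For anyone who wants to confirm the exported model is still usable despite the error, a rough sketch of a sanity check on the simulator; the paths and input shape are placeholders:

```python
import numpy as np
from rknn.api import RKNN

rknn = RKNN()
rknn.load_rknn('./model.rknn')               # placeholder path
rknn.init_runtime()                          # simulator; pass target= for real hardware

# Placeholder input shaped like the model's expected input.
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
outputs = rknn.inference(inputs=[img])
print([o.shape for o in outputs])
rknn.release()
```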

bobbybrownieu commented 10 months ago

Version 1.7.5 still has this error.

hrlee-cubox-ai commented 8 months ago

1.7.5: still the same.