xmos / ai_tools

AI applications and tools

Error converting some models with --analyze option #206

Closed: keithm-xmos closed this issue 4 years ago

keithm-xmos commented 4 years ago

```
Traceback (most recent call last):
  File "./xformer.py", line 71, in <module>
    analyze.print_report(tflite_output_path)
  File "/home/kmoulton/repos/hotdog/ai_tools/tflite2xcore/tflite2xcore/analyze.py", line 106, in print_report
    tensor_arena_size = calc_arena_size(model_content)
  File "/home/kmoulton/repos/hotdog/ai_tools/tflite2xcore/tflite2xcore/analyze.py", line 82, in calc_arena_size
    [logger.info(line) for line in interpreter.get_allocations().split("\n")]
  File "/home/kmoulton/repos/hotdog/ai_tools/tflite2xcore/tflite2xcore/xcore_interpreter.py", line 245, in get_allocations
    self._verify_allocated()
  File "/home/kmoulton/repos/hotdog/ai_tools/tflite2xcore/tflite2xcore/xcore_interpreter.py", line 232, in _verify_allocated
    self.allocate_tensors()
  File "/home/kmoulton/repos/hotdog/ai_tools/tflite2xcore/tflite2xcore/xcore_interpreter.py", line 255, in allocate_tensors
    self._check_status(lib.allocate_tensors(self.obj))
  File "/home/kmoulton/repos/hotdog/ai_tools/tflite2xcore/tflite2xcore/xcore_interpreter.py", line 237, in _check_status
    raise RuntimeError(self._error_msg.value.decode("utf-8"))
RuntimeError: Internal error: AllocateFromTail can not be called between two RequestScratchBufferInArena calls. Node XC_conv2d_deep (number 49f) failed to prepare with status 1
```

keithm-xmos commented 4 years ago

This is caused by the builtin CONV_2D operator calling AllocatePersistentBuffer() in its Prepare() function. The next call to RequestScratchBufferInArena() then raises the error here:

https://github.com/tensorflow/tensorflow/blob/d5ed5f9895cc10c1ac7be0a589312414af84f4e1/tensorflow/lite/micro/micro_allocator.cc#L699
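To illustrate the constraint, here is a minimal self-contained sketch (not the real TFLu allocator; all names and the commit step are simplifications) of the arena allocator's ordering rule: once a scratch buffer has been requested during a node's Prepare(), a tail (persistent) allocation is rejected until the pending scratch requests are committed.

```cpp
#include <cstddef>
#include <stdexcept>

// Sketch of the micro allocator's ordering constraint. A scratch request
// leaves the arena in an intermediate state; a tail allocation in that
// state triggers the same error seen in the traceback above.
class ArenaAllocatorSketch {
 public:
  void RequestScratchBufferInArena(std::size_t bytes) {
    scratch_request_in_flight_ = true;
    pending_scratch_bytes_ += bytes;
  }

  void* AllocateFromTail(std::size_t bytes) {
    if (scratch_request_in_flight_) {
      // Mirrors the check that produces the RuntimeError in the report.
      throw std::runtime_error(
          "AllocateFromTail can not be called between two "
          "RequestScratchBufferInArena calls.");
    }
    tail_ -= bytes;
    return &arena_[tail_];
  }

  // In this sketch, called once per node after its Prepare() finishes,
  // clearing the intermediate state so tail allocations are legal again.
  void CommitScratchRequests() { scratch_request_in_flight_ = false; }

 private:
  unsigned char arena_[1024] = {};
  std::size_t tail_ = sizeof(arena_);
  bool scratch_request_in_flight_ = false;
  std::size_t pending_scratch_bytes_ = 0;
};
```

Under this model, the builtin CONV_2D's Prepare() making a persistent (tail) allocation after another node's scratch request hits exactly this check.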

keithm-xmos commented 4 years ago

I have inquired whether this is a bug or a design issue in the builtin CONV_2D and DEPTHWISE_CONV_2D operators. If it turns out to be intended behavior, we have a workaround: override these builtin ops with an implementation that does not allocate persistent buffers in Prepare(). That is not a horrible solution, and it is an easy fix.
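The override workaround could look roughly like the op-resolver sketch below: later registrations for the same op name shadow earlier ones, so a patched CONV_2D kernel replaces the builtin. All names here are hypothetical, not the real TFLite Micro API.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// Hypothetical kernel: just a Prepare() callback for this sketch.
struct KernelSketch {
  std::function<void()> prepare;
};

// Op-resolver-style table. Registering an op name a second time
// overrides the earlier (builtin) registration.
class OpResolverSketch {
 public:
  void Register(const std::string& op, KernelSketch kernel) {
    kernels_[op] = std::move(kernel);
  }
  const KernelSketch& Find(const std::string& op) const {
    return kernels_.at(op);
  }

 private:
  std::map<std::string, KernelSketch> kernels_;
};
```

The patched kernel's Prepare() would only request scratch memory, never a persistent buffer, so it cannot trip the allocator's ordering check.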

keithm-xmos commented 4 years ago

According to Pete Warden, this is a bug and they have a fix pending review. Look for it to be merged in a few days.

keithm-xmos commented 4 years ago

The TFLu fix has been merged; see this commit: https://github.com/tensorflow/tensorflow/commit/59d177d9acabe8e70bc33e554a364d2620bc6999

keithm-xmos commented 4 years ago

There is still a lingering TFLu bug. See: https://github.com/tensorflow/tensorflow/issues/42964

keithm-xmos commented 4 years ago

TensorFlow devs are on it. See PR: https://github.com/tensorflow/tensorflow/pull/43109

keithm-xmos commented 4 years ago

Fixed by https://github.com/xmos/ai_tools/pull/220