Closed preusser closed 2 years ago
The armnn::QuantizedLstmLayer expects the batch dimension to come first, requiring inputs of shape BATCH_SIZE x INPUT_SIZE and BATCH_SIZE x HIDDEN_SIZE, as well as correspondingly shaped outputs: https://github.com/ARM-software/armnn/blob/branches/armnn_21_11/src/armnn/layers/QuantizedLstmLayer.cpp#L81
The corresponding accelerated implementation in the Arm ComputeLibrary NEON backend expects the opposite order, INPUT_SIZE x BATCH_SIZE and HIDDEN_SIZE x BATCH_SIZE: https://github.com/ARM-software/ComputeLibrary/blob/master/src/runtime/NEON/functions/NELSTMLayerQuantized.cpp#L259
It is currently impossible to pass the shape validations in both Arm NN and the ComputeLibrary unless square tensors are used, i.e. BATCH_SIZE == INPUT_SIZE == HIDDEN_SIZE, so that the two layouts coincide.
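A minimal sketch of the conflict, using illustrative helper functions (the names and shape lists below are not part of either API, just a model of the two validation rules):

```python
# Hypothetical helpers modeling the two conflicting shape validations.
def armnn_quantized_lstm_shapes(batch, input_size, hidden):
    # Arm NN's QuantizedLstmLayer validates batch-first shapes.
    return [(batch, input_size), (batch, hidden)]

def acl_nelstm_quantized_shapes(batch, input_size, hidden):
    # ComputeLibrary's NELSTMLayerQuantized expects the transposed order.
    return [(input_size, batch), (hidden, batch)]

def shapes_compatible(batch, input_size, hidden):
    # A tensor can satisfy both validations only if its shape is
    # unchanged by transposition.
    return armnn_quantized_lstm_shapes(batch, input_size, hidden) == \
           acl_nelstm_quantized_shapes(batch, input_size, hidden)

print(shapes_compatible(2, 5, 4))   # False: (2, 5) vs (5, 2)
print(shapes_compatible(3, 3, 3))   # True: square tensors are layout-agnostic
```

This is why only "cube" tensors with all dimensions equal slip through both checks.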
I can no longer reproduce the issue. An appropriate transpose appears to have been added in the translation process. Sorry for the noise.