Closed n-moussa closed 1 year ago
Hi @n-moussa
Could you please share the model with us?
Hi @n-moussa
Can you try with this patch?
https://review.mlplatform.org/c/ml/armnn/+/7730
That should get you past the segfault, let us know if you encounter any other issues.
Best regards, Mike
Thanks for the patch, it fixes the segfault.
I now get the following exception when running the optimizer with the default CpuRef backend:
terminate called after throwing an instance of 'armnn::LayerValidationException'
what(): FullyConnectedLayer: TensorShape set on OutputSlot[0] does not match the inferred shape. : [1,1,48] != [1,48]
Aborted (core dumped)
With more details:
#6 0x00007ffff72690c1 in armnn::ConditionalThrowIfNotEqual<armnn::LayerValidationException, armnn::TensorShape> (
message=Python Exception <class 'gdb.error'> There is no member named _M_dataplus.:
, leftHandSide=..., rightHandSide=...)
at armnn-22.05/include/armnn/Exceptions.hpp:197
#7 0x00007ffff7345c6c in armnn::Layer::ValidateAndCopyShape (this=0x691f10, outputShape=..., inferredShape=...,
shapeInferenceMethod=armnn::ShapeInferenceMethod::ValidateOnly, Python Exception <class 'gdb.error'> There is no member named _M_dataplus.:
layerName=, outputSlotIndex=0)
The code snippet used :
armnnTfLiteParser::ITfLiteParserPtr tflite_parser = armnnTfLiteParser::ITfLiteParser::Create();
armnn::INetworkPtr ml_model = tflite_parser->CreateNetworkFromBinaryFile(model_name.c_str());
/// Create the runtime
armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr runtime(armnn::IRuntime::Create(options));
/// Create the optimizer
std::vector<armnn::BackendId> default_backend = {armnn::Compute::CpuRef};
armnn::IOptimizedNetworkPtr optimizedNet = armnn::Optimize(
    *ml_model,
    default_backend,
    runtime->GetDeviceSpec()
);
I uploaded the model to a shared drive, following the EULA; I hope you can access it.
Hi @n-moussa
Can you try creating your TfLiteParser like this:
armnnTfLiteParser::ITfLiteParser::TfLiteParserOptions options;
options.m_AllowExpandedDims = true;
armnnTfLiteParser::ITfLiteParserPtr tflite_parser(
    armnnTfLiteParser::ITfLiteParser::Create(
        armnn::Optional<armnnTfLiteParser::ITfLiteParser::TfLiteParserOptions>(options)));
That may be enough to solve your problem.
Coming back to this, sorry for the delay. I was able to run the model under the ArmNN runtime; however, the outputs don't match those of the TFLite runtime.
I fed both runtimes the same input (10 arrays of zeros). Here are the outputs from the two runtimes:
[0.02496445 0.00159931 0.00135207 0.01229486 0.03196827 0.04249388
0.05009481 0.05039027 0.05352601 0.05435741]
[0.19643204 0.19643204 0.19643204 0.19643204 0.19643204 0.19643204
0.19643204 0.19643204 0.19643204 0.19643204]
I can see two issues. First, the outputs should be identical, since it's the same model with the same input data. Second, the ArmNN runtime does not seem to maintain the internal state of the LSTM, which is why it returns the exact same result for the exact same input. The TFLite runtime, by default, carries the LSTM's internal state from one prediction to the next.
Is there an option in the ArmNN runtime to maintain the internal state of the LSTM?
Thank you
Hi team, I'm running into a segmentation fault when trying to parse a TFLite model that uses the unidirectional sequence LSTM operator.
Here is the code snippet that I tried to run:
I'm testing it on x86-64 and here is the build command used with TF version 2.5.0 and ArmNN 22.05:
Here is the segfault log in debug mode:
Any help would be appreciated, thanks!