Closed: padoremu closed this issue 1 year ago
Any updates on this? I'm trying to use an LSTM on a microcontroller and keep getting the error (only 1 subgraph supported). Is there any way to get an RNN onto the micro?
Looping in @petewarden
Hi, apologies for asking again, but I'm wondering whether 'RNN support for TensorFlow Lite Micro' has seen any development?
Hi, I had a similar issue. Are there any updates?
Not sure if this is what you need, but I have figured out a workaround here: https://github.com/da03/TFLite-Micro-Seq2Seq. It's still a hacky solution in two respects: first, the embedding layer is implemented directly in C; second, I'm dumping only a single step of the LSTM and doing the for loop in C, since more than one subgraph is not supported.
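The single-step trick described above can be sketched in Python (the linked repo runs the loop in C; here the loop runs on the host via `tf.lite.Interpreter`). Everything below — shapes, unit counts, and signature names — is an illustrative assumption, not code from that repo:

```python
import numpy as np
import tensorflow as tf

# Illustrative sizes (assumptions, not taken from the linked repo).
UNITS, FEATURES, TIMESTEPS = 8, 4, 10

cell = tf.keras.layers.LSTMCell(UNITS)

# Export exactly one LSTM step, with the recurrent state as explicit
# inputs and outputs so the host loop can carry it across steps.
@tf.function(input_signature=[
    tf.TensorSpec([1, FEATURES], tf.float32),  # one input frame
    tf.TensorSpec([1, UNITS], tf.float32),     # hidden state h
    tf.TensorSpec([1, UNITS], tf.float32),     # cell state c
])
def step(x, h, c):
    out, (new_h, new_c) = cell(x, [h, c])
    return new_h, new_c

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [step.get_concrete_function()], cell)
single_step_model = converter.convert()

# Run the time loop outside the model, feeding the state back in each step.
interp = tf.lite.Interpreter(model_content=single_step_model)
runner = interp.get_signature_runner()
h = np.zeros((1, UNITS), np.float32)
c = np.zeros((1, UNITS), np.float32)
for _ in range(TIMESTEPS):
    frame = np.random.rand(1, FEATURES).astype(np.float32)
    out = runner(x=frame, h=h, c=c)
    h, c = out["output_0"], out["output_1"]
```

On a microcontroller the same loop would be written in C against the TFLM interpreter, but the structure — one Invoke per time step, state carried by the caller — is the same.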
Hi, any updates on this? Thank you.
Any updates? I have a similar issue. I tried building a univariate time series forecasting system with LSTMs that runs on a microcontroller. The model is built and compiled with Keras. The TF Lite version is 2.4.1. The model works fine and compiles, but when I try to allocate tensors in the Arduino IDE, the serial monitor says that only one subgraph is currently supported.
TFLM now supports many of the features (multiple subgraphs, additional OPs) that were blockers when this issue was first created. Please feel free to add any remaining missing features to this issue.
Hi, I still get the subgraph error. Are RNNs supposed to be supported now?
Would anyone like to provide an LSTM for the new ESP32-S3? I think Xtensa (or is it Cadence, with the LX series?) has kept that layer behind a paywall. For me, on microcontrollers, LSTM/GRU was a dead end for KWS, which has some great LSTM/GRU models, my favorite being CRNN, as it is lightweight compared to others for very similar accuracy.
Or do I have that wrong: does the code port to the ESP32, specifically the S3, since it has the vector instructions to make it much more capable of running models?
This particular issue is about RNNs with GRU cells. The previously unimplemented OPs and multiple-subgraph support are now part of the TFLM tree. My preference would be to limit this issue to problems related to GRU models.
Please create a separate issue for LSTM support - we are working on that and you can likely expect something in the coming months.
Looks like even though there is LSTM support now in TFLM, the TensorListFromTensor, TensorListReserve, and TensorListStack ops needed to support RaggedTensors are still not supported.
Is there any way to implement these on my own, or is there a specific reason (e.g., most MCUs lacking the instructions to perform them) why they are not included in TFLM?
Fixing the input dimensions with a signature and a concrete function gets around this issue for me, but it'd be nice not to have to fix the input size.
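The fix described above might look like this — a hedged sketch, assuming TF 2.x; the model, layer sizes, and sequence length are made up for illustration:

```python
import tensorflow as tf

# Hypothetical GRU model; layer sizes and sequence length are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 4)),
    tf.keras.layers.GRU(8),
])

# Pin the input dimensions with a signature and a concrete function so the
# converter does not need dynamic TensorList* ops for a variable-length axis.
run = tf.function(
    lambda x: model(x),
    input_signature=[tf.TensorSpec([1, 10, 4], tf.float32)],
)
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [run.get_concrete_function()], model)
tflite_model = converter.convert()
```

The trade-off is exactly the one noted above: the sequence length is baked into the flatbuffer, so a different input size needs a reconversion.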
"This issue is being marked as stale due to inactivity. Remove label or comment to prevent closure in 5 days."
"This issue is being closed because it has been marked as stale for 5 days with no further activity."
It seems that GRU models are still not supported. So is this still relevant?
Is there support for tf.keras.layers.GRU in TFLite Micro, the same way it was done for UNIDIRECTIONAL_SEQUENCE_LSTM in 2022? Specifically, how can I get a single GRU op in the TFLite model instead of an unrolled GRU with multiple FULLY_CONNECTED, MUL, SPLIT, and SUB ops? Attaching a link to an article about LSTM support in TFLM.
System information
Describe the feature and the current behavior/state. RNN support is currently missing in TensorFlow Lite Micro. I've been testing an RNN with GRU cells. Simple code (from here):
The current situation is (missing ops):

- experimental_new_converter=True, unroll=True: builtin ops SHAPE, TRANSPOSE, FILL, SPLIT_V, SUB, TANH
- experimental_new_converter=True, unroll=False: 3 subgraphs (only 1 subgraph supported)
- experimental_new_converter=False, unroll=True: builtin ops SPLIT_V, SUB, TANH
- experimental_new_converter=False, unroll=False: custom ops TensorListFromTensor, TensorListReserve, TensorListStack, While

My questions are: Is there a way (e.g., with experimental_new_converter=False, unroll=False) to convert the RNN as a single (unsupported) placeholder RNN op rather than splitting it into the four (unsupported) operators TensorListFromTensor, TensorListReserve, TensorListStack, and While? Thank you.
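The four converter configurations above can be reproduced with a sketch along these lines. This assumes TF 2.x; the model is illustrative, and on recent TF versions several of these cases now convert successfully, so the results will differ from the matrix reported when this issue was filed:

```python
import tensorflow as tf

def try_convert(unroll, new_converter):
    """Attempt conversion for one (unroll, experimental_new_converter) cell."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(10, 4)),   # illustrative shape
        tf.keras.layers.GRU(8, unroll=unroll),  # illustrative size
    ])
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.experimental_new_converter = new_converter
    try:
        return converter.convert()  # flatbuffer bytes on success
    except Exception as exc:        # missing-op errors surface here
        return exc

results = {(u, n): try_convert(u, n)
           for u in (True, False) for n in (True, False)}
for key, value in results.items():
    status = "ok" if isinstance(value, bytes) else "failed"
    print(key, status)
```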
Will this change the current API? How? No.
Who will benefit from this feature? Everybody who needs RNNs with TensorFlow Lite Micro.
Any other info? None.