Michael-F-Bryan opened this issue 3 years ago
@Mohit0928 is currently stuck on this when implementing YAMNet.
Inside the Rune, variable length tensors would be like any other tensor. If a proc block requires a certain size (e.g. because it is doing an FFT), it will enforce that size using runtime assertions.
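A minimal sketch of what "enforce that size using runtime assertions" could look like in a proc block. The `Fft` struct and `transform` method are illustrative names, not Rune's actual proc block API:

```rust
// Hypothetical proc block that only accepts a fixed-size input.
// Names and structure are illustrative, not Rune's real API.
struct Fft {
    /// Number of samples this FFT was configured for.
    nfft: usize,
}

impl Fft {
    /// Panics at runtime if the incoming tensor doesn't have the
    /// exact length this proc block requires.
    fn transform(&self, input: &[f32]) -> Vec<f32> {
        assert_eq!(
            input.len(),
            self.nfft,
            "FFT expects exactly {} samples, got {}",
            self.nfft,
            input.len()
        );
        // A real implementation would compute the FFT here; we just
        // pass the data through to keep the sketch self-contained.
        input.to_vec()
    }
}
```

The point is that variable-length tensors flow through the pipeline freely, and only the proc blocks with genuine size requirements reject mismatched inputs, at runtime rather than at build time.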
We might run into some issues with models because we insert checks to make sure the tensor being passed as a model's input/output has the right element type and shape. As far as I am aware, TensorFlow Lite models don't have any built-in mechanism to say "you can use whatever size you like for my input/output", so what happens in practice is the model picks an arbitrary placeholder shape (e.g. 1x1) and the user writes code that manually resizes the tensors to whatever they want.
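To make the "placeholder shape plus manual resize" workflow concrete, here is a sketch of the resize step in Rust. The `Tensor` type is hypothetical, assuming a flat byte buffer plus a shape vector; it is not a real Rune or librunecoral type:

```rust
/// Hypothetical flat tensor: a shape plus a contiguous byte buffer.
struct Tensor {
    shape: Vec<usize>,
    data: Vec<u8>,
}

impl Tensor {
    /// Reallocate the buffer to match a new shape, the way a caller
    /// would grow a model's 1x1 placeholder input before inference.
    /// New elements are zero-initialized.
    fn resize(&mut self, new_shape: &[usize]) {
        let len: usize = new_shape.iter().product();
        self.shape = new_shape.to_vec();
        self.data.resize(len, 0);
    }
}
```

So a model declaring a 1x1 input would have its tensor resized to, say, 1x256x256x3 before each inference, which is exactly the step our shape checks would currently reject.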
When loading a model we'll need a way to tell the loader that the model's inputs/outputs are variable sized (e.g. using the `u8[_, 256, 256, 3]` shape from above), and then update `librunecoral` to include the tensor resizing code (@saidinesh5, do you know how easy/hard that would be?).
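One way the loader could represent a shape like `u8[_, 256, 256, 3]` is to parse the `_` into an explicitly unknown dimension. This is a sketch under assumed syntax rules (the `parse_shape` function and its return type are hypothetical, not the real loader's API):

```rust
/// Parse a Runefile-style shape such as "u8[_, 256, 256, 3]" into an
/// element type and a list of dimensions, where None marks a
/// variable-length dimension. Hypothetical sketch of what the loader
/// might accept; the real parsing code may differ.
fn parse_shape(s: &str) -> Option<(String, Vec<Option<usize>>)> {
    let open = s.find('[')?;
    let close = s.rfind(']')?;
    let element_type = s[..open].trim().to_string();
    let dims = s[open + 1..close]
        .split(',')
        .map(|d| {
            let d = d.trim();
            if d == "_" {
                Ok(None) // variable-length dimension
            } else {
                d.parse().map(Some)
            }
        })
        .collect::<Result<Vec<_>, _>>()
        .ok()?;
    Some((element_type, dims))
}
```

With this representation, the existing shape check becomes "every fixed dimension must match, every `None` dimension matches anything", and the `None` dimensions are the ones `librunecoral` would need to resize at runtime.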
I'm attaching some pics of how a TFLite interpreter takes input. Here is the link to the Slack message. You can also find some more details about the index used in the interpreter (see the `TfLiteTensor* tensors` member of `TfLiteContext`).
Some models (e.g. YAMNet) support variable sized inputs. TensorFlow represents this by having a 1x1 input that gets resized manually at runtime to fit the desired input.
What this will probably require:

- Extending the Runefile shape syntax to allow a variable-length dimension (e.g. `u8[_, 256, 256, 3]` for an arbitrary number of 256x256 RGB8 images)
- Using a dynamically sized container (e.g. `Vec<u8>`) to hold the tensor
- Updating `rune build` to pass the requested shapes along, by inserting a special `$outputs` argument into each capability's arguments dictionary. That way the end implementation will know what output tensors have been requested in the Runefile
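A sketch of the `$outputs` injection step. The argument dictionary is modeled as a plain string map and `inject_outputs` is a hypothetical helper, not the real `rune build` code:

```rust
use std::collections::HashMap;

/// Hypothetical sketch: rune build inserting a special "$outputs"
/// entry into a capability's arguments dictionary, so the capability
/// implementation can see which output tensors the Runefile asked for.
/// The "$outputs" key comes from the proposal above; the value
/// encoding (comma-joined shape strings) is an assumption.
fn inject_outputs(
    mut args: HashMap<String, String>,
    requested: &[&str],
) -> HashMap<String, String> {
    args.insert("$outputs".to_string(), requested.join(","));
    args
}
```

Using a `$`-prefixed key keeps the injected entry from colliding with user-supplied argument names, since `$` would not be valid in a normal Runefile argument.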