cpetig / tflite_micro_compiler

generate tflite micro code which bypasses the interpreter (directly calls into kernels)
Apache License 2.0

[DISCUSSION] Compiled code and interpreted code should resemble each other more closely #11

Closed cpetig closed 4 years ago

cpetig commented 4 years ago

Should we rename model_init() to model_AllocateTensors() and have it return kTfLiteOk, and rename model_invoke() to model_Invoke() and have it return kTfLiteOk/kTfLiteError? See also issue #9
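A minimal sketch of what the proposed signatures could look like (the `model_` prefix and exact names are illustrative, not necessarily what the compiler emits today):

```cpp
// Hypothetical generated declarations illustrating the proposal above.
#include "tensorflow/lite/c/common.h"  // TfLiteStatus, kTfLiteOk

// Proposed replacement for model_init(); mirrors MicroInterpreter::AllocateTensors()
// and would return kTfLiteOk on success.
TfLiteStatus model_AllocateTensors();

// Proposed replacement for model_invoke(); mirrors MicroInterpreter::Invoke()
// and would return kTfLiteOk or kTfLiteError.
TfLiteStatus model_Invoke();
```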

cpetig commented 4 years ago

What about inputs_size() and outputs_size(), and what about inputs(), tensor(i), or typed_tensor()? I feel we don't need these, but I am not sure.
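For comparison, a sketch of what such accessors could look like if we mirrored the interpreter API (declarations only, names illustrative):

```cpp
#include "tensorflow/lite/c/common.h"  // TfLiteTensor

// Hypothetical accessors mirroring the interpreter-style API.
size_t model_inputs_size();             // number of input tensors
size_t model_outputs_size();            // number of output tensors
TfLiteTensor* model_tensor(size_t i);   // like interpreter.tensor(i)

template <typename T>
T* model_typed_tensor(size_t i) {       // like interpreter.typed_tensor<T>(i)
  return reinterpret_cast<T*>(model_tensor(i)->data.raw);
}
```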

cpetig commented 4 years ago

Partially implemented by https://github.com/cpetig/tflite_micro_compiler/commit/1d56a9a81e3c26c506f939b5c59465bde9cacb0c

cpetig commented 4 years ago

@rafzi I implemented the return type change; what do you think about the renaming?

rafzi commented 4 years ago

AllocateTensors in particular sounds misleading to me since we don't allocate anything. Maybe "setup"/"Setup", since that is what the TFLM examples use for the step before invoke?

I also think we should drop _input/output_ptr/size since they are trivially accessible through _input/output.
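For example (using a generated model_input accessor as an illustration), the same information is already available on the returned tensor:

```cpp
TfLiteTensor* in = model_input(0);   // generated accessor (name illustrative)
char*  ptr   = in->data.raw;         // roughly what _input_ptr would return
size_t bytes = in->bytes;            // roughly what _input_size would return
float* data  = in->data.f;           // typed access for a float tensor
```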

cpetig commented 4 years ago

I agree to remove _input/output_ptr/size. But what about inputs_size() (the number of inputs; similarly for outputs)? It feels natural to check against these bounds before accessing the inputs or outputs, but I was unable to find any example code doing this.
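A sketch of the kind of check I mean (model_inputs_size() and model_input() are the hypothetical generated accessors, names illustrative):

```cpp
#include <cstddef>  // size_t
#include <cstring>  // memcpy

// Copy src into input tensor `index`, with bounds and size sanity checks.
bool fill_input(size_t index, const float* src, size_t len) {
  if (index >= model_inputs_size()) return false;      // bounds check
  TfLiteTensor* t = model_input(index);
  if (t->bytes != len * sizeof(float)) return false;   // size check
  std::memcpy(t->data.raw, src, t->bytes);
  return true;
}
```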

cpetig commented 4 years ago

See #39

alxhoff commented 4 years ago

> I agree to remove _input/output_ptr/size. But what about inputs_size() (the number of inputs; similarly for outputs)? It feels natural to check against these bounds before accessing the inputs or outputs, but I was unable to find any example code doing this.

I am a fan of having these functions (number of inputs/outputs), as I prefer not to hardcode these things in my test code, i.e. I use the getters as a sanity check.

As for the naming, I believe that invoke is apt given that it calls Invoke. I also find init quite nice; AllocateTensors would be misleading. Maybe prepare would also be a good idea, but to be honest I don't see much difference between init and prepare.