hotg-ai / rune

Rune provides containers to encapsulate and deploy edge ML pipelines and applications.
Apache License 2.0

ops error in the NLP-BERT model even while using the Rune librunecoral version #323

Closed Mohit0928 closed 2 years ago

Mohit0928 commented 2 years ago

I tried to test the NLP mobilebert_qa.tflite model with the new Rune librunecoral version, but I'm still getting the same old ops error. Here is the full codebase for mobilebert_qa.tflite, which contains the proc-blocks, model, Rune, etc.

[INFO  hotg_rune_cli::run::command] Running rune: bert.rune
ERROR: Didn't find op for builtin opcode 'FULLY_CONNECTED' version '9'

ERROR: Registration failed.

Error: Unable to initialize the virtual machine

Caused by:
    0: Unable to call the _manifest function
    1: RuntimeError: Unable to load the model
           at <unnamed> (<module>[58]:0x45f5)
           at <unnamed> (<module>[56]:0x3fa6)
    2: Unable to load the model
    3: Unable to initialize the model interpreter
    4: `failed to build`
Michael-F-Bryan commented 2 years ago

Looking at the librunecoral repo, it seems like we're currently pinned to tensorflow/tensorflow@919f693420e35d00c8d0a42100837ae3718f7927 (August 10th, 2021), which I assume would have the FULLY_CONNECTED opcode.

It may also be that someone took a normal TensorFlow model and converted it to TensorFlow Lite without checking that the TensorFlow Lite interpreter has an implementation for FULLY_CONNECTED, or that the conversion was done with a different (newer) version of TensorFlow Lite than the one the interpreter was built against.
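To make the failure mode concrete, here is a minimal, purely illustrative sketch (not the real TFLite API) of how an interpreter's op registry works: each builtin opcode maps to a range of operator versions the runtime implements, and model loading fails when the model requires a version outside that range. The registry contents and function names below are hypothetical, chosen only to mirror the "FULLY_CONNECTED version 9" error above.

```python
# Hypothetical registry: opcode -> (min_version, max_version) the
# interpreter implements. An interpreter built from an older TensorFlow
# commit may predate newer op versions emitted by a newer converter.
SUPPORTED_OPS = {
    "FULLY_CONNECTED": (1, 8),  # assume version 9 did not exist yet
    "SOFTMAX": (1, 3),
}

def resolve_op(opcode: str, version: int) -> bool:
    """Return True if this op/version pair can be registered."""
    if opcode not in SUPPORTED_OPS:
        return False
    lo, hi = SUPPORTED_OPS[opcode]
    return lo <= version <= hi

# A model converted with a newer TensorFlow Lite may require op
# versions the older interpreter cannot register:
print(resolve_op("FULLY_CONNECTED", 8))  # True  - registration succeeds
print(resolve_op("FULLY_CONNECTED", 9))  # False - "Didn't find op for builtin opcode"
```

This is why the two usual fixes are either rebuilding the interpreter against a newer TensorFlow commit (so the registry covers the newer op version) or re-converting the model with the same TensorFlow Lite version the interpreter was built from.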

Mohit0928 commented 2 years ago

@Michael-F-Bryan, should we close this now? The error turned out to be due to an improper installation of the librunecoral repo.

Michael-F-Bryan commented 2 years ago

It sounds like this has been fixed by switching to a newer version of TensorFlow Lite (librunecoral).