Open yunjayh opened 2 years ago
Note that the latest version of the tfjs-tflite API is 0.0.1-alpha.8, and it seems to be at a pretty early stage.
I'd prefer No. 1 because the toolchain is already installed and the runtimes for tf and onnx are already there.
And No. 2 could cause version mismatches between each toolchain and the corresponding npm module.
> Note that the latest version of the tfjs-tflite API is 0.0.1-alpha.8, and it seems to be at a pretty early stage.

> And No. 2 could cause version mismatches between each toolchain and the corresponding npm module.
Yes, as you mentioned, the versions could be an issue. I'll try to use a python script. Thanks for your opinions.
> Use python script
Just to be sure, does this mean running the NN model on a machine where the toolchain is installed? :)
Yes. AFAIK the ONE package uses a python venv to run its python scripts. I'm thinking of using that virtual environment for running NN models. Would there be any potential problems?
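For concreteness, here is a minimal stdlib-only sketch of that idea (the venv interpreter path and the script name are hypothetical, not something ONE is known to ship): it invokes a script with the virtual environment's own interpreter, so whatever packages the toolchain installed into the venv are picked up.

```python
import subprocess


def run_in_venv(venv_python: str, script: str, *args: str) -> str:
    """Run `script` with the venv's own interpreter and return its stdout.

    `venv_python` is the interpreter inside the virtual environment,
    e.g. "<venv>/bin/python". Raises CalledProcessError on failure.
    """
    result = subprocess.run(
        [venv_python, script, *args],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```

In the extension itself this would presumably be done through Node's child_process instead, but the shape is the same: the venv's interpreter, a script, and its arguments.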
Nope, I just wondered where the NN model would run :)
I thought we would use docker on linux for the ONE toolchain, so it would be the same for tflite too. So, option 1. :-D
WHAT
Let's consider how to run the reference models in our project
WHY
To test compiled models' accuracy, we need to run both the reference model (e.g. tflite, onnx, etc.) and the target model (e.g. circle, running on an NPU). There are two (or more, if there are any) ways to run the reference models.

1. Use python script
2. Use npm modules: tfjs, and onnxruntime
I'd thought to use the second way without considering the first at all, because our project is a typescript project. But if the first one is acceptable in our project, it could be simpler to implement: since the execution environment will contain the ONE package, which includes the tensorflow and onnx interpreters, we wouldn't need to install any other packages to run the reference models.

Please share your opinions if you have any. /cc @Samsung/one-vscode