jackylu0124 opened this issue 2 years ago
Hi, is there any update on this by any chance? Thanks!
ORT for React Native requires an absolute file path, in either Unix path format or a file:// scheme, for InferenceSession.create().
Please refer to https://github.com/fs-eire/ort-rn-hello-world for an example. In that example I use the expo-asset package to deal with the assets and pass the file URI to ORT to load the model.
Please use onnxruntime-react-native@1.12.0-dev.20220517-4dd3cc40c for now, as it includes several bug fixes since 1.11.
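For reference, a minimal sketch of what that call can look like (the path below is just a placeholder for an absolute path the native module can read):

```ts
import * as ort from 'onnxruntime-react-native';

// Sketch only -- the model path is a placeholder; it must be an absolute
// Unix-style path or a file:// URI that is accessible to the native module.
async function loadModel(): Promise<ort.InferenceSession> {
  const modelPath = 'file:///data/user/0/com.example/files/model.ort';
  return ort.InferenceSession.create(modelPath);
}
```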
I see, thank you very much for the clarification and update! On a side note, could you please give some insight into the execution provider onnxruntime-react-native uses on mobile platforms (iOS/Android) when it's not using CoreML or NNAPI? For example, is it single-threaded or multi-threaded CPU code? Thanks again!
Most of the configs (including threads) are available via SessionOptions. CoreML and NNAPI are not supported yet.
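For illustration, a hedged sketch of tuning CPU threading via SessionOptions (the field names follow the SessionOptions type in the JS API; the thread counts here are arbitrary):

```ts
import * as ort from 'onnxruntime-react-native';

// Sketch of configuring CPU threading through SessionOptions.
async function loadWithOptions(modelPath: string): Promise<ort.InferenceSession> {
  const options: ort.InferenceSession.SessionOptions = {
    intraOpNumThreads: 2, // threads used inside a single operator
    interOpNumThreads: 1, // threads used to run operators in parallel
  };
  return ort.InferenceSession.create(modelPath, options);
}
```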
I will check it out and run it on my OnePlus 7 this weekend. In addition, I hope there will be a note on memory usage. Android devices often have very little memory available. PyTorch Mobile has its own way of handling that issue; I am not sure about ONNX.
@fs-eire @JonathanSum I went over the code inside https://github.com/fs-eire/ort-rn-hello-world, but unfortunately my project doesn't use Expo. I tried to use RNFS, in particular RNFS.DocumentDirectoryPath, but I couldn't find my ONNX model file under any sub-directory of RNFS.DocumentDirectoryPath, which evaluates to /data/user/0/com.onnxtestbed/files. Do you by chance know what I did wrong here, or have any suggestions on other ways to accomplish the same thing as Expo? Thanks!
If you are not using expo-asset, you need to manually add the model file into your projects (android and ios) as an asset (see this link for asset resolution in expo-asset). Specifically, if you are running a debug build for Android, the model file is served by the React Native dev server - you need an extra step to "download" the model, save it to a temporary user folder, and load the model from there.
In my own project, I just put the file in the src folder and import it. I do not need to download it from a URL. I was trying to use a Transformer model.
@fs-eire I see, thanks for the suggestion and the insight. I will give it a try.
Could you please explain what you mean by "import it"? Do you mind sharing the lines of code that you used to import the model file? Also are you using Expo or just plain React Native? Thanks for the help in advance!
There is another solution. For that one, I just did a const MODEL_URL = require('./modelfile.file_extention'); to load the model file. This solution is not specific to ONNX. I see they just use some setting to allow React Native to load the model. Remember the ONNX RN doc said you can't load it as JavaScript? But that solution "may" not be stable, although I feel it has become stable after a few improvements. I applied that solution in an NLP question-answering project for a startup. You can see the working solution on my YouTube channel.
But what if we can apply that to ONNX... does it mean we can do it? I think you can see what it is. But it may not work for your case, because I don't even know what model you are trying to use.
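The "setting" referred to above is presumably Metro's asset-extension list; a hedged sketch of a metro.config.js that registers model files as assets (the exact config shape varies across React Native versions, and the 'ort'/'onnx' extensions are assumptions here):

```js
// metro.config.js -- sketch only; getDefaultConfig comes from metro-config.
const { getDefaultConfig } = require('metro-config');

module.exports = (async () => {
  const {
    resolver: { assetExts },
  } = await getDefaultConfig();
  return {
    resolver: {
      // Let require('./model.ort') resolve as a bundled asset.
      assetExts: [...assetExts, 'ort', 'onnx'],
    },
  };
})();
```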
Thanks for your insight and suggestions! And after const MODEL_URL = require('./modelfile.file_extention');, do you simply do const session: ort.InferenceSession = await ort.InferenceSession.create(MODEL_URL);? My model is nothing fancy; it's literally a linear blend operation (see picture below) and has no weights at all. You can also find the .ort model in the link to the repo above if you want to give it a try. Also, just to make sure we are on the same page: are you using plain React Native (not Expo) and testing on a physical Android phone? Thanks for the help again!
Regarding the code example const MODEL_URL = require('./modelfile.file_extention'); by @JonathanSum - the value of the variable MODEL_URL should be set to the "real" model path (either a Unix-style file path or a file:// scheme URI) that is accessible by the native module.
However, the behavior of require() may differ between project configs, depending on the bundler or other build-time toolchain. For example, in an Expo managed app, the return value of require('./assets/my_model.ort') is a number (an integer asset ID), and it can be passed to other functions to get the real path of the file (when using the default Expo settings, i.e. the Metro bundler and expo-asset to resolve the asset file path).
As long as you figure out how to resolve the file path of the model and pass it to InferenceSession.create(), it should work as expected.
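To make that concrete, a sketch of resolving such an asset ID with expo-asset (Asset.fromModule, downloadAsync, and localUri are expo-asset APIs; the asset path is a placeholder):

```ts
import { Asset } from 'expo-asset';

// Sketch: turn the Metro asset ID into a local file URI that ORT can open.
async function resolveModelUri(): Promise<string> {
  const asset = Asset.fromModule(require('./assets/my_model.ort'));
  await asset.downloadAsync(); // ensure a local copy exists on the device
  if (!asset.localUri) {
    throw new Error('Model asset was not downloaded');
  }
  return asset.localUri; // a file:// URI, usable with InferenceSession.create()
}
```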
@fs-eire Thank you so much for your insights and advice! I am using Metro and did see a number returned by require(), like you said. However, I couldn't find helper methods in Metro to extract the full file path given an asset ID number. On the other hand, earlier I thought Image.resolveAssetSource() could only be used with image assets, but it looks like I can use it to get the link to my .ort model file as well; in other words, Image.resolveAssetSource() works with non-image assets too - is my understanding correct?
I also tried the approach you suggested and downloaded the model file into the temporary folder using RNFS. Even though I can see the model file when I print out the contents of the temporary directory, model loading still failed when I passed in the file path of the downloaded model. The error message is: [Error: Can't load a model: No content provider: /data/user/0/com.onnxtestbed/cache/LinearBlend_v001.ort]. I am running my bare React Native project (no Expo) using Android Studio and testing it on my physical Android device. You can find the code to reproduce the error here: https://github.com/jackylu0124/ort-react-native-issue (I also realized that in my very original post at the top, the link to the repo was broken, but it is fixed now).
I would really appreciate it if you could let me know what I did wrong in my code. Thank you so much for your help again!
Could you help to try file:///data/user/0/com.onnxtestbed/cache/LinearBlend_v001.ort instead of /data/user/0/com.onnxtestbed/cache/LinearBlend_v001.ort?
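Putting the thread's pieces together, a hedged sketch of the bare React Native (no Expo) flow being discussed, using Image.resolveAssetSource and react-native-fs; the file names follow the repro repo, and this assumes Metro is configured to treat .ort as an asset:

```ts
import { Image } from 'react-native';
import RNFS from 'react-native-fs';
import * as ort from 'onnxruntime-react-native';

// Sketch of the bare-RN flow: in a debug build the resolved asset URI
// typically points at the Metro dev server, so download it to a local
// folder first, then hand ORT a file:// URI as suggested above.
async function createSession(): Promise<ort.InferenceSession> {
  const source = Image.resolveAssetSource(
    require('./onnx_models/LinearBlend_v001.ort'),
  );
  const dest = `${RNFS.CachesDirectoryPath}/LinearBlend_v001.ort`;
  await RNFS.downloadFile({ fromUrl: source.uri, toFile: dest }).promise;
  return ort.InferenceSession.create(`file://${dest}`);
}
```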
@fs-eire Sorry for the very late reply, I was a bit busy last week, but thank you very much for the suggestion! It works on Android now! However, it still doesn't work on iOS. On iOS it gives the following error when I use the same approach as above, with file:// prepended, even though I am able to see the model file (the path to the model file is /private/var/mobile/Containers/Data/Application/E3618C4F-9592-4716-AF0D-80BD2A7FC650/tmp/LinearBlend_v001.ort) downloaded into the temporary directory. Do you by chance know how the session creation function is supposed to be used on iOS? Thanks a lot for the help again!
Error Message:
2022-06-13 23:03:55.012049-0400 ONNXModelTestBed[11750:948043] [javascript] 'Error Message:', [Error: Can't load a model: null is not an object (evaluating '(0, _classPrivateFieldLooseBase2.default)(this, _inferenceSession)[_inferenceSession].loadModel')]
This null is not an object error message is probably a little bit confusing, but it actually means "this._inferenceSession is null". It happens when the native module is not loaded as expected, which is usually caused by not setting up onnxruntime-react-native correctly in ios/Podfile.
Thank you very much for your insight! I am using the automatically generated ios project folder inside the bare React Native project created with the command npx react-native init MyProjectName --template react-native-template-typescript. I also inspected the Podfile and made sure to run npx pod-install before building and running the project, but the error still persists. You can follow these steps to reproduce the issue; I would really appreciate it if you have any more insights or could spot any mistakes in my setup:
1. Clone the repo linked above (the file:// fix has been added, and I have confirmed that it now works with Android)
2. cd into the ONNXTestbed folder
3. Run the npm install command
4. Run the npx pod-install command
5. cd into the ios folder
6. Run the npx react-native start command
7. Open the ONNXTestbed.xcworkspace file inside the ios folder using Xcode and run the project

You should then be able to see the content of the temporary folder (including the model that has just been downloaded into the temporary directory) as well as the error message in the terminal.
Thank you so much for your help and time again! I really appreciate it.
@rahulnainwal107, if your issue is only the loading issue, you can check out https://github.com/microsoft/onnxruntime/issues/10990. I solved it for my case.
I am getting this error only on iOS, and I'm not sure where the solution is in the suggested issue.
Then the suggested issue cannot help you. Good luck.
Ok, Thanks.
This issue (null is not an object) is probably the native module not being built correctly for iOS during an expo command. @rahulnainwal107 @jackylu0124 could you try step 4, the "setup manually" section from the README, to make sure the following line is added to the Podfile:
pod 'onnxruntime-react-native', :path => '../node_modules/onnxruntime-react-native'
Thanks @fs-eire, this worked for me, but along with this I also needed to update the platform version inside the Podfile and the deployment target in Xcode to 12.4.
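For reference, that presumably means having something like the following near the top of ios/Podfile, alongside the pod line above (the version number is taken from the comment above):
platform :ios, '12.4'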
Hi @fs-eire,
Thank you very much for the update and suggestion, and sorry for the late reply! I will give this a try in a bit and let you know how it goes. Thanks again!
Has anyone managed to solve the following error: Error: Can't load a model: No content provider? I have the error on both Android and iOS.
I tried to fix this issue my own way, but it should really be fixed in the official package.
Maybe someone has managed to solve this problem? I still can't load my model.
Are you getting the exact same issue and did you try https://github.com/microsoft/onnxruntime/issues/11239#issuecomment-1261955895?
Describe the bug
I tried to load a very simple .ort model (attached, and also in the repo linked below) into my React Native app after converting it from .onnx, but it gave the error [Error: Can't load a model: No content provider: ./onnx_models/LinearBlend_v001.ort] when I tried to load it with the following line: const session: ort.InferenceSession = await ort.InferenceSession.create("./onnx_models/LinearBlend_v001.ort");. I am currently running the app on Android Studio's emulator, and I have also verified that my model conforms to ORT Mobile's operator and data type requirements.
Urgency
High urgency, this is a blocking issue in my project.
System information
"onnxruntime-react-native": "^1.11.0"
(in package.json)To Reproduce
Link to repo with minimal code for error reproduction: https://github.com/jackylu0124/ort-react-native-issue Please follow instructions on https://reactnative.dev/docs/environment-setup in order to setup and run the project with Android Studio's emulator.
Once the project is set up and running, you can click/press the "START INFERENCE" button at the top of the screen, which will try to load the model, and the error will be logged to the console (see GIF below).
Expected behavior
The model should load into the program successfully.
Screenshots
The .ort model that I am using (it's a very simple model for linear-blending two inputs, e.g. blend = (1 - alpha) * img1 + alpha * img2)

Error Message Gif
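For completeness, a hedged sketch of running this blend model once a session has been created; the input/output names (img1, img2, alpha, blend) are guesses for illustration only - the real names come from the model itself:

```ts
import * as ort from 'onnxruntime-react-native';

// Sketch: feed two small "images" and an alpha, read back the blend.
async function runBlend(session: ort.InferenceSession): Promise<Float32Array> {
  const img1 = new ort.Tensor('float32', Float32Array.from([0, 0.5, 1]), [3]);
  const img2 = new ort.Tensor('float32', Float32Array.from([1, 1, 1]), [3]);
  const alpha = new ort.Tensor('float32', Float32Array.from([0.25]), [1]);
  const results = await session.run({ img1, img2, alpha });
  // Expected: blend = (1 - 0.25) * img1 + 0.25 * img2
  return results.blend.data as Float32Array;
}
```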