Hi @nh9k, unfortunately there is no quick workaround. The PyTorch Mobile shared object library is not linked against LAPACK. It would require the PyTorch team to add an option to compile the PyTorch Mobile libraries with LAPACK for Android/iOS.
As suggested, you could post in the PyTorch repo and ask for support. If it's added to the PyTorch Mobile shared object libraries, we can pull the new libraries into PlayTorch!
Closing the issue as it can't be resolved without PyTorch Mobile being compiled with LAPACK support. Feel free to reopen if this changes.
Thanks @raedle, if I solve this problem by asking the PyTorch team for LAPACK support, I will report back!
@raedle the posted issue also talks about XNNPACK but that should not be the issue. Are you not building with USE_XNNPACK=1 for pytorch?
@kimishpatel, the PlayTorch API doesn't build PyTorch Mobile from source but uses the following PyTorch Mobile build artifacts
Are the released PyTorch Mobile build artifacts built with USE_XNNPACK=1?
I am not entirely sure. Let me get back to you on this. Also, is this on iOS or Android?
Thanks @kimishpatel!
@nh9k, is the issue with XNNPACK on both platforms Android and iOS or just on one of the two platforms?
Thanks @kimishpatel @raedle! The issue occurred on Android. I haven't tested it on iOS.
Yeah for android it should be on by default.
Yeah as @kimishpatel said, xnnpack should be there.
FWIW, just to validate, I looked at the build artifact pytorch_android_lite-1.12.2.aar from maven.org and looked for XNNPACK symbols in libpytorch_jni_lite.so; I did see them.
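For anyone who wants to repeat that check, here is a rough sketch. It assumes the .aar was downloaded from Maven Central and that the arm64-v8a library path inside the archive is correct (both assumptions), and it only searches for XNNPACK strings rather than dumping symbols:
import zipfile

# Open the prebuilt Android archive and read the lite JNI shared library.
with zipfile.ZipFile("pytorch_android_lite-1.12.2.aar") as aar:
    so_bytes = aar.read("jni/arm64-v8a/libpytorch_jni_lite.so")

# XNNPACK-enabled builds contain "XNNPACK" strings (e.g., in error messages)
# and xnn_-prefixed identifiers; their presence is a strong hint that the
# library was built with USE_XNNPACK=1.
print("XNNPACK strings found:", b"XNNPACK" in so_bytes or b"xnn_" in so_bytes)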
@raedle, so, can I solve this problem?
How can I build a React Native app with XNNPACK?
Can adding the dependency implementation "org.pytorch:pytorch_android_lite:1.12.2" to the build.gradle file solve this problem?
@nh9k, if the symbols are part of org.pytorch:pytorch_android_lite:1.12.2, then you shouldn't need to do anything. The latest react-native-pytorch-core release v0.2.1 uses libpytorch_jni_lite.so from org.pytorch:pytorch_android_lite:1.12.2.
What is the react-native-pytorch-core version that you tested?
Can you share a simple model + python export code for us to test as well?
@raedle, Thank you so much!
My react-native-pytorch-core version is v0.0.0-08082022-2231-889b3951d.
I have another problem with the inference of a quantized model, which is probably the same problem.
Thank you so much again, I will change the pytorch core version to v0.2.1 and report back!
@nh9k, are you using torch.utils.mobile_optimizer.optimize_for_mobile on the model that throws the XNNPACK error (see below)?
XNNPACK Convolution not usable! Reason: The provided input tensor is either invalid or unsupported by XNNPACK
If that's the case, can you try adding an optimization_blocklist with INSERT_FOLD_PREPACK_OPS to the optimize_for_mobile function?
import torch.utils.mobile_optimizer
from torch._C import MobileOptimizerType

optimization_blocklist = {
    MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}
optimized_model = torch.utils.mobile_optimizer.optimize_for_mobile(model, optimization_blocklist)
Insert and Fold prepacked ops (blocklisting option MobileOptimizerType::INSERT_FOLD_PREPACK_OPS): This optimization pass rewrites the graph to replace 2D convolutions and linear ops with their prepacked counterparts. Prepacked ops are stateful ops in that, they require some state to be created, such as weight prepacking and use this state, i.e. prepacked weights, during op execution. XNNPACK is one such backend that provides prepacked ops, with kernels optimized for mobile platforms (such as ARM CPUs). Prepacking of weight enables efficient memory access and thus faster kernel execution. At the moment optimize_for_mobile pass rewrites the graph to replace Conv2D/Linear with 1) op that pre-packs weight for XNNPACK conv2d/linear ops and 2) op that takes pre-packed weight and activation as input and generates output activations. Since 1 needs to be done only once, we fold the weight pre-packing such that it is done only once at model load time. This pass of the optimize_for_mobile does 1 and 2 and then folds, i.e. removes, weight pre-packing ops.
More details: https://pytorch.org/docs/stable/mobile_optimizer.html
@raedle, Thank you so much!!
I have the same error (xnnpack) after changing the react-native-pytorch-core version to v0.2.1.
The code you recommended with from torch._C import MobileOptimizerType produced a new error:
{"message": "expected scalar type Byte but found Float
Debug info for handle(s): debug_handles:{-1}, was not found.
Exception raised from data_ptr at aten/src/ATen/core/TensorMethods.cpp:20 (most recent call first):
(no backtrace available)"}
I don't know yet why it appears; it is probably my fault. I won't be able to test for a while, about a week. I would like to share my project, but I don't have enough time to organize my code. Sorry about this. If you would like to know the model code, my model is related to this repository. Thank you so much again, I will be back next week.
@raedle can you try this: https://pytorch.org/mobile/android/? But with the pytorch_lite:1.12.2 that you are using. I want to see if we also get the same issue in that app.
Another question for @nh9k: have you tried running the same model using a PyTorch release, say via pip install?
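For reference, a minimal sketch of that check with a pip-installed PyTorch; the model file name is hypothetical:
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Load the exported lite-interpreter model with desktop PyTorch and run it on
# a random input; if the same error shows up here, the problem is in the
# export rather than in the app runtime.
model = _load_for_lite_interpreter("model.ptl")
with torch.no_grad():
    output = model(torch.randn(1, 3, 768, 768))
print(output)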
@kimishpatel yes, the converted .ptl model works fine from Python code!
@nh9k, can you share the model and the code used to export the model for the lite interpreter runtime?
@raedle, sorry I am late.
Can I share my project with you via my private repository?
My final aim is exporting a quantized model for the React Native app.
But it also has a problem, so can you test with the quantized model?
If possible, I will share this quantized model as a .pt file (before making the .ptl file) from my Google Drive.
The quantized model works well from Python code.
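For context, a minimal export sketch for a quantized TorchScript model, assuming it was quantized for the qnnpack backend; the file names are hypothetical and this is not the actual export code from this thread:
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Mobile builds use the QNNPACK quantized backend, so select it before
# loading the quantized model (assumption: the model was quantized with a
# qnnpack-compatible qconfig).
torch.backends.quantized.engine = "qnnpack"

quantized = torch.jit.load("quantized_model.pt")  # hypothetical file name
quantized.eval()
optimized = optimize_for_mobile(quantized)
optimized._save_for_lite_interpreter("quantized_model.ptl")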
@nh9k, instead of sharing the private repo, can you please create a reproducible example publicly? This way, the community can also benefit if anyone has a similar issue :)
Alright! The code used to export the model is here.
CODE:
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
model = torch.jit.load('model_scripted.pt')
model.eval()
device = torch.device('cpu')
x = torch.randn(1, 3, 768, 768).to(device)
# Use torch.jit.trace to generate a torch.jit.ScriptModule via tracing.
traced_script_module = torch.jit.trace(model, x)
from torch._C import MobileOptimizerType
optimization_blocklist = {
MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}
optimized_scripted_module = optimize_for_mobile(traced_script_module, optimization_blocklist)
optimized_scripted_module._save_for_lite_interpreter("model.ptl")
The model .pt file is here (my Google Drive).
Thank you so much, raedle.
@nh9k, I was somewhat successful. The exported model loads on Android in the PlayTorch app, and it can run inference with a random tensor as input. There is an issue on iOS that I need to look into (i.e., iOS crashes with this model).
I used an export similar to what you provided. The only change is that the module is already a ScriptModule and doesn't need to be traced but can be used directly. I exported the model with and without the optimization_blocklist.
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
model = torch.jit.load('model_dl.pt')
model.eval()
device = torch.device('cpu')
x = torch.randn(1, 3, 768, 768).to(device)
from torch._C import MobileOptimizerType
optimization_blocklist = {
MobileOptimizerType.INSERT_FOLD_PREPACK_OPS,
}
optimized_scripted_module = optimize_for_mobile(model)
optimized_scripted_module_with_blocklist = optimize_for_mobile(model, optimization_blocklist)
optimized_scripted_module._save_for_lite_interpreter("optimized_scripted_module.ptl")
optimized_scripted_module_with_blocklist._save_for_lite_interpreter("optimized_scripted_module_with_blocklist.ptl")
In Python, I then reloaded the lite interpreter model and ran inference with a random tensor. It outputs a tuple with two tensors (assuming the tensors are in the correct shape).
from torch.jit.mobile import _load_for_lite_interpreter
model2 = _load_for_lite_interpreter("optimized_scripted_module_with_blocklist.ptl")
with torch.no_grad():
    a, b = model2(torch.randn(1, 3, 768, 768))
print("a.shape", a.shape)
print("b.shape", b.shape)
I also logged the export_opnames for the input model, the optimized model, and the optimized model with the blocklist to show the ops.
Example:
torch.jit.export_opnames(optimized_scripted_module_with_blocklist)
Output:
['aten::cat',
'aten::conv2d',
'aten::max_pool2d',
'aten::permute',
'aten::relu_',
'aten::size.int',
'aten::upsample_bilinear2d']
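A minimal sketch of that comparison, reusing the variable names from the export snippet above:
# Print each module's op set to see how the blocklist changes the exported ops.
for name, module in [
    ("input", model),
    ("optimized", optimized_scripted_module),
    ("optimized + blocklist", optimized_scripted_module_with_blocklist),
]:
    print(name, sorted(torch.jit.export_opnames(module)))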
Colab notebook with code from above: https://colab.research.google.com/drive/1JzjL7RZd4_ldgoc-7cJ02RUl53sIMhjr
async function main() {
try {
console.log('loading model');
const filePath = await MobileModel.download(
'https://example.com/path/to/optimized_scripted_module_with_blocklist.ptl'
);
// or loading the model as project asset
//const filePath = await MobileModel.download(
// require('./path/to/optimized_scripted_module_with_blocklist.ptl')
//);
const model = await torch.jit._loadForMobile(filePath);
const output = await model.forward(torch.randn([1, 3, 768, 768]));
console.log('output value', output);
} catch (error) {
console.error(error);
}
}
main();
loading model
output value [{"dtype":"float32","shape":[1,384,384,2]},{"dtype":"float32","shape":[1,32,384,384]}]
@raedle, thank you so much!! I am successful too. I'm not sure why the previous model didn't work. Thank you so much!!
@raedle, I haven't tested it on iOS yet. What issue was there?
Hi @raedle,
I have a problem, would you help me?
I expected a float32 tensor output (e.g., 0. 0.01859372 0. ... 0.), but I got integer output data of 0 or 1 when I printed outputTensor.data() using the console.log function. It is very strange.
@raedle, I figured out the blocklist was the problem. When I removed the optimization_blocklist argument, the model output works fine as float output in the React Native app. Thanks a lot for your help! I need to study more about this blocklist.
Area Select
react-native-pytorch-core (core package)
Description
Hello! Thanks for your contributions!
I have a problem while developing my project. I need a function like torchvision.transforms.functional.perspective.
Could you add an implementation of torchvision.transforms.functional.perspective, or can I implement this function myself? There is no implementation of a perspective function in the PlayTorch docs. Another approach I tried is making a PyTorch Mobile model for this function. This idea came from @raedle in this issue, but it raises an error in the React Native app like this:
Should I ask about this error in the PyTorch GitHub repo?
My perspective model is like this: This model works successfully in Python code.
How can I solve this problem? Many thanks for any help!
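For reference (not the author's model), a minimal sketch of how a perspective transform could be wrapped and exported for the lite interpreter, with made-up corner points and input size:
import torch
import torchvision.transforms.functional as F
from torch.utils.mobile_optimizer import optimize_for_mobile

class PerspectiveModule(torch.nn.Module):
    # Wraps torchvision's tensor-based perspective transform with fixed,
    # made-up corner points so the module can be traced.
    def forward(self, img):
        startpoints = [[0, 0], [767, 0], [767, 767], [0, 767]]
        endpoints = [[40, 40], [720, 20], [760, 740], [20, 700]]
        return F.perspective(img, startpoints, endpoints)

module = PerspectiveModule().eval()
example = torch.randn(1, 3, 768, 768)

# Tracing works in Python, but note that perspective() solves a small
# least-squares system internally to compute the transform coefficients; that
# solver is the LAPACK dependency discussed at the top of this thread, so the
# exported model may still fail on device until PyTorch Mobile links LAPACK.
traced = torch.jit.trace(module, example)
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("perspective.ptl")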