Closed · kqz8866 closed this issue 1 year ago
@kqz8866, the error you are seeing occurs because the model uses the CUDA backend for some tensors. For this to work in PlayTorch, you'll have to make sure the model weights are loaded on CPU and stay on CPU.
Do you have a public notebook that shows how you loaded the model and exported it as a lite interpreter model?
For context, PlayTorch uses the OSS PyTorch Mobile lite interpreter runtime to load models and run inference. By default, the lite interpreter runtime only includes CPU ops (backends like Vulkan or Metal aren't included).
Thanks for your reply. I have made sure that the model and all of its weights are loaded on CPU. Here is the notebook that shows how I generated the PyTorch lite model. I tried both saving directly from the loaded model and building a model and loading the state_dict, but unfortunately neither worked. The CLIP package is OpenAI's official PyTorch implementation. Thanks for looking into it.
I solved it. Making every operation run on CPU worked.
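For anyone hitting the same error, here is a minimal sketch of a CPU-only lite interpreter export. The toy model and the file name `mobile_p.ptl` are placeholders standing in for the CLIP model from this issue; a real checkpoint would additionally be loaded with `map_location="cpu"`.

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Toy stand-in for a custom vision model; a real checkpoint would be
# loaded with torch.load(path, map_location="cpu") to keep it off CUDA.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
model = model.to("cpu").eval()  # pin weights to CPU before tracing

# Trace with a CPU example input so no CUDA ops are baked into the graph
example = torch.rand(1, 4)
traced = torch.jit.trace(model, example)

# Optimize and save in the lite interpreter (.ptl) format PlayTorch loads
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("mobile_p.ptl")
```

Tracing with a CPU tensor matters as much as moving the weights: any `.cuda()` call or CUDA-resident tensor reached during tracing ends up recorded in the exported graph.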
Version
0.2.4
Problem Area
react-native-pytorch-core (core package)
Steps to Reproduce
I was trying to observe the performance differences among various vision models. I followed the Image Classification tutorial, except that I use some custom models. The only change I made is in line 50 of ImageClassifer.js:
const filePath = await MobileModel.download(require('./models/mobile_p.ptl'));
instead of downloading a model from a URL. For some custom models the process was successful, but this one fails with the error described in the title. The problem model is the pretrained Vision Transformer from CLIP. The complete error message is below; it suggests one of the operations in my model uses the CUDA backend. There might be a possible solution for Facebook employees, but I am not one. Things I have tried include:

- `clip.load(..., jit=True)`
- `clip.load(...)` with the default arguments
- loading the state_dict of the pretrained model

Since I can run all my other models except this one, I assume the problem is in how the model is constructed in Python? Or is there a quick fix in the process of converting the PyTorch model to .ptl?
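A quick sanity check before conversion (not from the issue; the helper name and the commented path are illustrative) is to verify that every parameter and buffer of the model actually lives on CPU:

```python
import torch

def assert_all_cpu(model: torch.nn.Module) -> None:
    """Raise if any parameter or buffer lives on a non-CPU device."""
    tensors = list(model.named_parameters()) + list(model.named_buffers())
    for name, tensor in tensors:
        if tensor.device.type != "cpu":
            raise RuntimeError(f"{name} is on {tensor.device}, expected cpu")

# A checkpoint saved from a GPU can be forced onto CPU at load time:
# state_dict = torch.load("weights.pt", map_location="cpu")

model = torch.nn.Linear(3, 3)
assert_all_cpu(model)  # passes: freshly built modules start on CPU
```

Note that this only catches GPU-resident weights; device-specific ops hard-coded in the model's `forward` (e.g. explicit `.cuda()` calls) can still leak into a traced graph and would need to be removed in the Python source.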
Here is the entire error message:
Expected Results
The argmax of the output tensors.
Code example, screenshot, or link to repository
The problem model: https://drive.google.com/file/d/1FA_6YDDkFNYEfAQqitaeDNPfuCKLTD-c/view?usp=share_link
CLIP: https://github.com/openai/CLIP/tree/main/clip