DBraun closed this issue 4 years ago.
Hi! I use U-2-Net on Android, which internally uses C++, and I managed to make it work. I couldn't get tracing to work; instead I converted the model to TorchScript like this:

```python
net.load_state_dict(torch.load(model_dir, map_location=torch.device('cpu')))
if torch.cuda.is_available():
    net.cuda()
scripted = torch.jit.script(net)
torch.jit.save(scripted, "fod.p")
```
Hope this helps.
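For context on why `torch.jit.script` can work where `torch.jit.trace` fails: tracing records one concrete execution path, so any data-dependent control flow gets baked in, while scripting compiles the Python code itself. A minimal sketch with a toy module (not U-2-Net):

```python
import torch

class Gate(torch.nn.Module):
    # toy module with data-dependent control flow
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x * -1

m = Gate()
pos = torch.ones(3)
neg = -torch.ones(3)

traced = torch.jit.trace(m, pos)   # records only the x.sum() > 0 branch
scripted = torch.jit.script(m)     # compiles both branches

print(torch.equal(traced(neg), neg * 2))     # True: traced model takes the baked-in branch
print(torch.equal(scripted(neg), neg * -1))  # True: scripted model picks the right branch
```

A model with this kind of branching will silently produce wrong outputs when traced, which is one reason scripting may be the safer conversion route here.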
@wpmed92 Thanks for your suggestion. I did what you suggested, and torch.cuda.is_available() was still true. However, in C++ the model still seems to stall forever on the `->forward` call. No error, it just never returns.
Oh, I see. Not completely related, but we found something really interesting. On Android, inference with the jit.script()-saved model runs in about a second; on iOS it took 30 seconds, so on iOS we use the traced model instead, which runs at about the same speed as on Android. Internally both platforms use the same C++ library, so I'm wondering what could cause such a difference.
Glad to have finally resolved this. I'm using TouchDesigner as the environment that loads and executes my DLL. It turns out I needed to copy ALL of the libtorch DLLs into a location that TouchDesigner itself loads from (C:/Program Files/Derivative/TouchDesigner/bin), not just alongside my custom DLL in Documents/Derivative/Plugins.
I started a discussion here https://discuss.pytorch.org/t/debugging-runtime-error-module-forward-inputs-libtorch-1-4/82415
I modified `u2net_test.py` and used `torch.jit.trace` to save a module, then loaded it in C++.
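The tracing step could look roughly like this. This is a sketch with a small stand-in model and an illustrative file name, since the exact modifications to `u2net_test.py` aren't shown; the 320×320 input size matches U-2-Net's usual test-time resizing:

```python
import torch

# stand-in model; in the real script this would be the loaded U-2-Net
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 1, 3, padding=1),
    torch.nn.Sigmoid(),
)
model.eval()

example = torch.rand(1, 3, 320, 320)  # example input of the expected shape
traced = torch.jit.trace(model, example)
traced.save("u2net_traced.pt")  # illustrative file name

# sanity check before handing the file to C++:
# the reloaded traced module should match the eager model
loaded = torch.jit.load("u2net_traced.pt")
assert torch.allclose(loaded(example), model(example))
```

Verifying that the reloaded module reproduces the eager model's output in Python first helps separate conversion problems from problems on the C++ side.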
The error is at https://github.com/pytorch/pytorch/blob/4c0bf93a0e61c32fd0432d8e9b6deb302ca90f1e/torch/csrc/jit/api/module.h#L112. It says `inputs` has size 0. I don't know whether that's the cause of the exception or a result of it.
Do you have advice about running U-2-Net in C++? Thank you.