Closed: Roios closed this issue 1 year ago
I haven't tried running the models with the C++ API, but that could be interesting; it might improve runtime performance. Can you elaborate on what doesn't work? Are you getting any errors?
My idea was simply to convert the model using PyTorch's scripting option, as I do for other models, then load it in my C++ program and run inference there. From what I could tell, the network architecture as it stands is not scriptable. I haven't dug deep enough to pinpoint the root cause. I don't expect it to be much faster, but it would open more doors for using the model in other programs.
You can try exporting to ONNX instead.
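For example, something along these lines (a minimal sketch with a placeholder model and input shape, not the actual network from this repository; adjust accordingly):

```python
import torch

# Placeholder model and dummy input; substitute the real network and its input shape.
model = torch.nn.Sequential(torch.nn.Linear(16, 4))
model.eval()
dummy_input = torch.randn(1, 16)

# Export to ONNX; the resulting file can then be run from C++ via ONNX Runtime.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)
```

Note that the default ONNX export path traces the model, so it may succeed even when scripting fails, provided the forward pass has no data-dependent control flow.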
I am closing this issue; you can reopen it if it's still relevant.
First, great work!
I was trying to train the model in Python and save it for C++ inference. The classic approach doesn't work:
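Something like the following (a minimal sketch with a placeholder model, since the actual network is not shown here):

```python
import torch

# Placeholder model; stands in for the actual network discussed in this issue.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 4),
)
model.eval()

# "Classic" TorchScript export: script (or trace) the model and save it.
scripted = torch.jit.script(model)  # torch.jit.trace(model, example_input) is the alternative
scripted.save("model.pt")
```

The saved `model.pt` would then be loaded on the C++ side with `torch::jit::load("model.pt")` and run through its `forward` method.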
Do you have any suggestions on how to do it?