Hello @sdcb. I wanted to inquire about something. I'm in the process of transitioning from GPU to CPU, and for this I've built a custom version of Paddle tailored for CPU usage, incorporating the ONNX runtime. Here are the cmake configuration flags I used:

Following this, I employed the setup as follows:
```csharp
// Use the locally bundled English v3 recognition model
var recognitionModel = LocalRecognizationModel.EnglishV3;
// Ask Paddle Inference to run on the ONNX runtime backend
var config = PaddleDevice.Onnx();
var predictor = recognitionModel.CreateConfig().Apply(config).CreatePredictor();
```
I'm uncertain if this is the correct method for configuring ONNX. While it seems to function, I'm not completely sure if it's actually running the ONNX version. I didn't perform any explicit model conversions and couldn't locate any ONNX files on the machine where this code is executed. Could you offer some guidance? Should I carry out an explicit model conversion and then instantiate a recognition model using this ONNX file, or is the code I have already sufficient, with Paddle handling the rest?
Additional info:

I found this in the logs:

It seems to be converting by itself. However, where can I find this model? Is there any way to set a caching folder for it? Since I'm running everything in Lambda, I don't want to spend time on the conversion on every invocation; I'd rather keep the converted model in a cache. Could you advise?
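For context on what I've tried so far: since Lambda reuses a warm execution environment between invocations, my current workaround is to build the predictor once in a static field, so the implicit Paddle-to-ONNX conversion only runs on a cold start. A minimal sketch reusing the exact call chain from the snippet above (the namespaces and the `PaddlePredictor` type are my assumptions; only the chain itself is taken from my working code):

```csharp
using System;
using Sdcb.PaddleInference;           // assumed namespace for PaddleDevice/PaddlePredictor
using Sdcb.PaddleOCR.Models.Local;    // assumed namespace for LocalRecognizationModel

public static class PredictorCache
{
    // Lazy<T> gives thread-safe one-time initialization on cold start;
    // warm invocations in the same container reuse the cached predictor,
    // so any implicit Paddle-to-ONNX conversion only happens once.
    private static readonly Lazy<PaddlePredictor> _predictor = new(() =>
        LocalRecognizationModel.EnglishV3
            .CreateConfig()
            .Apply(PaddleDevice.Onnx())
            .CreatePredictor());

    public static PaddlePredictor Instance => _predictor.Value;
}
```

This only helps within a warm container, though; if there is a dedicated cache-folder setting, note that /tmp is the only writable path inside a Lambda environment, so that is where any converted model file would have to live.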