Very fair request. We had #30 for this but haven't had time to look into it.
Here is some quick information that I hope you'll find useful. The inference function runs inference on a model, given a set of inputs:
inference(Onnx__ModelProto *model, Onnx__TensorProto **inputs, int nInputs)
The input is propagated towards the output, and through the all_context
global variable you can access each node's output (see the sketch after the example below).
So it shouldn't be very difficult to do. Here is some quick, pseudocode-ish C (the exact allocation/init calls depend on the protobuf-c generated code, so check the generated `onnx.pb-c.h`). Have a look at the documentation for more info about the struct Onnx__TensorProto.
#include <stdio.h>
#include <stdlib.h>
#include "onnx.pb-c.h"  /* protobuf-c generated ONNX structs */
/* Plus the connxr headers declaring openOnnxFile, resolve,
 * inference and all_context. */

int main()
{
    /* Create your input tensor and allocate some memory */
    Onnx__TensorProto *inp0set0 = malloc(sizeof *inp0set0);
    onnx__tensor_proto__init(inp0set0);

    /* Allocate memory for dims and float_data. We assume a 1x2
     * float input here; adapt the shape to your model. */
    inp0set0->n_dims = 2;
    inp0set0->dims = malloc(inp0set0->n_dims * sizeof *inp0set0->dims);
    inp0set0->dims[0] = 1;
    inp0set0->dims[1] = 2;
    inp0set0->data_type = ONNX__TENSOR_PROTO__DATA_TYPE__FLOAT;
    inp0set0->n_float_data = 2;
    inp0set0->float_data = malloc(inp0set0->n_float_data * sizeof(float));
    inp0set0->float_data[0] = 10;
    inp0set0->float_data[1] = 10;

    /* Open your onnx model */
    Onnx__ModelProto *model = openOnnxFile("model.onnx");

    /* Set the input name to match the model's first graph input */
    inp0set0->name = model->graph->input[0]->name;

    /* Create the array of inputs to the model */
    Onnx__TensorProto *inputs[] = { inp0set0 };

    /* Resolve all inputs and operators */
    resolve(model, inputs, 1);

    /* Run inference on your input */
    Onnx__TensorProto **output = inference(model, inputs, 1);

    /* Print the last output, which is the model output */
    for (size_t i = 0; i < all_context[_populatedIdx].outputs[0]->n_float_data; i++) {
        printf("float_data[%zu] = %f\n", i,
               all_context[_populatedIdx].outputs[0]->float_data[i]);
    }
    return 0;
}
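If you also want to inspect intermediate results, you can walk all_context after inference has run. A minimal sketch that would go at the end of main() above, assuming the all_context entries follow the graph's node order and that each node's first output is a float tensor (adapt the data-type handling to your model):

    /* Dump the first output of every node after inference.
     * Assumes all_context[n] corresponds to model->graph->node[n]. */
    for (size_t n = 0; n < model->graph->n_node; n++) {
        Onnx__TensorProto *out = all_context[n].outputs[0];
        if (out == NULL) continue;  /* output not populated */
        printf("node %zu (%s):\n", n, model->graph->node[n]->name);
        for (size_t i = 0; i < out->n_float_data; i++) {
            printf("  float_data[%zu] = %f\n", i, out->float_data[i]);
        }
    }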
Let me know if you need further assistance. Have a look into the `inference.c` and `connxr.c` files; you can reuse some stuff from there. And by the way, if you develop an example, it would be a great addition to the examples folder or the existing documentation.
Edit: the code above was wrong; I forgot to call the resolve function. I've just fixed it.
Closing this due to inactivity, let us know if you need further support.
Need a thorough example showing how to do inference on an ONNX model in C. It would be nice if it were possible to test it with custom input instead of the .pb files.