SiR0N opened this issue 4 years ago
Can you clarify what you are using to run the ONNX model?
Hi, sorry I forgot to mention it:
I use the C API of the runtime (with JNI). I used the mnist and fer models without problems.
The flow is:
1) (Java) getFloatArrayFromResizedImage -> array
2) (Java) "send" array to C
3) (C) run the ONNX model with array -> array2
4) (C) "send" array2 to Java
5) (Java) postProcess(array2)
This is the input and output info of the YOLO model that I got through the C API:
INPUTs INFO:
Number of Inputs 1
Input 0 Name: image
Input 0 : type = 1
Input 0 : num_dims = 4
Input 0 : dim 0 = 1 // if I do not set input_node_dims[0] = 1, the reported value is -1 and I get the wrong final size
Input 0 : dim 1 = 3
Input 0 : dim 2 = 416
Input 0 : dim 3 = 416
INPUT TENSOR (0) Size = 519168
TENSOR image (0) is a TENSOR
ALL TENSORS HAVE BEEN CREATED, Let's RUN!!!!
OUTPUTs INFO:
Number of Outputs = 1
Output 0 Name: grid
Output 0 : type = 1
Output 0 : num_dims = 4
Output 0 : dim 0 = 1 // if I do not set output_node_dims[0] = 1, the reported value is -1 and I get the wrong final size
Output 0 : dim 1 = 125
Output 0 : dim 2 = 13
Output 0 : dim 3 = 13
OUTPUT TENSOR (0) Size: 21125
RUN!!!!
OUTPUT TENSOR 0 is a TENSOR
How to get input/output sizes:
input_node_dims[0] = 1;  // the batch dim is reported as -1 (dynamic), so clamp it to 1
size_t input_tensor_size = 1;
for (size_t j = 0; j < num_dims; j++) {
    input_tensor_size = input_tensor_size * input_node_dims[j];
    __android_log_print(android_LogPriority::ANDROID_LOG_ERROR, logid,
                        "Input %zu : dim %zu = %jd\n", i, j, input_node_dims[j]);
}
// wrong size when using: OrtGetTensorShapeElementCount(tensor_info, &input_tensor_size);
// (presumably because of the -1 batch dimension)
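As a side note, the same clamp-the-batch-dim-to-1 logic can be cross-checked from Python. This is only a minimal sketch; the model file name and the exact way the symbolic batch dimension is reported are assumptions:

import numpy as np
import onnxruntime as rt

sess = rt.InferenceSession("yolo-Model.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # e.g. ['None', 3, 416, 416] -- the batch dim is symbolic, not a real size

# treat any symbolic/unknown dim as 1, mirroring input_node_dims[0] = 1 in the C code above
dims = [d if isinstance(d, int) and d > 0 else 1 for d in inp.shape]
print("element count:", int(np.prod(dims)))       # 1 * 3 * 416 * 416 = 519168
print("buffer bytes :", int(np.prod(dims)) * 4)   # float32 elements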
How to create the tensor:
OrtValue* input_tensor = nullptr;
ost = OrtCreateTensorWithDataAsOrtValue(
    ort_info,
    img,                                   // output of: getFloatArrayFromResizedImage
    input_tensor_size * sizeof(type),      // buffer size in bytes
    reinterpret_cast<const int64_t *>(input_node_dims.data()),
    input_node_dims.size(),                // num_dims
    type,
    &input_tensor);
How to run:
std::vector<OrtValue*> ortOutput(num_output_nodes);
ost = OrtRun(ort_ses,
             nullptr,                    // run options
             input_node_names.data(),
             input_tensor_list.data(),   // one OrtValue* per input
             num_input_nodes,
             output_node_names.data(),
             num_output_nodes,
             ortOutput.data());
//get values
float* data = nullptr; // I run the post-processing on this array and "send" it to Java
ost = OrtGetTensorMutableData(ortOutput[i], reinterpret_cast<void **>(&data)); // i = output index (0 here, since there is only one output)
Hi, I checked the model in Python and it seems to me that the model is wrong and has no weights.
I did it in Python because I was not sure whether my pre/post-processing was right, and I got the same results as with the previous code, so I think something is wrong in the model.
I did more or less the same as here, but I use ONNX Runtime:
def inference(sess, preprocessed_image):
    input_name = sess.get_inputs()[0].name
    output_name = sess.get_outputs()[0].name
    predictions = sess.run([output_name], {input_name: preprocessed_image})
    return predictions

sess = rt.InferenceSession("yolo-Model.onnx")
predictions = inference(sess, preprocessed_image)
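To check whether the file actually carries weights, the initializers stored in the graph can be inspected. A minimal sketch with the onnx package (only the file name is taken from above; a model may also keep weights in Constant nodes, so this is just a first check):

import onnx
from onnx import numpy_helper

model = onnx.load("yolo-Model.onnx")
inits = model.graph.initializer
print("number of initializers:", len(inits))   # 0 here usually means no weights stored as initializers
for init in inits[:5]:
    w = numpy_helper.to_array(init)
    print(init.name, w.shape, "mean(|w|) =", float(abs(w).mean()))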
Can anyone check the model to see if it is right?
@jiafatom could you help take a look?
Some info here: This model was converted from a Core ML version of Tiny YOLO, around 1.5 years ago.
@jiafatom should we update it to be directly converted instead?
Tiny YOLOv3 has been added to the model zoo recently. @SiR0N could you try that one?
Should we always update a model to the latest version? I am not sure the old version is still needed if the latest one is ready to use.
I just tested the Tiny YOLOv2 ONNX model (opset 8) with the test data, and it works well for me. The model has weights, so is it possible that there is something wrong with your script? In another issue you have a similar problem with Tiny YOLOv3; can you check my example there?
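For reference, the model zoo archives ship a test_data_set_0 folder with serialized input/output tensors next to the .onnx file; a rough sketch of comparing against them (the file and folder names are assumptions about how the archive is laid out):

import numpy as np
import onnx
from onnx import numpy_helper
import onnxruntime as rt

def load_pb(path):
    t = onnx.TensorProto()
    with open(path, "rb") as f:
        t.ParseFromString(f.read())
    return numpy_helper.to_array(t)

x = load_pb("test_data_set_0/input_0.pb")
expected = load_pb("test_data_set_0/output_0.pb")

sess = rt.InferenceSession("tinyyolov2-8.onnx")
got = sess.run(None, {sess.get_inputs()[0].name: x})[0]
print("max abs diff:", float(np.abs(got - expected).max()))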
Hi @EmmaNingMS, @jiafatom
I tried the code that @jiafatom shared with me and it worked for me (YOLOv3), but I would like to use Tiny YOLOv2 since I already have the data processing done in Java (as I described before).
@jiafatom can you share the Tiny YOLOv2 implementation with me? There is a (high) chance that something is wrong in my script, but I cannot find it; I would say something is wrong in the preprocessing.
Right now I am not sure which version of Tiny YOLOv2 I use; I will take the opset 8 one and try again.
I just checked Tiny YOLOv2 (opset 8) and it did not work :( (neither in the Java/C code nor in Python).
Hello
I want to use the ONNX Tiny YOLOv2 model on Android: https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/tiny_yolov2
My implementation is based on this one: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowYoloDetector.java
I made it work but I always get (almost) the same output with different inputs (I use the VOC dataset):
That's the output of this picture:
I am not sure where the problem is: whether I do the pre/post-processing wrong or the model is not correct, because I tried the model from the original website with the same images and it works fine.
This is my preprocessing:
I know that the input format is NCHW, in this case 1x3x416x416. I just wonder what that means: should I feed the model a 1D array of size 3x416x416 in this format [R,G,B,R,G,B,R,G,B,...] (which I use right now), or in this one: [R,R,R,..., G,G,G,..., B,B,B]?
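For what it's worth, NCHW with C = 3 corresponds to the planar layout (all R values, then all G, then all B). A minimal numpy sketch of going from an interleaved HWC image to the 1x3x416x416 buffer (any pixel scaling/normalization is left out, since that depends on the model):

import numpy as np

hwc = np.zeros((416, 416, 3), dtype=np.float32)  # the resized image, interleaved [R,G,B,R,G,B,...]
chw = np.transpose(hwc, (2, 0, 1))               # shape (3, 416, 416): all R, then all G, then all B
nchw = np.expand_dims(chw, 0)                    # shape (1, 3, 416, 416), the model input
flat = np.ascontiguousarray(nchw).ravel()        # flat buffer in the order the C API expects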
Post-processing:
Am I doing anything wrong? It seems to me that there is something wrong in the model weights, because no matter the input I get almost the same output.
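In case it helps with the post-processing, here is a rough sketch of decoding the 1x125x13x13 output, where 125 = 5 anchors x (5 box terms + 20 VOC classes). The anchor values are the ones commonly quoted for the VOC Tiny YOLOv2 and should be treated as an assumption, as should the confidence threshold:

import numpy as np

ANCHORS = [1.08, 1.19, 3.42, 4.41, 6.63, 11.38, 9.42, 5.11, 16.62, 10.52]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode(output, conf_threshold=0.3):
    # output: numpy array of shape (1, 125, 13, 13)
    grid = output.reshape(5, 25, 13, 13)  # 5 anchors, each with 5 box terms + 20 class scores
    detections = []
    for b in range(5):
        for row in range(13):
            for col in range(13):
                tx, ty, tw, th, tc = grid[b, 0:5, row, col]
                x = (col + sigmoid(tx)) * 32.0               # 32 = 416 / 13, cell size in pixels
                y = (row + sigmoid(ty)) * 32.0
                w = np.exp(tw) * ANCHORS[2 * b] * 32.0
                h = np.exp(th) * ANCHORS[2 * b + 1] * 32.0
                classes = softmax(grid[b, 5:, row, col])
                score = sigmoid(tc) * classes.max()
                if score > conf_threshold:
                    detections.append(((x - w / 2, y - h / 2, w, h), float(score), int(classes.argmax())))
    return detections  # non-maximum suppression would still be needed on top of this

If the model is healthy, clearly different images should give clearly different detections here; nearly identical outputs for different inputs would point at either the weights or the input buffer layout.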