Hello,
I'd like to try your lib-mobile for my thesis and have a few questions; I'd appreciate your help.
I built my model in MATLAB and it works there.
I tried your Android example and it works fine.
Then I tried the following: I replaced your model with mine (after converting it to protobin) and copied over the weights and the Caffe net.
The first difference is that my model's input layer has 3 channels:
layer {
  name: "input"
  type: "Input"
  top: "color_crop"
  input_param {
    shape {
      dim: 1
      dim: 3
      dim: 128
      dim: 128
    }
  }
}
Before running inference, I converted the image acquired from the camera into a w × h × 3 (RGB) matrix to match the input layer.
But when I run inference, everything crashes, because the input must be a byte array.
Where am I going wrong? Why is there this difference compared to Caffe on the PC?
Should the library take the image not as a matrix but as a flat byte array of size w × h × 3?
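To make my question concrete, here is a minimal sketch (plain Java, no dependency on your library's actual API, since I don't know it) of how I imagine flattening camera pixels into a byte array matching the 1×3×128×128 NCHW shape. The planar layout and the RGB channel order are my assumptions; Caffe models are often fed BGR instead, so please correct me if the library expects something else:

```java
public class RgbFlatten {
    // Convert ARGB-packed ints (as returned by Android's Bitmap.getPixels)
    // into a planar byte array of length 3*w*h: all R values, then all G,
    // then all B, matching a 3 x h x w (CHW) blob layout.
    // NOTE: channel order (RGB vs. BGR) and planar-vs-interleaved layout
    // are assumptions here, not the library's documented contract.
    public static byte[] toPlanarBytes(int[] argb, int w, int h) {
        int plane = w * h;
        byte[] out = new byte[3 * plane];
        for (int i = 0; i < plane; i++) {
            int p = argb[i];
            out[i]             = (byte) ((p >> 16) & 0xFF); // R plane
            out[plane + i]     = (byte) ((p >> 8)  & 0xFF); // G plane
            out[2 * plane + i] = (byte)  (p        & 0xFF); // B plane
        }
        return out;
    }
}
```

Is something like this what the library does internally, or is the conversion left to the caller?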
Thanks in advance for your help.