Neargye / hello_tf_c_api

Neural Network TensorFlow C API
MIT License
468 stars 134 forks

image prediction #2

Open betterhalfwzm opened 6 years ago

betterhalfwzm commented 6 years ago

How do I load images and run a prediction? Thx

Neargye commented 6 years ago

You need to load the image as an array of floats, copy this array into the input tensor, and run the session.

Later I will add an example of image prediction.
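The flow described above, minus the TensorFlow calls, can be sketched like this (a minimal sketch; `ToFloatInput` and the [0, 1] scaling are illustrative assumptions, not part of this repo, and the right scaling depends on how the model was trained):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert an 8-bit interleaved image buffer (HWC layout) into the flat
// float vector that a TF_FLOAT input tensor of shape {1, h, w, c} expects.
std::vector<float> ToFloatInput(const std::uint8_t* pixels,
                                int h, int w, int c) {
  std::vector<float> out(static_cast<std::size_t>(h) * w * c);
  for (std::size_t i = 0; i < out.size(); ++i) {
    out[i] = pixels[i] / 255.0f;  // scale to [0, 1]; model-dependent
  }
  return out;
}
```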

lesley9999 commented 5 years ago

@Neargye. Did you add the image prediction example? I would like to do object detection in C, but there are no docs regarding the C API. Thanx. Or maybe a guide on how to convert https://github.com/lysukhin/tensorflow-object-detection-cpp to C.

tika64208 commented 5 years ago

> You need to load the image as an array of floats, copy this array into the input tensor, and run the session.
>
> Later I will add an example of image prediction.

Do you have an example of image prediction?

Xonxt commented 5 years ago

> You need to load the image as an array of floats, copy this array into the input tensor, and run the session.
>
> Later I will add an example of image prediction.

Here's a working example of image prediction:

```cpp
// load the image (cv::IMREAD_COLOR is the default flag)
cv::Mat image = cv::imread( "d:\\image.jpg", cv::IMREAD_COLOR );

// convert image to float32
cv::Mat image32f;
image.convertTo( image32f, CV_32F );

// copy to vector:
std::vector<float> input_data;
input_data.assign( (float*) image32f.data, (float*) image32f.data + image32f.total() * image32f.channels() );

// dimensions (NHWC layout, batch size 1)
const std::vector<std::int64_t> input_dims = { 1, image.rows, image.cols, image.channels() };

// Tensors:
const std::vector<TF_Output> input_ops = { { TF_GraphOperationByName( graph, "input" ), 0 } };
const std::vector<TF_Tensor*> input_tensors = { tf_utils::CreateTensor( TF_FLOAT, input_dims, input_data ) };
const std::vector<TF_Output> out_ops = { { TF_GraphOperationByName( graph, "output" ), 0 } };
std::vector<TF_Tensor*> output_tensors = { nullptr };

// create TF session:
auto session = tf_utils::CreateSession( graph );

// run the session:
const TF_Code code = tf_utils::RunSession( session, input_ops, input_tensors, out_ops, output_tensors );

// get the data:
const std::vector<std::vector<float>> data = tf_utils::GetTensorsData<float>( output_tensors );
```
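For a plain classifier, reading the result out of `data.at( 0 )` is just an argmax over the scores. A minimal, model-independent sketch (`ArgMax` is an illustrative helper, not part of `tf_utils`):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Index of the highest-scoring class in a flat logit/probability vector,
// e.g. the first tensor returned by GetTensorsData<float>.
std::size_t ArgMax(const std::vector<float>& scores) {
  return static_cast<std::size_t>(
      std::max_element(scores.begin(), scores.end()) - scores.begin());
}
```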
Neargye commented 5 years ago

Hi @Xonxt, thanks for your reply. Could you please fix the code in your comment?

Can I also add an example to the repository based on your code? To be honest, I don't work much with images, so it was not so easy for me to make my own example; I mainly work with time-series analysis.

Xonxt commented 5 years ago

> Hi @Xonxt, thanks for your reply. Could you please fix the code in your comment?
>
> Can I also add an example to the repository based on your code? To be honest, I don't work much with images, so it was not so easy for me to make my own example; I mainly work with time-series analysis.

Hi, @Neargye. I mean... you can, but I've only tested that it runs and doesn't crash; I haven't checked what the result of the prediction looks like. That is, I've seen that the prediction returns a tensor of the correct expected size (in my case, a stack of 19 heatmaps of size 46x46 for an input image of size 368x368), but I haven't yet verified that they contain what I expect them to.

EDIT: @Neargye I checked it today, and it works as intended. The code above is enough if we expect some kind of probability or a class number. Here's a snippet for the case where the output of the model is also an image:

```cpp
const TF_Code code = tf_utils::RunSession( session, input_ops, input_tensors, out_ops, output_tensors );

// expected output dimensions:
const std::vector<std::int64_t> output_dims = { 1, 46, 46, 19 };

const std::vector<std::vector<float>> data = tf_utils::GetTensorsData<float>( output_tensors );

// convert the tensor data into a cv::Mat (note the int cast for the channel count)
cv::Mat heatmaps( (int) output_dims[1], (int) output_dims[2], CV_32FC( (int) output_dims[3] ), (void*) data.at( 0 ).data() );

// 'heatmaps' is now a 46x46x19 Mat of floats, do what you want with it.
```
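If OpenCV is not available, the peak of one heatmap can also be located directly in the flat NHWC output. A sketch under the dimensions above (`PeakOfChannel` is an illustrative helper, not part of `tf_utils`):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Find the (row, col) of the maximum value in channel `ch` of a flat
// NHWC tensor with batch size 1, e.g. one of the 19 heatmaps above.
std::pair<int, int> PeakOfChannel(const std::vector<float>& nhwc,
                                  int h, int w, int c, int ch) {
  int best_r = 0, best_col = 0;
  float best = nhwc[static_cast<std::size_t>(ch)];  // value at (0, 0)
  for (int r = 0; r < h; ++r) {
    for (int col = 0; col < w; ++col) {
      const float v =
          nhwc[(static_cast<std::size_t>(r) * w + col) * c + ch];
      if (v > best) { best = v; best_r = r; best_col = col; }
    }
  }
  return { best_r, best_col };
}
```

For the model above you would call it as `PeakOfChannel( data.at( 0 ), 46, 46, 19, k )` for heatmap `k`.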