serizba / cppflow

Run TensorFlow models in C++ without installation and without Bazel
https://serizba.github.io/cppflow/
MIT License

No such file "tensorflow/c/tf_tstring.h" in C API. #95

Closed busyyang closed 3 years ago

busyyang commented 3 years ago

I downloaded and unzipped the C library from https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-windows-x86_64-2.4.0.zip

but there is no file "tensorflow/c/tf_tstring.h" in it, which causes an error when compiling. Does anyone know how to fix this? Or could someone send me a C library that works?

serizba commented 3 years ago

Hi,

I don't know why this is happening, but the provided include folders for windows-2.4.0 and linux-2.4.0 are different, and it looks like the Windows version is missing some files, such as tf_tstring.h.

I suggest using a previous version; you can just change the version number in the link you provided. For instance, the 2.3.0 version:

https://storage.googleapis.com/tensorflow/libtensorflow/libtensorflow-gpu-windows-x86_64-2.3.0.zip

Hope it helps

busyyang commented 3 years ago

Hi serizba, thanks for your reply. I did find tf_tstring.h in the Linux version of the library, and it works well if I copy its include folder over the original one. I also found another way to avoid the error, as long as Model::restore() and Model::save() are not used in user code (see the sketch below):

  1. Comment out the code for the functions Model::restore() and Model::save().
  2. Comment out #include "tensorflow/c/tf_tstring.h" in c_api.h.
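
A minimal sketch of step 2, assuming the include block near the top of c_api.h (the surrounding lines may differ between releases):

// tensorflow/c/c_api.h  (workaround only; copying the Linux include folder is the cleaner fix)
// #include "tensorflow/c/tf_tstring.h"   // commented out: this header is missing from the Windows 2.4.0 zip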
busyyang commented 3 years ago

I also have another question. The model loads successfully once I avoid the tf_tstring.h error, but the prediction result is different from the result in Python. I use EfficientPoseI.pb from daniegr/EfficientPose. The prediction should be a confidence map with 16 channels: when a 256x256 color image is fed into the model, the output should be a 256x256x16 confidence map with values between 0 and 1.

But the values from cppflow fall outside the range [0, 1].

busyyang commented 3 years ago

Here is my test code:

#include "cppFlow\Model.h"
#include "cppFlow\Tensor.h"
#include <opencv2\opencv.hpp>
#include <chrono>

cv::Mat preprocess_input(cv::Mat& inMat)
{
    cv::Mat outMat(inMat.rows, inMat.cols, CV_32FC3);
    for (int i = 0; i < inMat.rows; i++)
    {
        uchar *p = inMat.ptr<uchar>(i);
        float *d = outMat.ptr<float>(i);
        for (int j = 0; j < inMat.cols; j++)
        {
            d[j * 3 + 0] = (((float)p[j * 3 + 0] / 255.0F) - 0.485F) / 0.229F;
            d[j * 3 + 1] = (((float)p[j * 3 + 1] / 255.0F) - 0.456F) / 0.224F;
            d[j * 3 + 2] = (((float)p[j * 3 + 2] / 255.0F) - 0.406F) / 0.225F;
        }
    }
    return outMat;
}

int main()
{
    printf("Hello from TensorFlow C library version %s\n", TF_Version());
    Model m("EfficientPoseModel/EfficientPoseI.pb");
    //m.init();
    Tensor input_tensor{ m, "input_res1" };
    Tensor output{ m, "upscaled_confs/BiasAdd" };

    cv::Mat img = cv::imread("sift.jpg", cv::IMREAD_COLOR);
    //cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
    cv::Mat img2;
    cv::resize(img, img2, cv::Size(256, 256));
    cv::Mat img_f = preprocess_input(img2);

    std::vector <float> img_data;
    img_data.assign(img_f.data, img_f.data + img_f.total()*img_f.channels());

    input_tensor.set_data(img_data, { 1,256,256,3 });
    m.run(input_tensor, output);

    for (float f : output.get_data<float>()) {
        std::cout << f << " ";
    }
    std::cout << std::endl;

    getchar();
    return 0;
}

The output from cppflow is shown in the attached screenshot.

The output from the Python code is shown in the attached screenshot.

serizba commented 3 years ago

Hi @busyyang ,

I would guess this has something to do with the data being fed in incorrectly. Could you please send the image you are using?

Also, have you tried the cppflow2 version? With that version it is easier to read an image and convert it to float, and you won't need OpenCV.
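
For reference, a minimal sketch of that path, based on the cppflow2 examples (it expects a SavedModel directory rather than a frozen .pb, so the model folder name below is only a placeholder):

#include <iostream>
#include "cppflow/cppflow.h"

int main()
{
    // Read the file, decode the JPEG, cast to float and add a batch dimension.
    auto input = cppflow::decode_jpeg(cppflow::read_file(std::string("sift.jpg")));
    input = cppflow::cast(input, TF_UINT8, TF_FLOAT);
    input = cppflow::expand_dims(input, 0);

    // "EfficientPoseModel" stands in for a SavedModel directory.
    cppflow::model model("EfficientPoseModel");
    auto output = model(input);

    std::cout << output << std::endl;
    return 0;
}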

busyyang commented 3 years ago

Hi @serizba ,

Here is the simple picture I use (named sift.jpg). I believe I follow the pipeline for preprocessing the input data. The preprocessing code is below:

......
    if mode == 'torch':
        x /= 255.
        mean = [0.485, 0.456, 0.406]
        std = [0.229, 0.224, 0.225]

......

I have not tried cppflow2 yet, as I thought it only works with TensorFlow 2.x. I will give cppflow2 a try, but could you please take some time to look into the prediction difference I mentioned above?

serizba commented 3 years ago

How are you running the same network in Python? Can you provide a small piece of Python code to compare with?

busyyang commented 3 years ago

Hi @serizba, I use the sample code from daniegr/EfficientPose.

git clone https://github.com/daniegr/EfficientPose
cd EfficientPose
python track.py --path "path_for_test_image" --model "I" --framework "tensorflow" --visualize

In debug mode, the confidence map can be inspected in the output of the function infer(batch, model, lite, framework).

Separately, I also tried cppflow2, but it seems to require C++17 support. I wish it could be built against an older C++ standard.

busyyang commented 3 years ago

Hi @serizba, I have fixed the problem of the output differing between the C API and Python. It seems the sample code cannot be used directly with CV_32F Mat inputs; the model input needs to be serialized into a float vector by hand. Here is reference code for anyone who runs into the same issue:

#include "cppFlow\Model.h"
#include "cppFlow\Tensor.h"
#include <opencv2\opencv.hpp>
#include <chrono>

#define Model_Input_Size 256

void preprocess_input(cv::Mat & inMat, std::vector<float>& input_data)
{
    // Per-channel normalization ("torch" mode): x / 255, then subtract the mean and divide by the std.
    for (int i = 0; i < inMat.rows; i++)
    {
        uchar *p = inMat.ptr<uchar>(i);
        for (int j = 0; j < inMat.cols; j++)
        {
            int idx = (i * inMat.cols + j) * 3;
            input_data.at(idx + 0) = (((float)p[j * 3 + 0] / 255.0F) - 0.485F) / 0.229F;
            input_data.at(idx + 1) = (((float)p[j * 3 + 1] / 255.0F) - 0.456F) / 0.224F;
            input_data.at(idx + 2) = (((float)p[j * 3 + 2] / 255.0F) - 0.406F) / 0.225F;
        }
    }
}

int main()
{
    printf("Hello from TensorFlow C library version %s\n", TF_Version());
    Model m("EfficientPoseModel/EfficientPoseI.pb");
    //m.init();
    Tensor input_tensor{ m, "input_res1" };
    Tensor output{ m, "upscaled_confs/BiasAdd" };

    std::chrono::time_point<std::chrono::system_clock> startTP = std::chrono::system_clock::now();
    cv::Mat img = cv::imread("David.jpg", cv::IMREAD_COLOR);
    cv::Size input_size = img.size();
    //cv::cvtColor(img, img, cv::COLOR_BGR2RGB);
    cv::Mat img2;
    cv::resize(img, img2, cv::Size(Model_Input_Size, Model_Input_Size));

    std::vector<float> img_data(Model_Input_Size * Model_Input_Size * 3);
    preprocess_input(img2, img_data);

    input_tensor.set_data(img_data, { 1,Model_Input_Size,Model_Input_Size,3 });
    m.run(input_tensor, output);

    std::chrono::time_point<std::chrono::system_clock> finishTP = std::chrono::system_clock::now();
    std::cout << "Time Taken in forward pass = " << std::chrono::duration_cast<std::chrono::milliseconds>(finishTP - startTP).count() << " ms" << std::endl;

    getchar();
    return 0;
}

Thanks for this awesome project and for the kind replies. As for cppflow2, if it could build without requiring C++17, it could be adopted more widely.

serizba commented 3 years ago

@busyyang

Glad that you finally fixed it. I appreciate your suggestion, but I think for the moment I will keep the current implementation.