xunzixunzi / ImagePlayer

Automatic image slideshow software
MIT License

About YOLOv8-TensorRT-CPP #1

Open Lishumuzixin opened 9 months ago

Lishumuzixin commented 9 months ago

Hello — since I had nowhere else to ask, I'm asking here; I hope you don't mind. I want to deploy YOLOv8 on Windows. Can your modified YOLOv8-TensorRT-CPP project be deployed on Windows, and are there detailed instructions for Windows? For the lib/tensorrt-cpp-api folder in that project, should I download your modified version or the original author's? Thanks.

xunzixunzi commented 9 months ago

If you download everything from my repos, it deploys successfully. This is my environment:

Lishumuzixin commented 9 months ago

I can run your tensorrt-cpp-api project fine, but YOLOv8-TensorRT-CPP doesn't work. Do you mean I should put your tensorrt-cpp-api project into YOLOv8-TensorRT-CPP/libs?

xunzixunzi commented 9 months ago

Yes, exactly.

Lishumuzixin commented 9 months ago

I'm not very familiar with VS2019 — which one should I set as the startup project? Thank you! (screenshot)

xunzixunzi commented 9 months ago

(screenshot)

Either of these two works.

xunzixunzi commented 9 months ago

After building, running the executable prints a usage example — just follow it.

(screenshot)

Lishumuzixin commented 9 months ago

My VS keeps throwing errors — is it an OpenCV problem? In the TODO I changed the OpenCV path to D:\software\opencv480\build. Is OpenCV without CUDA fine? (screenshot)

xunzixunzi commented 9 months ago

I don't know if it's a bug, but I can't upload images on my end.

First open the Output window from the View menu, then rebuild the whole solution and send me the build messages from the Output window so I can see what's going on.

Lishumuzixin commented 9 months ago

It seems to have been a Debug-mode problem — switching to Release builds successfully, so something in the configuration must be off. Apart from that, I have another question: I don't want to run it from the command line; can I launch the whole project directly by clicking Run in VS? What would I need to change? Thanks for your answers — they've helped me a lot! (screenshot)

xunzixunzi commented 9 months ago

That requires changing the code — I'll write you an example.

Lishumuzixin commented 9 months ago

Thanks a lot!

xunzixunzi commented 9 months ago
#include "yolov8.h"
#include "cmd_line_util.h"

// Runs object detection on an input image and displays the annotated result.
int main(int argc, char *argv[]) {
    // 1. Fill in the engine options:
    EngineOptions engineOptions;
    engineOptions.precision = Precision::FP16;
    // engineOptions.calibrationDataDirectoryPath = "";    // only needed when precision is INT8
    // engineOptions.calibrationBatchSize = 128;           // only needed when precision is INT8
    engineOptions.optBatchSize = 1;
    engineOptions.maxBatchSize = 1;
    engineOptions.deviceIndex = 0;

    YoloV8Config yoloV8Config;
    yoloV8Config.probabilityThreshold = 0.25f;
    yoloV8Config.nmsThreshold = 0.65f;
    yoloV8Config.topK = 100;
    // Parameters used for segmentation:
    yoloV8Config.segChannels = 32;
    yoloV8Config.segH = 160;
    yoloV8Config.segW = 160;
    yoloV8Config.segmentationThreshold = 0.5f;
    // Parameters used for keypoint (pose) detection
    yoloV8Config.numKPS = 17;
    yoloV8Config.kpsThreshold = 0.5;
    // Label information used for classification and segmentation; this example uses the COCO dataset labels
    yoloV8Config.classNames = {
        "person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
        "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
        "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee",
        "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard",
        "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple",
        "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
        "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote", "keyboard", "cell phone",
        "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear",
        "hair drier", "toothbrush"
    };

    const std::string onnxModelPath = "path to your ONNX model";       // change this to yours
    const std::string inputImage = "path to the image to predict";     // change this to yours

    // Load the model file
    YoloV8 yoloV8;
    if (!yoloV8.loadEngine(onnxModelPath, engineOptions, yoloV8Config))
    {
        std::cout << "Error: Unable to load onnx model at path: " << onnxModelPath << std::endl;
        return -1;
    }

    // Load the image
    auto img = cv::imread(inputImage);
    if (img.empty()) {
        std::cout << "Error: Unable to read image at path '" << inputImage << "'" << std::endl;
        return -1;
    }

    // Run inference
    std::vector<InferenceObject> inferenceObjects;
    if (!yoloV8.infer(img, inferenceObjects))
    {
        std::cout << "Inference failure.";
        return -1;
    }

    // Draw the inference results on the image
    yoloV8.drawObjectLabels(img, inferenceObjects);

    std::cout << "Detected " << inferenceObjects.size() << " objects" << std::endl;

    cv::imshow("result", img);
    cv::waitKey(0);

    return 0;
}
Lishumuzixin commented 9 months ago

Thanks!

Lishumuzixin commented 9 months ago
> (quoting the example code above)

In VS2019 under Debug, when execution reaches `yoloV8.loadEngine(onnxModelPath, engineOptions, yoloV8Config)` it always reports an access violation. Do you know how to fix this?

xunzixunzi commented 9 months ago

Not sure — I basically never use Debug mode when writing code, so I've only tested in Release 😂

Lishumuzixin commented 8 months ago

> Not sure — I basically never use Debug mode when writing code, so I've only tested in Release 😂

If I export the YOLOv8 ONNX model with dynamic=True and then convert it to an engine, why does it keep failing? I actually wanted to set the batch size to 6, but in the C++ code input0Batch can apparently only be 1 or -1. The error output is:

    Model supports dynamic batch size
    3: [optimizationProfile.cpp::nvinfer1::OptimizationProfile::setDimensions::119] Error Code 3: API Usage Error (Parameter check failed at: optimizationProfile.cpp::nvinfer1::OptimizationProfile::setDimensions::119, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }) )
    (the message above is printed three times)
    4: [network.cpp::nvinfer1::Network::validate::3189] Error Code 4: Internal Error (images: dynamic input is missing dimensions in profile 0.)

xunzixunzi commented 8 months ago

As a temporary fix, add two lines inside the build() function in engine.cpp:

    // Register a single optimization profile
    IOptimizationProfile* optProfile = builder->createOptimizationProfile();
    for (int32_t i = 0; i < numInputs; ++i) {
        // Must specify dimensions for all the inputs the model expects.
        const auto input = network->getInput(i);
        const auto inputName = input->getName();
        const auto inputDims = input->getDimensions();
        int32_t inputC = inputDims.d[1];
        int32_t inputH = inputDims.d[2];
        int32_t inputW = inputDims.d[3];
        // Temporary fix: the dynamic export leaves H/W as -1, which
        // setDimensions rejects, so pin them to the actual input size.
        inputH = 640;
        inputW = 640;

        // Specify the optimization profile
        optProfile->setDimensions(inputName, OptProfileSelector::kMIN, Dims4(1, inputC, inputH, inputW));
        optProfile->setDimensions(inputName, OptProfileSelector::kOPT, Dims4(engineOptions.optBatchSize, inputC, inputH, inputW));
        optProfile->setDimensions(inputName, OptProfileSelector::kMAX, Dims4(engineOptions.maxBatchSize, inputC, inputH, inputW));
    }
    config->addOptimizationProfile(optProfile);

I found this at the time but forgot to open a PR against the original repo — I'll open a PR and let the original author take a look.

xunzixunzi commented 8 months ago

One more place needs changing: in the runInference() function in engine.cpp, change it to this:

    // Ensure all dynamic bindings have been defined.
    // TODO: Should use allInputShapesSpecified()
    if (!m_context->allInputShapesSpecified()) {
        throw std::runtime_error("Error, not all required dimensions specified.");
    }
Lishumuzixin commented 8 months ago

> (quoting the runInference() change above)

Thanks!!! I really appreciate it.

Lishumuzixin commented 8 months ago

I need to split an input image into 640×640 tiles and run them through inference in a single batch — roughly 6 tiles. (screenshot) After rewriting preprocess this way, the later check

    const auto numInputs = m_inputDims.size();  // number of model input tensors
    if (inputs.size() != numInputs) {
        std::cout << "===== Error =====" << std::endl;
        std::cout << "Incorrect number of inputs provided!" << std::endl;
        return false;
    }

fails: m_inputDims.size() is 1 but inputs.size() is 6. Is there a problem with the inputs my preprocess function builds?