cyrusbehr / tensorrt-cpp-api

TensorRT C++ API Tutorial

About extracting inference results #12

Closed MingiKang closed 1 year ago

MingiKang commented 1 year ago

// Populate the input vectors
const auto& inputDims = engine.getInputDims();
std::vector<std::vector<cv::cuda::GpuMat>> inputs;

// TODO:
// For the sake of the demo, we will be feeding the same image to all the inputs
// You should populate your inputs appropriately.
for (const auto & inputDim : inputDims) {
    std::vector<cv::cuda::GpuMat> input;
    for (size_t j = 0; j < batchSize; ++j) {
        cv::cuda::GpuMat resized;
        // TODO:
        // You can choose to resize by scaling, adding padding, or a combination of the two in order to maintain the aspect ratio
        // You can use Engine::resizeKeepAspectRatioPadRightBottom to resize to a square while maintaining the aspect ratio (it adds padding where necessary to achieve this); see the sketch after this block.
        // If you are running the sample code using the suggested model, then the input image already has the correct size.
        // The following resizes without maintaining aspect ratio so use carefully!
        cv::cuda::resize(img, resized, cv::Size(inputDim.d[2], inputDim.d[1])); // TRT dims are (height, width) whereas OpenCV is (width, height)
        input.emplace_back(std::move(resized));
    }
    inputs.emplace_back(std::move(input));
}
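For reference, a rough sketch of the aspect-ratio-preserving path mentioned in the comments above, as a drop-in for the cv::cuda::resize call inside the loop (assuming Engine::resizeKeepAspectRatioPadRightBottom takes the input GpuMat plus the target height and width; check engine.h for the exact signature):

// Assumed usage: resize img to inputDim.d[1] x inputDim.d[2] while keeping
// the aspect ratio, padding the right/bottom edges where needed.
cv::cuda::GpuMat resized = Engine::resizeKeepAspectRatioPadRightBottom(
    img, inputDim.d[1], inputDim.d[2]);
input.emplace_back(std::move(resized));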

// const std::array<float, 3> subVals {0.5f, 0.5f, 0.5f};
// const std::array<float, 3> divVals {0.5f, 0.5f, 0.5f}; // --> 1 detection after execution (abnormal)
const std::array<float, 3> subVals { 0.f, 0.f, 0.f };
const std::array<float, 3> divVals { 1.f, 1.f, 1.f }; // --> 13 detections after execution (normal)
bool normalize = true;
std::vector<std::vector<std::vector<float>>> featureVectors;
bool succ = runInference(inputs, featureVectors, subVals, divVals, normalize);

13 detections are reported after execution, but some of the values look wrong:

featureVectors[0][0][0] -> num_dets value = 1.962e-44#DEN (strange value; 13 was expected)

featureVectors[0][1][0] -> det_boxes value = 261.750000 (x, normal)
featureVectors[0][1][1] -> det_boxes value = 39.4375000 (y, normal)
featureVectors[0][1][2] -> det_boxes value = 301.250000 (x + width, normal)
featureVectors[0][1][3] -> det_boxes value = 78.8750000 (y + height, normal)
...

featureVectors[0][2][0] -> det_scores value = 0.910644531 (normal)
featureVectors[0][2][1] -> det_scores value = 0.903320313 (normal)
...

featureVectors[0][3][0] -> det_classes value = 2.803e-45#DEN (strange value; a value between 0 and 79 was expected)
featureVectors[0][3][1] -> det_classes value = 1.037e-43#DEN (strange value; a value between 0 and 79 was expected)
...
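As an aside, values like 2.803e-45#DEN are exactly what small int32 values look like when their four bytes are read as a float. A quick check under that assumption (asInt32 is a hypothetical debugging helper, not part of this repo):

#include <cstdint>
#include <cstring>

// Reinterpret the 4 bytes of a float as an int32, bit for bit.
int32_t asInt32(float f) {
    int32_t i;
    std::memcpy(&i, &f, sizeof i);
    return i;
}

// e.g. asInt32(2.803e-45f) == 2, which would be a plausible class id.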

I would appreciate it if you could tell me how to extract the inference results correctly.

cyrusbehr commented 1 year ago

Can you provide the ONNX model?

MingiKang commented 1 year ago

I can't upload it because it's too big.

However, it is just the yolov7.pt file converted to yolov7.onnx.

The export options are below.

pt -> onnx:

export.py --weights ./yolov7.pt --grid --end2end --dynamic-batch --simplify --topk-all 100 --iou-thres 0.65 --conf-thres 0.35 --img-size 640 640 --batch-size 4 --device "0,1"

Note that I used this version: https://github.com/WongKinYiu/yolov7

yolov7.pt download path: https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt

onnx -> trt:

trtexec --onnx=yolov7.onnx --fp16 --workspace=20000 --buildOnly --saveEngine=yolov7.trt

In addition, I also converted the ONNX model to TRT with this repo's code, but the result was the same.

cyrusbehr commented 1 year ago

Have a look at my TensorRT YOLOv8 project, where I use this repo to run inference behind the scenes: https://github.com/cyrusbehr/YOLOv8-TensorRT-CPP

MingiKang commented 1 year ago

I also ran the YOLOv8 project you linked.

But the output format is different.

Whereas my existing model outputs separate num_dets, det_boxes, det_scores, and det_classes tensors, I understand the linked source returns the boxes/scores/classes values in a single array (a rough decode sketch of that layout follows).
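For context, a sketch of decoding such a single-array layout, assuming the common YOLOv8 raw output of 4 box values plus 80 class scores per candidate, stored attribute-major; this is an assumption about the layout, not the linked repo's exact code:

#include <vector>

// Sketch (assumption): decode a flattened [84 x N] output where each of the
// N candidates has cx, cy, w, h followed by 80 class scores, stored per
// attribute. Hypothetical helper; run NMS on the result afterwards.
struct Candidate { float cx, cy, w, h, score; int classId; };

std::vector<Candidate> decodeSingleArray(const std::vector<float>& out,
                                         size_t n, float scoreThresh) {
    const int numClasses = 80;
    std::vector<Candidate> kept;
    for (size_t i = 0; i < n; ++i) {
        int bestClass = 0;
        float bestScore = 0.f;
        for (int c = 0; c < numClasses; ++c) {
            const float s = out[(4 + c) * n + i];
            if (s > bestScore) { bestScore = s; bestClass = c; }
        }
        if (bestScore < scoreThresh) continue;
        kept.push_back({out[0 * n + i], out[1 * n + i],
                        out[2 * n + i], out[3 * n + i],
                        bestScore, bestClass});
    }
    return kept;
}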

Another question: why do the values come out garbled like this? Is there any way to get them to come out normally?

cyrusbehr commented 1 year ago

@MingiKang sorry for the delay. Looking over your comment again, I see that you included this flag in your conversion command: --end2end

--end2end means the model is exported with built-in NMS. My implementation does not support that, as the NMS step is done programmatically, external to the model. You should therefore re-export without that flag and try again.
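For illustration, a minimal sketch of what that external NMS step can look like using OpenCV's cv::dnn::NMSBoxes (variable names are illustrative assumptions, not this repo's exact code; the thresholds mirror the --conf-thres/--iou-thres export flags above):

#include <opencv2/dnn.hpp>
#include <vector>

// Sketch: run NMS on decoded candidates outside the model, as a
// non-end2end export requires.
std::vector<cv::Rect> boxes;   // decoded candidate boxes
std::vector<float> scores;     // corresponding confidences
std::vector<int> keep;         // indices of boxes surviving NMS
cv::dnn::NMSBoxes(boxes, scores, /*score_threshold=*/0.35f,
                  /*nms_threshold=*/0.65f, keep);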

I will close this issue for now, but if you continue to have problems, feel free to reopen.