ultralytics / ultralytics

NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com

YOLOv8 not working in example ultralytics/examples/YOLOv8-CPP-Inference #11688

Open mohkan1 opened 1 week ago

mohkan1 commented 1 week ago

Search before asking

YOLOv8 Component

Other

Bug

The project structure of ultralytics/examples/YOLOv8-CPP-Inference

[image: project structure of ultralytics/examples/YOLOv8-CPP-Inference]

The code in the example ultralytics/examples/YOLOv8-CPP-Inference/main.cpp


#include <iostream>
#include <vector>
#include <getopt.h>

#include <opencv2/opencv.hpp>

#include "inference.h"

using namespace std;
using namespace cv;

int main(int argc, char ** argv)
{
    std::string projectBasePath =
      "/home/bober/Desktop/douchebag/ultralytics/examples/YOLOv8-CPP-Inference/";                             // Set your ultralytics base path

    bool runOnGPU = true;

    //
    // Pass in either:
    //
    // "yolov8s.onnx" or "yolov5s.onnx"
    //
    // To run Inference with yolov8/yolov5 (ONNX)
    //

    // Note that in this example the classes are hard-coded and 'classes.txt' is a place holder.
    Inference inf(projectBasePath + "yolov5s.onnx", cv::Size(640, 480),
      projectBasePath + "classes.txt",
      runOnGPU);

    std::vector<std::string> imageNames;
    imageNames.push_back(projectBasePath + "bus.jpg");
    imageNames.push_back(projectBasePath + "zidane.jpg");

    for (size_t i = 0; i < imageNames.size(); ++i) {
        cv::Mat frame = cv::imread(imageNames[i]);

        // Inference starts here...
        std::vector<Detection> output = inf.runInference(frame);

        int detections = output.size();
        std::cout << "Number of detections:" << detections << std::endl;

        for (int d = 0; d < detections; ++d) {
            Detection detection = output[d];

            cv::Rect box = detection.box;
            cv::Scalar color = detection.color;

            // Detection box
            cv::rectangle(frame, box, color, 2);

            // Detection box text
            std::string classString = detection.className + ' ' + std::to_string(
                detection.confidence).substr(0, 4);
            cv::Size textSize = cv::getTextSize(classString, cv::FONT_HERSHEY_DUPLEX, 1, 2, 0);
            cv::Rect textBox(box.x, box.y - 40, textSize.width + 10, textSize.height + 20);

            cv::rectangle(frame, textBox, color, cv::FILLED);
            cv::putText(frame, classString, cv::Point(box.x + 5, box.y - 10),
                cv::FONT_HERSHEY_DUPLEX, 1, cv::Scalar(0, 0, 0), 2, 0);
        }
        // Inference ends here...

        // This is only for preview purposes
        float scale = 0.8;
        cv::resize(frame, frame, cv::Size(frame.cols * scale, frame.rows * scale));
        cv::imshow("Inference", frame);

        cv::waitKey(-1);
    }
}

When using YOLOv5, it yields the following results:

[image: YOLOv5 inference results]

But when using YOLOv8, it yields the following:

[image: YOLOv8 inference results]

Any idea why the YOLOv8 model is not working here?

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

github-actions[bot] commented 1 week ago

👋 Hello @mohkan1, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 1 week ago

@mohkan1 hey there! Thanks for providing detailed information about the issue you're encountering with YOLOv8 in the YOLOv8-CPP-Inference example.

From your description, it looks like switching from the YOLOv5 model to the YOLOv8 model produces incorrect results or possibly an error. A common cause of such problems is a difference in the models' input and output structure, or specifics of the ONNX export.

Let's start troubleshooting with the following:

  1. Model Input/Output Check: Ensure the model inputs and outputs are correctly configured for YOLOv8. Differences in input dimensions or preprocessing could cause issues.

  2. ONNX Model Verification: Double-check that the YOLOv8 ONNX model was correctly converted and isn't corrupted. Re-export it if necessary.

  3. Code Adjustments: Make sure all model-specific parameters (e.g., input size, class names, anchors, etc.) align with YOLOv8's specifications.

  4. Dependencies: Confirm that all dependencies, particularly OpenCV and ONNX, are up to date, as outdated versions might lead to unexpected behavior.

If these steps don't resolve the issue, please provide any error messages or odd behaviors you notice when you switch to YOLOv8. This will help further narrow down the problem! 🛠️
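For anyone comparing the two exports, here is a minimal sketch (not part of the original thread) of how the ONNX output shape can be inspected with OpenCV's DNN module, the same API this example builds on. The shapes in the comments are typical for COCO-trained models exported at the default 640x640 image size and are an assumption here, not values confirmed in this issue.

#include <iostream>
#include <vector>
#include <opencv2/opencv.hpp>

// Prints the shape of the first output tensor of an ONNX detection model.
// Typical shapes (assumed COCO models exported at 640x640):
//   YOLOv5: 1 x 25200 x 85  (rows = predictions, cols = box + objectness + classes)
//   YOLOv8: 1 x 84 x 8400   (rows = box + classes, cols = predictions)
// so a YOLOv8 output usually has to be transposed before YOLOv5-style parsing.
int main(int argc, char ** argv)
{
    cv::dnn::Net net = cv::dnn::readNetFromONNX(argv[1]);

    // Dummy input at the assumed export resolution.
    cv::Mat dummy(640, 640, CV_8UC3, cv::Scalar(0, 0, 0));
    cv::Mat blob;
    cv::dnn::blobFromImage(dummy, blob, 1.0 / 255.0, cv::Size(640, 640),
        cv::Scalar(), true, false);
    net.setInput(blob);

    std::vector<cv::Mat> outputs;
    net.forward(outputs, net.getUnconnectedOutLayersNames());

    std::cout << "Output dims: " << outputs[0].size[0] << " x "
              << outputs[0].size[1] << " x " << outputs[0].size[2] << std::endl;
    return 0;
}

If the second dimension is much smaller than the third, the model is producing the YOLOv8-style layout and the parsing code must account for it.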

mohkan1 commented 1 week ago

@glenn-jocher Thanks for your suggestions, I have solved the issue. It was a wrong model input/output configuration.
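The exact change was not posted, but for readers hitting the same symptom, a plausible fix, assuming the ONNX model was exported at the default 640x640 image size, is to pass a matching square input size to the example's Inference class instead of the 640x480 used in main.cpp:

// Hypothetical fix (not confirmed in this thread): make the input size passed to
// Inference match the size the ONNX model was exported with. YOLOv8 models are
// exported at 640x640 by default, while the example's main.cpp passes 640x480.
Inference inf(projectBasePath + "yolov8s.onnx", cv::Size(640, 640),
    projectBasePath + "classes.txt", runOnGPU);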

But now I have a new issue and am wondering if you have any idea why this is happening and how to solve it.

I have set the variable runOnGPU to true, but it always switches to CPU.

[image: output showing inference falling back to CPU]
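Not an answer from the thread, but a common cause when OpenCV's DNN CUDA backend is requested (which inference.cpp in this example does when runOnGPU is true): cv::dnn silently falls back to CPU if the installed OpenCV build was compiled without CUDA/cuDNN support. A minimal check, assuming an OpenCV 4.x build, might look like this:

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/core/cuda.hpp>

int main()
{
    // If this prints 0, the installed OpenCV build has no CUDA support and
    // cv::dnn will run on CPU regardless of the requested backend/target.
    std::cout << "CUDA-enabled devices visible to OpenCV: "
              << cv::cuda::getCudaEnabledDeviceCount() << std::endl;

    // Inspect the build flags; look for "NVIDIA CUDA: YES" and "cuDNN: YES".
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}

Prebuilt OpenCV packages (pip opencv-python, most apt builds) do not ship the CUDA DNN backend, so OpenCV typically has to be compiled from source with -DWITH_CUDA=ON and -DOPENCV_DNN_CUDA=ON for runOnGPU to take effect.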