ultralytics / ultralytics

NEW - YOLOv8 🚀 in PyTorch > ONNX > OpenVINO > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

model.predictor returns None #12692

Closed PatilMayurS closed 1 week ago

PatilMayurS commented 2 weeks ago

Question

model.predictor returns None instead of "ultralytics.models.yolo.detect.predict.DetectionPredictor"

from ultralytics import YOLO
model = YOLO("yolov8n.pt")
model.predictor  

returns None.

I am trying to replace the model.predictor.inference method with a custom function that uses an OpenVINO GPU-compiled model, as below:

import torch

def infer(*args):
    # run the OpenVINO compiled model and convert its output back to a torch tensor
    result = det_compiled_model(args)
    return torch.from_numpy(result[0])

det_model.predictor.inference = infer
det_model.predictor.model.pt = False
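The None result above comes from lazy initialization: YOLO only builds its predictor on the first prediction call. A minimal stand-in sketch of that behavior (StubYOLO and StubPredictor are illustrative classes, not the real Ultralytics ones):

```python
class StubPredictor:
    """Stands in for ultralytics' DetectionPredictor."""
    def inference(self, batch):
        return ["default result"]

class StubYOLO:
    """Stands in for ultralytics.YOLO; the predictor is created lazily."""
    def __init__(self):
        self.predictor = None  # mirrors the None the question observes

    def __call__(self, source):
        if self.predictor is None:       # first call builds and caches the predictor
            self.predictor = StubPredictor()
        return self.predictor.inference(source)

model = StubYOLO()
before = model.predictor    # None before any prediction
model("image.jpg")          # warm-up call initializes the predictor
after = model.predictor     # now a predictor instance that can be patched
```

In the real library the same idea applies: running one prediction first populates model.predictor, after which its inference attribute can be reassigned.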

Additional

No response

github-actions[bot] commented 2 weeks ago

👋 Hello @PatilMayurS, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).

If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 1 week ago

Hello! It looks like you're trying to access the predictor attribute directly from the YOLO model instance, which isn't initialized by default. You need to explicitly create a DetectionPredictor instance and pass the necessary arguments. Here's how you can do it:

from ultralytics import YOLO
from ultralytics.models.yolo.detect import DetectionPredictor

model = YOLO("yolov8n.pt")
predictor = DetectionPredictor(overrides={"model": "yolov8n.pt"})  # configured via overrides; there is no model= keyword argument

Now, you can modify the inference method of your predictor as needed:

def infer(*args):
    result = det_compiled_model(args)
    return torch.from_numpy(result[0])

predictor.inference = infer

This should set up your custom inference function correctly. Let me know if you need further assistance! 😊
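The patch itself is ordinary instance-level attribute assignment, so it affects only the predictor it is applied to. A self-contained sketch with a stub class in place of DetectionPredictor (all names here are illustrative):

```python
class StubPredictor:
    """Stand-in for DetectionPredictor; the real class works the same way for patching."""
    def inference(self, *args):
        return "original backend"

def infer(*args):
    # custom backend; in the real code this would call an OpenVINO compiled model
    return "custom backend"

predictor = StubPredictor()
predictor.inference = infer    # re-binds inference on this instance only
other = StubPredictor()        # other instances keep the original method

patched = predictor.inference("img")    # routed to the custom function
unpatched = other.inference("img")      # still the class method
```

Because the assignment shadows the class method on one instance, other predictors (and the class itself) are left untouched, which is usually what you want when swapping in a device-specific backend.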

PatilMayurS commented 1 week ago

Thanks Glenn. Yes, you are correct. I had tried this method and it worked.

glenn-jocher commented 1 week ago

Great to hear that it worked for you! If you have any more questions or need further assistance as you continue with your project, feel free to reach out. Happy coding! 😊