OlafenwaMoses / ImageAI

A python library built to empower developers to build applications and systems with self-contained Computer Vision capabilities
https://www.genxr.co/#products
MIT License

AttributeError: 'ObjectDetection' object has no attribute 'setModelTypeAsYOLOv3' #62

Closed · angelo337 closed this issue 6 years ago

angelo337 commented 6 years ago

Hi there, when I try to run the example FirstObjectDetection.py I am getting this error:

AttributeError: 'ObjectDetection' object has no attribute 'setModelTypeAsYOLOv3'

Could you please point me to some resource to solve this? Thanks, angelo

OlafenwaMoses commented 6 years ago

Do you have the latest version (2.0.2) of ImageAI installed?

angelo337 commented 6 years ago

Sure:

```
pip3 install https://github.com/OlafenwaMoses/ImageAI/releases/download/2.0.2/imageai-2.0.2-py3-none-any.whl
Requirement already satisfied: imageai==2.0.2 from https://github.com/OlafenwaMoses/ImageAI/releases/download/2.0.2/imageai-2.0.2-py3-none-any.whl in /home/user/.local/lib/python3.5/site-packages (2.0.2)
```

shenyp09 commented 6 years ago

Seems the object detection in ImageAI only supports RetinaNet...

OlafenwaMoses commented 6 years ago

ImageAI supports RetinaNet, YOLOv3 and TinyYOLOv3. I would advise you to uninstall ImageAI and install it again.
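
For reference, a minimal sketch of the kind of YOLOv3 detection script FirstObjectDetection.py represents, assuming ImageAI 2.0.2 and a downloaded YOLOv3 weights file (the file names below are placeholders):

```python
from imageai.Detection import ObjectDetection

detector = ObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("yolo.h5")  # placeholder: path to the downloaded YOLOv3 weights file
detector.loadModel()

# Detect objects in a sample image and write an annotated copy (placeholder file names)
detections = detector.detectObjectsFromImage(
    input_image="image.jpg",
    output_image_path="image_detected.jpg"
)
for each_object in detections:
    print(each_object["name"], ":", each_object["percentage_probability"])
```

If setModelTypeAsYOLOv3 is still missing after a clean reinstall, the installed package is most likely an older release than 2.0.2.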

angelo337 commented 6 years ago

I just followed your advice and it is currently working. Thanks a lot, angelo

OlafenwaMoses commented 6 years ago

You are welcome.

ksuraj82 commented 5 years ago

I am using my ResNet-152 model to detect objects, but it is showing an error:

ValueError: You are trying to load a weight file containing 466 layers into a model with 116 layers.

How do I change detector.setModelTypeAsRetinaNet() to a ResNet model?

Ashokdevarajan commented 5 years ago

I need to build object detection with other models. Does ImageAI have options for other models like Inception or MobileNet?

OlafenwaMoses commented 5 years ago

Please visit the tutorial linked below. You can now create a custom detection model with ImageAI.

https://medium.com/deepquestai/train-object-detection-ai-with-6-lines-of-code-6d087063f6ff
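
The tutorial's training step boils down to roughly the following sketch (the data directory, class names, and pretrained weights file are placeholders from the tutorial's example; a recent ImageAI release that includes the Detection.Custom module is assumed):

```python
from imageai.Detection.Custom import DetectionModelTrainer

trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
# Dataset folder containing "train" and "validation" sub-folders with images and annotations (placeholder name)
trainer.setDataDirectory(data_directory="hololens")
trainer.setTrainConfig(
    object_names_array=["hololens"],                      # classes annotated in the dataset
    batch_size=4,
    num_experiments=100,
    train_from_pretrained_model="pretrained-yolov3.h5"    # transfer learning from a pretrained YOLOv3
)
trainer.trainModel()
```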

AfiaFaith commented 1 year ago

Hello guys,

After training my dataset on Google Colab, I am trying to apply my model to my webcam but I am getting this error:


```
AttributeError                            Traceback (most recent call last)
C:\Users\AFIAFA~1\AppData\Local\Temp/ipykernel_19392/3255609503.py in
      2 from imageai import Detection
      3 yolo = Detection.ObjectDetection()
----> 4 yolo.setModelTypeAsYOLOv5()
      5 yolo.setModelPath(modelpath)
      6 yolo.loadModel()

AttributeError: 'ObjectDetection' object has no attribute 'setModelTypeAsYOLOv5'
```

I am new to computer vision and Python.

Below is my code.

```python
modelpath = "C:/Users/xxxxx/Downloads/project/best.pt"

from imageai import Detection
yolo = Detection.ObjectDetection()
yolo.setModelTypeAsYOLOv5()
yolo.setModelPath(modelpath)
yolo.loadModel()

import cv2
cam = cv2.VideoCapture(0)  # 0=front-cam, 1=back-cam
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1300)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 1500)

while True:
    ## read frames
    ret, img = cam.read()
    ## predict yolo
    img, preds = yolo.detectCustomObjectsFromImage(input_image=img,
                                                   custom_objects=None,
                                                   input_type="array",
                                                   output_type="array",
                                                   minimum_percentage_probability=70,
                                                   display_percentage_probability=False,
                                                   display_object_name=True)
    ## display predictions
    cv2.imshow("", img)
    ## press q or Esc to quit
    if (cv2.waitKey(1) & 0xFF == ord("q")) or (cv2.waitKey(1) == 27):
        break

## close camera
cam.release()
cv2.destroyAllWindows()
```

OlafenwaMoses commented 1 year ago

@AfiaFaith ImageAI doesn't support YOLOv5 at the moment. Use setModelTypeAsYOLOv3 instead.
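
In other words, only the model-type line in the snippet above needs to change, assuming the model file is one that ImageAI's YOLOv3 loader can read (the path below is a placeholder):

```python
from imageai import Detection

modelpath = "C:/Users/xxxxx/Downloads/project/yolov3.pt"  # placeholder: path to a YOLOv3-compatible model file
yolo = Detection.ObjectDetection()
yolo.setModelTypeAsYOLOv3()  # YOLOv5 is not supported, so declare YOLOv3 instead
yolo.setModelPath(modelpath)
yolo.loadModel()
```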

AfiaFaith commented 1 year ago

ok, got another error:

ModuleNotFoundError: No module named 'models'

AfiaFaith commented 1 year ago

Thank you. It's working now!

LOActualControl commented 1 year ago

Hi @AfiaFaith! How did you end up solving the ModuleNotFoundError: No module named 'models' error? I'm getting the same error with the code below (the traceback follows). Thanks!

When attempting to perform Object Detection using a pre-trained YoloV3 model, I am getting a ModuleNotFoundError. I'm attempting to detect a single class ("boat") from the 80 pre-trained classes available. My code is below (taken from the ImageAI documentation):

```python
from imageai.Detection import ObjectDetection

detector = ObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("/content/gdrive/MyDrive/shipspotting_revision/yolov3.pt")
detector.loadModel()
custom = detector.CustomObjects(boat=True)
detections = detector.detectCustomObjectsFromImage(custom_objects=custom,
                                                   input_image="/content/gdrive/MyDrive/shipspotting_revision/rostock/output/livetest/test[1463].jpg",
                                                   output_image_path="/content/gdrive/MyDrive/shipspotting_revision/rostock/output/live_test/test_new.jpg")

for eachObject in detections:
    print(eachObject["name"], " : ", eachObject["percentage_probability"], " : ", eachObject["box_points"])
```

The traceback for the error I'm getting is below:

```
ModuleNotFoundError                       Traceback (most recent call last)
in
      4 detector.setModelTypeAsYOLOv3()
      5 detector.setModelPath("/content/gdrive/MyDrive/shipspotting_revision/yolov3.pt")
----> 6 detector.loadModel()
      7 custom = detector.CustomObjects(boat=True)
      8 detections = detector.detectCustomObjectsFromImage(custom_objects=custom, input_image="/content/gdrive/MyDrive/shipspotting_revision/rostock/output/livetest/test[1463].jpg", output_image_path="/content/gdrive/MyDrive/shipspotting_revision/rostock/output/live_test/test_new.jpg")

3 frames
/usr/local/lib/python3.8/dist-packages/torch/serialization.py in find_class(self, mod_name, name)
   1122                 pass
   1123             mod_name = load_module_mapping.get(mod_name, mod_name)
-> 1124             return super().find_class(mod_name, name)
   1125
   1126     # Load the data (which may in turn use persistent_load to load tensors)

ModuleNotFoundError: No module named 'models'
```

xxxrohidh commented 10 months ago

```
self.save_detection(frame)
AttributeError: 'Detection' object has no attribute 'save_detection'
```

I am getting this error. I've been trying everything but I couldn't find a solution.

This is my code:

```python
from PyQt5.QtCore import QThread, Qt, pyqtSignal
from PyQt5.QtGui import QImage
import cv2
import numpy as np
import time

class Detection(QThread):

    def __init__(self):
        super(Detection, self).__init__()   

    changePixmap = pyqtSignal(QImage)

    def run(self):
        self.running = True

        net = cv2.dnn.readNet("weights/yolov4.weights", "cfg/yolov4.cfg")
        classes = []

        with open("obj.names", "r") as f:
            classes = [line.strip() for line in f.readlines()]

        layer_names = net.getLayerNames()
        output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]
        colors = np.random.uniform(0, 255, size=(len(classes), 3))

        font = cv2.FONT_HERSHEY_PLAIN
        starting_time = time.time() - 11

        cap = cv2.VideoCapture(0)

        while self.running:
            ret, frame = cap.read()
            if ret:
                height, width, channels = frame.shape
                blob = cv2.dnn.blobFromImage(frame, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
                net.setInput(blob)
                outs = net.forward(output_layers)
                class_ids = []
                confidences = []    
                boxes = []
                for out in outs:
                    for detection in out:
                        scores = detection[5:]
                        class_id = np.argmax(scores)
                        confidence = scores[class_id]

                        if confidence > 0.98:

                            # Calculating coordinates
                            center_x = int(detection[0] * width)
                            center_y = int(detection[1] * height)
                            w = int(detection[2] * width)
                            h = int(detection[3] * height)

                            # Rectangle coordinates
                            x = int(center_x - w / 2)
                            y = int(center_y - h / 2)

                            boxes.append([x, y, w, h])
                            confidences.append(float(confidence))
                            class_ids.append(class_id)

                indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.8, 0.3)
                for i in range(len(boxes)):
                    if i in indexes:
                        x, y, w, h = boxes[i]
                        label = str(classes[class_ids[i]])
                        confidence = confidences[i]
                        color = (256, 0, 0)
                        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
                        cv2.putText(frame, label + " {0:.1%}".format(confidence), (x, y - 20), font, 3, color, 3)
                        elapsed_time = starting_time - time.time()

                        if elapsed_time <= -10:
                            starting_time = time.time()
                            self.save_detection(frame)

                rgbImage = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                bytesPerLine = channels * width
                convertToQtFormat = QImage(rgbImage.data, width, height, bytesPerLine, QImage.Format_RGB888)
                p = convertToQtFormat.scaled(854, 480, Qt.KeepAspectRatio)
                self.changePixmap.emit(p)

def save_detection(self, frame):
        cv2.imwrite("saved_frame/frame.jpg", frame)
        print('Frame Saved')
```