sergiomsilva / alpr-unconstrained

License Plate Detection and Recognition in Unconstrained Scenarios

How to recognize number plate on a Video #89

Open fadi212 opened 5 years ago

fadi212 commented 5 years ago

Hey Sergio, I have gone through the project and it works amazingly well for images. Can you guide me on how to do this for a video? Do we have to write four separate scripts for vehicle detection, number plate detection, recognition and so on, or is there another way? Please guide.

maryamanwer commented 5 years ago

Hi, I also want to know how to detect from a video.

fadi212 commented 5 years ago

I looked into it a bit and it seems like this model does not support video inference as of now. Maybe they will add that in the future.

ggsggs commented 5 years ago
  1. Google how to extract frames (~images) from a video; it can be done in ~10 lines with OpenCV (see the sketch below).
  2. Put all the frames in your test folder.
  3. Run the test script.
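
A minimal sketch of step 1, assuming OpenCV is installed; the video path and the frames folder below are placeholders, not files from this repository:

import os
import cv2

video_path = 'my_video.mp4'          # placeholder: your input video
frames_dir = 'samples/video-frames'  # placeholder: folder to pass to the test scripts
if not os.path.isdir(frames_dir):
    os.makedirs(frames_dir)

cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ret, frame = cap.read()
    if not ret:          # end of the video
        break
    cv2.imwrite(os.path.join(frames_dir, 'frame_%06d.jpg' % idx), frame)
    idx += 1
cap.release()
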
fadi212 commented 5 years ago

Yes, that's the way. I have applied that and it works. But can you please add a script that performs inference on a video in real time?

adithya-tp commented 4 years ago

Good day! I hacked together a solution for real-time inference. You can make a new Python file in the alpr-unconstrained directory. For video inference, simply replace the parameter of cv2.VideoCapture() with the path to your video, and run the code with Python 2. Do let me know if you run into problems. Enjoy!

import sys, os
import traceback

import cv2
import numpy as np
import keras

import darknet.python.darknet as dn
from darknet.python.darknet import detect
from src.label import Shape, writeShapes, dknet_label_conversion
from src.utils import im2single, nms
from src.keras_utils import load_model, detect_lp

def adjust_pts(pts,lroi):
    return pts*lroi.wh().reshape((2,1)) + lroi.tl().reshape((2,1))

if __name__ == '__main__':

    try:
            cap = cv2.VideoCapture(0)
            output_dir = 'lp_images/'
            # make sure the folder for the cropped plate exists, otherwise the
            # OCR step later fails with: Cannot load image ... can't fopen
            if not os.path.isdir(output_dir):
                    os.makedirs(output_dir)

            lp_threshold = .5
            wpod_net_path = 'data/lp-detector/wpod-net_update1.h5'
            wpod_net = load_model(wpod_net_path)
            ocr_threshold = .4
            ocr_weights = 'data/ocr/ocr-net.weights'
            ocr_netcfg  = 'data/ocr/ocr-net.cfg'
            ocr_dataset = 'data/ocr/ocr-net.data'
            ocr_net  = dn.load_net(ocr_netcfg, ocr_weights, 0)
            ocr_meta = dn.load_meta(ocr_dataset)

            while(cap.isOpened()):
                    ret, frame = cap.read()
                    if not ret:
                            # end of the video stream (or a dropped frame)
                            break
                    # frame.shape is (rows, cols, channels), so w below is the image
                    # height and h the width; the rectangle drawing relies on that
                    w = frame.shape[0]
                    h = frame.shape[1]
                    ratio = float(max(frame.shape[:2]))/min(frame.shape[:2])
                    side  = int(ratio*288.)
                    bound_dim = min(side + (side%(2**4)),608)

                    Llp,LlpImgs,_ = detect_lp(wpod_net,im2single(frame),bound_dim,2**4,(240,80),lp_threshold)
                    cv2.imshow('detected_plate', frame)
                    if len(LlpImgs):
                            Ilp = LlpImgs[0]
                            s = Shape(Llp[0].pts)
                            for shape in [s]:
                                ptsarray = shape.pts.flatten()
                                try:
                                    frame = cv2.rectangle(frame,(int(ptsarray[0]*h), int(ptsarray[5]*w)),(int(ptsarray[1]*h),int(ptsarray[6]*w)),(0,255,0),3)
                                    cv2.imshow('detected_plate', frame)
                                except:
                                    traceback.print_exc()
                                    sys.exit(1)
                            cv2.imwrite('%s/_lp.png' % (output_dir),Ilp*255.)
                            cv2.imshow('lp_bic', Ilp)
                            R,(width,height) = detect(ocr_net, ocr_meta, 'lp_images/_lp.png' ,thresh=ocr_threshold, nms=None)
                            if len(R):

                                    L = dknet_label_conversion(R,width,height)
                                    L = nms(L,.45)

                                    L.sort(key=lambda x: x.tl()[0])
                                    lp_str = ''.join([chr(l.cl()) for l in L])
                                    print("License Plate Detected: ", lp_str)
                    if cv2.waitKey(5) & 0xFF == ord('q'):
                        break
            cap.release()
            cv2.destroyAllWindows()
    except:
        traceback.print_exc()
        sys.exit(1)
    sys.exit(0)
naveenchepuri commented 4 years ago

Hello Aditya,

Thanks for the work. I am running this code on Google Colab, but I am getting the following error:

Cannot load image "lp_images/_lp.png" STB Reason: can't fopen

Also, Colab doesn't support opening display windows. Can you please help me with reading a video and saving the output on Colab?

Thanks in advance.

adithya-tp commented 4 years ago

> Cannot load image "lp_images/_lp.png" STB Reason: can't fopen [...] Can you please help me with reading a video and saving the output on Colab?

Hey there! The path issue can probably be solved by changing the value of the "output_dir" variable at the beginning of the try block. However, I don't think you'll be able to see the live video in a window on Colab. You can read in a video by passing its path to cv2.VideoCapture(video_path), and comment out the cv2.imshow() statements, since Colab doesn't support them.

I would suggest running it locally even if you've only got a CPU. A workaround for the lack of processing power is to process only every 5th or 10th frame: keep a counter variable x = 0, increment it on every frame, and add a condition before the plate-detection part, as in the snippet below.

if x % 5 == 0:
    Llp,LlpImgs,_ = detect_lp(wpod_net,im2single(frame),bound_dim,2**4,(240,80),lp_threshold)
    ...
    ...
x += 1
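
For Colab, where cv2.imshow() isn't available, a minimal sketch of writing the annotated frames to an output video instead (this assumes OpenCV 3+ for VideoWriter_fourcc; the input/output paths are placeholders):

import cv2

video_path = 'input.mp4'      # placeholder: your input video
out_path = 'annotated.mp4'    # placeholder: where the result is written

cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # ... run the plate detection / rectangle drawing from the script above on frame ...
    writer.write(frame)

cap.release()
writer.release()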
naveenchepuri commented 4 years ago

Hi Aditya,

Thanks for the help. It worked :).

I identified one issue with the video logic: the code picks up any text visible in the video, because it looks for the number plate directly and does not detect the vehicle first.

Let me know if you get a chance to add code that detects the vehicle first and then the number plate. I will also try to add the logic.

Thanks,
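
A minimal sketch of that two-stage idea, based on the repository's vehicle-detection.py (the class filter, the crop_region/Label helpers and the data/vehicle-detector/ paths come from that script; treat the exact signatures and thresholds as assumptions to verify). It still writes a temporary frame to disk because the stock detect() only takes a filename; the darknet.py modification further down in this thread removes that need.

import cv2
import numpy as np
import darknet.python.darknet as dn
from darknet.python.darknet import detect
from src.utils import im2single, crop_region
from src.keras_utils import detect_lp
from src.label import Label

vehicle_threshold = .5
vehicle_net  = dn.load_net('data/vehicle-detector/yolo-voc.cfg',
                           'data/vehicle-detector/yolo-voc.weights', 0)
vehicle_meta = dn.load_meta('data/vehicle-detector/voc.data')

def plates_in_frame(frame, wpod_net, lp_threshold=.5):
    # stage 1: find vehicles in the full frame
    cv2.imwrite('frame.png', frame)
    R, _ = detect(vehicle_net, vehicle_meta, 'frame.png', thresh=vehicle_threshold)
    R = [r for r in R if r[0] in ['car', 'bus']]
    WH = np.array(frame.shape[1::-1], dtype=float)
    plates = []
    for name, prob, (cx, cy, w, h) in R:
        # normalized top-left / bottom-right of the vehicle box, then crop it
        tl = np.array([cx - w/2., cy - h/2.]) / WH
        br = np.array([cx + w/2., cy + h/2.]) / WH
        Icar = crop_region(frame, Label(0, tl, br))
        # stage 2: run WPOD-NET only on the vehicle crop
        ratio = float(max(Icar.shape[:2])) / min(Icar.shape[:2])
        side = int(ratio * 288.)
        bound_dim = min(side + (side % (2**4)), 608)
        Llp, LlpImgs, _ = detect_lp(wpod_net, im2single(Icar), bound_dim,
                                    2**4, (240, 80), lp_threshold)
        if len(LlpImgs):
            plates.append(LlpImgs[0])
    return plates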

yashwant43 commented 4 years ago

> I identified one issue with the video logic: the code picks up any text visible in the video, because it looks for the number plate directly and does not detect the vehicle first. [...]

same problem

kid-pc-chen commented 4 years ago

Just modify darknet/python/darknet.py and then you can pass an OpenCV image (a numpy array) to darknet's detect function.

Step 1. Add 'def array_to_image(arr)' as follows (make sure darknet.py imports numpy as np, if it doesn't already):

def array_to_image(arr):
    # also return arr so Python keeps a reference to it and doesn't free the
    # buffer that the darknet IMAGE struct points into
    arr = arr.transpose(2, 0, 1)
    c, h, w = arr.shape[0:3]
    arr = np.ascontiguousarray(arr.flat, dtype=np.float32) / 255.0
    data = arr.ctypes.data_as(POINTER(c_float))
    im = IMAGE(w, h, c, data)
    return im, arr

Step 2. Change the original 'def detect(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45)' as follows:

def detect(net, meta, image, thresh=.5, hier_thresh=.5, nms=.45):
    if isinstance(image, bytes):
        # image is a filename
        # i.e. image = b'/darknet/data/dog.jpg'
        im = load_image(image, 0, 0)
    else:
        # image is an nparray
        # i.e. image = cv2.imread('/darknet/data/dog.jpg')
        im, image = array_to_image(image)
        rgbgr_image(im)
    num = c_int(0)
    pnum = pointer(num)
    predict_image(net, im)
    dets = get_network_boxes(net, im.w, im.h, thresh,
                             hier_thresh, None, 0, pnum)
    num = pnum[0]
    if nms:
        do_nms_obj(dets, num, meta.classes, nms)

    res = []
    for j in range(num):
        a = dets[j].prob[0:meta.classes]
        if any(a):
            ai = np.array(a).nonzero()[0]
            for i in ai:
                b = dets[j].bbox
                res.append((meta.names[i], dets[j].prob[i],
                            (b.x, b.y, b.w, b.h)))

    res = sorted(res, key=lambda x: -x[1])
    wh = (im.w, im.h)
    if isinstance(image, bytes):
        # only free the image when darknet allocated it via load_image();
        # an array passed in is owned by the caller
        free_image(im)
    free_detections(dets, num)
    return res, wh
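
With that change, the OCR call in the video script above can skip the cv2.imwrite / detect-from-file round trip; a sketch, assuming the variables from that script (Ilp is the rectified plate crop as a float image in [0, 1]):

plate_bgr = (Ilp * 255.).astype(np.uint8)   # detect() now accepts a BGR uint8 array
R, (width, height) = detect(ocr_net, ocr_meta, plate_bgr, thresh=ocr_threshold, nms=None)
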
saswat0 commented 4 years ago

This worked for me:

https://github.com/saswat0/License-Plate-Recognition/blob/master/alpr/video.py

Fahad-Alsabr commented 3 years ago

> Good Day! I hacked together a solution for real time inference. [full comment and script quoted above]

I tried the code on my local device and it ran, but I don't think it worked properly. These are my results:

mask_scale: Using default '1.000000'
Loading weights from data/ocr/ocr-net.weights...Done!

(detected_plate:579): Gtk-WARNING **: cannot open display:

The video I used is https://www.youtube.com/watch?v=hv94fk7ldS8&ab_channel=AutoExpress. I don't know why it shows only one plate, and only part of it.

Fahad-Alsabr commented 3 years ago

> This worked for me: https://github.com/saswat0/License-Plate-Recognition/blob/master/alpr/video.py

This one worked for me too. I downloaded the repo and created two directories named "test_input" and "test_output" inside the alpr folder.