pageauc / speed-camera

A Unix, Windows, and Raspberry Pi object speed camera using Python, OpenCV, video streaming, and motion tracking. Includes a standalone web server interface, image search using OpenCV template matching, and a whiptail admin menu interface. Includes picam and webcam plugins for motion-track security camera configuration, including an rclone sync script. The watch-app allows remote camera configuration via a remote storage service. Uses sqlite3 and gnuplot for reporting. Recently added OpenALPR license plate reader support.

Working on implementing Vehicle Detection #135

Closed pageauc closed 9 months ago

pageauc commented 1 year ago

OK

Speed Camera calculates object speed by tracking the largest moving object greater than a minimum pixel area, per the config.py settings. The tracking logic tries to filter the data to verify a good track.
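
For readers new to the project, the core idea is roughly the sketch below. This is illustrative only (OpenCV 4 API); the MIN_AREA value and function name are made up for the example and are not the actual speed-cam.py code:

```python
import cv2

MIN_AREA = 200  # hypothetical minimum contour area in pixels

def largest_moving_contour(gray_prev, gray_now):
    """Return the bounding box (x, y, w, h) of the largest moving contour, or None."""
    diff = cv2.absdiff(gray_prev, gray_now)           # difference between consecutive grayscale frames
    _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    thresh = cv2.dilate(thresh, None, iterations=2)   # fill small gaps in the moving blob
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) >= MIN_AREA]
    if not big:
        return None
    return cv2.boundingRect(max(big, key=cv2.contourArea))
```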

A lot of users run speed camera for vehicle tracking and want better, more reliable speed accuracy. Accuracy can be affected by contours shifting relative to the largest moving object as it is being tracked. On vehicles the tracking contours may lock onto just part of the vehicle, such as a wheel well, fender, window, etc., depending on lighting and other factors. This problem is more pronounced when the moving object fills more of the camera view, e.g. the camera is too close to the roadway or the object is a large truck, bus, etc. The problem is less severe when the moving object takes up a smaller area of the camera image. If the object is a vehicle, object detection can be used to get a better fix on the object contour, e.g. its x position.

To implement vehicle detection speed, all that is needed is to save grayscale copies of the frames at the object tracking start and end positions. If a vehicle is detected in both grayscales, get the contour in each and calculate the pixel distance moved, abs(x_start - x_end). This can be done after a successful object track is completed, to verify or correct a tracked vehicle's speed. It won't affect the real-time object tracking loop since it runs after the tracking logic is complete but before the results are saved.
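
As a rough sketch of that correction step (the function and parameter names are illustrative, not the project's API):

```python
def corrected_speed(x_start, x_end, track_secs, px_to_speed):
    """Recalculate speed from the detected vehicle x positions in the saved
    start and end grayscale images.

    px_to_speed is the calibration factor converting pixels per second into
    the reporting units; speed-camera derives this from its calibration
    settings, but the name here is made up.
    """
    pixels_travelled = abs(x_start - x_end)
    return (pixels_travelled / track_secs) * px_to_speed

# example: corrected_speed(42, 250, 0.8, 0.12) -> 31.2
```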

Some problems to resolve.

What about parked vehicles? They won't move but will still trigger vehicle detection and may cause errors when calculating speed. A parked vehicle in the foreground could be a problem. One in the background behind the moving vehicle is less of a problem, but it could still affect the contour position relative to the moving vehicle. To resolve this issue I plan to use the object tracking loop start and end positions and match them to the vehicle detection start and end positions. The object track contour should be within the bounds of the corresponding object detection contour. The positions of each contour pair would have to match, or be close enough; otherwise the corrected speed calculation would be aborted with relevant logging. If they match, the original object speed would be updated based on the vehicle detection, with appropriate logging.
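
For example, the containment test at each end of the track could be as simple as the hypothetical check below; if it fails at either end, the corrected speed calculation would be aborted and logged:

```python
def box_contains(outer, inner, tolerance=10):
    """Return True if the inner (motion track) bounding box lies inside the
    outer (vehicle detection) bounding box, allowing a few pixels of slack.
    Boxes are (x, y, w, h); the names and tolerance are illustrative."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return (ix >= ox - tolerance and
            iy >= oy - tolerance and
            ix + iw <= ox + ow + tolerance and
            iy + ih <= oy + oh + tolerance)
```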

At night or in low light, vehicle detection would most likely fail, but object tracking from vehicle lights would still occur.

Note: I have avoided the issue of multiple object tracking due to RPI processing power. There is some vehicle detection code that does this, but it is very slow for real-time tracking on an RPI. The code below also uses dlib, which can be a pain to install on older RPIs with less than 1 GB of RAM. I used an RPI4 with 4 GB of memory and the pip3 install still took a while.

https://github.com/noorkhokhar99/vehicle-speed-detection-using-opencv-python

Still in the early stages of implementing this.
Comments, suggestions, etc. are welcome.

Claude ....

bmsgaffer86 commented 10 months ago

I am certainly interested in the vehicle detection side of this. Is this something you have a guide on implementing yet?

pageauc commented 10 months ago

Tried tests using various vehicle Haar cascades. This was a total disaster: lots of false positives and not very accurate. I have not had time to investigate AI solutions suitable for lower-powered Raspberry Pis.
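
For reference, the kind of Haar cascade test that was tried looks roughly like this (cars.xml is a third-party cascade file that has to be downloaded separately; the detectMultiScale parameters are illustrative):

```python
import cv2

car_cascade = cv2.CascadeClassifier("cars.xml")   # third-party vehicle cascade, not bundled with OpenCV

def detect_vehicles(gray_image):
    """Return a list of (x, y, w, h) vehicle candidates (expect false positives)."""
    return car_cascade.detectMultiScale(gray_image,
                                        scaleFactor=1.1,
                                        minNeighbors=3,
                                        minSize=(40, 40))
```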

jon-gith commented 10 months ago

Why do you want real-time processing? It is not required for this use case. I have just started with speed-cam and made a small adaptation to read from one file (I have no RTSP cam yet, hopefully I will get it today). I would like to run speed-cam on my NAS/Docker, where I also want to store the video files, do offline calculation of the speed with speed-cam, and then delete the files if no major speed violation was found. What are your thoughts? NAS boxes often have a lot of memory/processing power. Why do you want to stick to low-powered Pis? It seems not the right platform for this use case (and maybe for AI).

pageauc commented 10 months ago

Real time is used for object tracking. I have been retired for 17 years and coded speed camera as a fun demo project to track moving objects. It was designed originally for the RPI and picamera because that is what I had, and it did the job. I did not want to mess with any of my Windows laptops or gaming PCs. The project expanded over time. I am looking at doing some AI at the moment. I have two large XigmaNAS servers running ZFS with a few special-purpose jails, but RPIs/Libre boards are my preferred low-cost and portable computing platforms. If something gets screwed up I just burn a fresh SD card and reattach my USB hard drive/SSD if needed. I can also mount an NFS or Samba share, but I prefer simple solutions. Your mileage may vary.

BTW, the speed-cam config.py has a MO_MAX_SPEED_OVER variable to only record objects faster than a specified speed. See the variable comments in config.py.
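
In effect the filter amounts to something like this (illustrative only; see the config.py comments for the authoritative behaviour):

```python
MO_MAX_SPEED_OVER = 30   # only record objects tracked faster than this speed

def should_record(ave_speed):
    """Illustrative: skip saving/logging objects at or below the threshold."""
    return ave_speed > MO_MAX_SPEED_OVER

print(should_record(25), should_record(45))   # False True
```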

You are more than welcome to develop your own solution or fork and modify my speed-camera project to meet what you see as your use case. If you do, I would be very pleased to hear about your solution.

Regards Claude ....

jon-gith commented 10 months ago

Dear Claude, thanks a lot for your feedback :) I love your project and the dedication and skill you have put into it. I understand the history, but maybe with AI and multiple object tracking it is no longer possible to support the RPI. I was reacting to your note: "I have avoided the issue of multiple object tracking due to RPI processing power. There is some Vehicle detection code that does that, but is very slow for real time tracking on RPI." I will keep you updated.

Yesterday I started to look into the code and made the changes. I'm working on a Windows 10 computer. Some things I haven't figured out yet: the GUI doesn't start, and the web server has some code that is not compatible with Windows. You are much more experienced than me; I only posted my comment to get your thoughts on offline processing. In my opinion it is also beneficial for testing. I was using a USB cam yesterday, but it always took a long time (30 s? I have to check it again) to connect. Maybe it is faster with Linux?

Another option might be Google Colab. They offer a lot of processing power for free. The video file can be stored on Google Drive and it seems possible to use the OpenCV GUI, so people would only have to buy a camera and transfer the video files to Google Drive. That might be the best solution for most people.

Best regards from Germany, Jörg

pageauc commented 10 months ago

Recently I added a few AI-related features for development. The config.py settings are under the AI heading.

AI Settings
-----------

IM_SAVE_4AI_ON = False                   # Save small colour images for later AI processing
IM_SAVE_4AI_DAY_THRESH = 10              # Default = 10  Mean pixel value indicating day/night threshold; higher is day
IM_SAVE_4AI_POS_DIR = "media/ai/pos"     # Save positive AI images after tracking completed
IM_SAVE_4AI_NEG_DIR = "media/ai/neg"     # Save negative AI images (no motion detected)
IM_SAVE_4AI_NEG_TIMER_SEC = 60 * 60 * 6  # Save a non-positive image every specified seconds
IM_FIRST_AND_LAST_ON = False             # Save and process first and last tracking images (NOT fully implemented)

The IM_SAVE_4AI settings save positive and negative small images for use in training an AI model. A crude CSV file is also created. The THRESH setting senses the light level and suppresses taking night-time AI images. Speed Camera motion tracking can track car lights and speed, but AI needs to see the objects. This is mainly for me to train my own AI model using PyTorch.

IM_FIRST_AND_LAST_ON turns on saving a copy of the first and last speed tracking images. AI can be run on these to calculate the pixels travelled; the travel time is already known. The images are saved as filename_1 and filename_2 (no text overlay) along with the regular speed camera image. All this is still very much a work in progress.

My idea is to do AI on the first and last images and correct speed errors. This can be done in a separate process and does not have to be part of speed-cam.py, since the data is stored in the SQL database. I am doing this in my spare time; coding is just one of my hobbies/activities.
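
A rough sketch of what such a separate post-processing pass might look like, assuming a detector callback and made-up table, column and file names (the real speed-camera schema may differ):

```python
import sqlite3

def post_correct_speeds(db_path, detect_vehicle_x, px_to_speed):
    """Re-estimate speed from the saved first (_1) and last (_2) track images.

    detect_vehicle_x(image_path) should return the detected vehicle x position
    or None. The table, column and file naming here is illustrative only.
    """
    con = sqlite3.connect(db_path)
    cur = con.cursor()
    rows = cur.execute("SELECT idx, image_path, track_sec FROM speed").fetchall()
    for row_id, image_path, track_secs in rows:
        x1 = detect_vehicle_x(image_path.replace(".jpg", "_1.jpg"))
        x2 = detect_vehicle_x(image_path.replace(".jpg", "_2.jpg"))
        if x1 is None or x2 is None or track_secs <= 0:
            continue   # no vehicle found in one of the images; keep the original speed
        new_speed = abs(x1 - x2) / track_secs * px_to_speed
        cur.execute("UPDATE speed SET ave_speed = ? WHERE idx = ?",
                    (new_speed, row_id))
    con.commit()
    con.close()
```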

Regards Claude ...

jon-gith commented 10 months ago

Here are the few changes I made. config.py: I wanted to follow your naming convention, so I introduced a new CAMERA value "filecam" and FILECAM_SRC = "video.mp4".

Camera Settings
---------------

CAMERA = "filecam"             # valid values: usbcam, rtspcam, pilibcam, pilegcam, filecam
CAM_LOCATION = "Front Window"
FILECAM_SRC = "video.mp4"
USBCAM_SRC = 0                 # Device number of USB connection, usually 0, 1, 2, etc.

I created a new file "strmfilecam.py" based on "strmusbcam.py" with just one change, in "name":

class CamStream:
    def __init__(self, src=0, size=(320, 240), name="FileVideoStream"):
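
For anyone wanting to try the same thing, a self-contained version of such a file-based stream class might look like the sketch below (simplified, not the actual strmusbcam.py code; the real class has more options):

```python
import threading
import cv2

class CamStream:
    """Minimal threaded video-file reader modelled loosely on strmusbcam.py (sketch only)."""

    def __init__(self, src="video.mp4", size=(320, 240), name="FileVideoStream"):
        self.size = size
        self.cap = cv2.VideoCapture(src)
        grabbed, frame = self.cap.read()
        self.frame = cv2.resize(frame, size) if grabbed else None
        self.stopped = not grabbed
        self.thread = threading.Thread(target=self._update, name=name, daemon=True)

    def start(self):
        self.thread.start()
        return self

    def _update(self):
        while not self.stopped:
            grabbed, frame = self.cap.read()
            if not grabbed:               # end of the video file
                self.stopped = True
                break
            self.frame = cv2.resize(frame, self.size)

    def read(self):
        return self.frame

    def stop(self):
        self.stopped = True
        self.cap.release()
```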

That's it, I think. For me it is also just a hobby. I haven't used Python for a while. I'm just annoyed by the traffic and reckless drivers and am looking for a solution to measure their speed.

jon-gith commented 10 months ago

I forgot the changes in strmcam.py.

line 14:

CAMLIST = ('usbcam', 'rtspcam', 'pilibcam', 'pilegcam', 'filecam')

line 23:

try:
    from config import (PLUGIN_ENABLE_ON, PLUGIN_NAME, CAMERA, IM_SIZE,
                        FILECAM_SRC, RTSPCAM_SRC, USBCAM_SRC,
                        IM_FRAMERATE, IM_ROTATION, IM_HFLIP, IM_VFLIP)

line 170:

elif cam_name == 'usbcam' or cam_name == 'rtspcam' or cam_name == 'filecam':
    if cam_name == 'rtspcam':
        cam_src = RTSPCAM_SRC
        cam_title = cam_name.upper() + ' src=' + cam_src
    elif cam_name == 'usbcam':
        cam_src = USBCAM_SRC
        cam_title = cam_name.upper() + ' src=' + str(cam_src)
    elif cam_name == 'filecam':
        cam_src = FILECAM_SRC
        cam_title = cam_name.upper() + ' src=' + str(cam_src)

jon-gith commented 10 months ago

UPDATE: I have installed the camera (Reolink E1 Zoom), and the motion detection and automatic FTP video-file transfer to the NAS are working. Now I have a directory structure with a number of small video files for each day. The idea is that the Python script works on the video files from the day before, and all files with no speed violation are deleted.
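
A skeleton of that nightly batch job could look like this; the directory layout, speed threshold, and the process_video() helper are all placeholders for whatever speed-cam based processing ends up being used:

```python
from pathlib import Path

SPEED_LIMIT = 50                        # hypothetical violation threshold
VIDEO_ROOT = Path("/nas/ftp/reolink")   # hypothetical FTP target directory

def process_video(video_path):
    """Placeholder: run speed-cam style tracking over one file and return
    the highest speed measured (implementation not shown here)."""
    raise NotImplementedError

def purge_clean_videos(day_dir):
    """Delete yesterday's clips that contain no speed violation."""
    for video in sorted(day_dir.glob("*.mp4")):
        top_speed = process_video(video)
        if top_speed < SPEED_LIMIT:
            video.unlink()              # nothing interesting, reclaim the space
        else:
            print(f"kept {video.name}: top speed {top_speed:.1f}")

# e.g. purge_clean_videos(VIDEO_ROOT / "2023-10-26")
```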

pageauc commented 9 months ago

I would be interested in seeing your script. Is it on GitHub? Is it doing vehicle detection?

jon-gith commented 9 months ago

So far this is just what the camera offers (automatic file transfer to an FTP server on the NAS). I will continue extending your script to do the rest. I guess next week I can upload something to GitHub.

jon-gith commented 9 months ago

Most surveillance cameras are capable of vehicle detection and FTP file transfer, I guess. You have to look at the supported protocols in the camera description.

pageauc commented 9 months ago

OK, now I understand. My cameras have some of this capability. Claude ...

jon-gith commented 9 months ago

I'm struggling to get the RTSP motion detection working for my situation: the camera is about 50 m away from the street, so only a small portion of the view is relevant for motion detection. To reduce the size of the stream/file I have blacked out the rest of the view (see examples in the attached link). I have uploaded the config file and screenshots: https://github.com/jon-gith/test Will the motion detection work under these circumstances? Thanks, Jörg

pageauc commented 9 months ago

Motion detection only uses the crop rectangle, so there is no need to black out or change any of the full-size image. Just size the crop rectangle to the smallest size that works. It is best to set the stream size as small as practical as well, especially with lower-powered RPIs. The default is 320x240, but 640x480 works fine too. Larger can work depending on the RPI or computer power.
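
The crop itself is just an array slice over the stream image, so everything outside the rectangle costs nothing to process. A generic illustration (the coordinate names are made up; the real settings live in the crop area section of config.py):

```python
# Hypothetical crop rectangle in 320x240 stream coordinates
x_left, x_right = 40, 280
y_upper, y_lower = 90, 200

def crop_track_area(image):
    """Return only the portion of the frame used for motion tracking."""
    return image[y_upper:y_lower, x_left:x_right]
```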

jon-gith commented 9 months ago

Thanks :) I didn't notice the "# Motion Tracking Window Crop Area Settings" section in the config file.

jon-gith commented 9 months ago

I find it quite painful to understand and fine-tune the motion detection parameters, and it is still not working well enough. I will try the example you provided in your initial post: https://github.com/noorkhokhar99/vehicle-speed-detection-using-opencv-python How do you like that approach? Is it working well? Do you have a new version of speed-cam.py using it?

pageauc commented 9 months ago

That uses a Haar cascade and is not very reliable. I suggest you try it for yourself. Let me know your results. Claude ...

jon-gith commented 9 months ago

I tried another Haar cascade example a few days ago, and it worked OK but was slow (for faces). There is another example with cars that uses YOLO (it didn't work for me on Google Colab), but it seems to be fast and the best AI algorithm: https://github.com/theAIGuysCode/colab-webcam

jon-gith commented 9 months ago

I was looking for "yolo pytorch vehicle detection" and found this example of a counting application (GUI included). I will try this one first: https://github.com/wsh122333/Multi-type_vehicles_flow_statistics

jon-gith commented 9 months ago

I got the above program working; just a few things had to be changed. But it is not working well for my situation, because it tries to track all the parked cars in the background. Maybe you have an idea how to avoid that. The configuration/GUI/installation is good, but it takes a lot of time and disk space, which is not so good for Google Colab because it has to be repeated every time a new Colab session is started.

jon-gith commented 9 months ago

I will try this lightweight cascade example next https://github.com/ckyrkou/Car_Sideview_Detection/tree/master

pageauc commented 9 months ago

My idea was to track motion as usual and record the first and last images of the track (a basic feature already added to the speed-cam.py logic). Then do vehicle recognition on the start and end images and check whether the motion track contour is within the vehicle recognition contour. If both the start and end motion contours are within their respective vehicle contours, subtract the start and end vehicle recognition x positions. This gives how many pixels were travelled, and since the travel time is known, the speed can be calculated. I would compare the motion track speed with the vehicle recognition speed to see what the error is; the vehicle recognition speed should be more accurate. Since motion is still being tracked, parked vehicles should not be a problem.

The vehicle detection logic can be done offline, in a separate thread, or even on a different machine, since it just needs two images (track start and track end) and the travel time.

Claude .....

jon-gith commented 9 months ago

I guess you want to use an AI-based approach, like a Haar cascade? Did you ever use opencv_traincascade on your own data?

pageauc commented 9 months ago

I am working on creating my own AI model using my own images, using CVAT to annotate the images and PyTorch with CUDA to create the model. Annotation takes quite a bit of time for a few thousand images, but I am doing it as a learning experience.

CVAT is on a VMware Ubuntu 64-bit virtual machine, running under a Docker web interface. It works pretty well once you get familiar with the interface and keyboard shortcuts.

I had to install CUDA on one of my Windows machines since it won't work in a VMware VM. I got PyTorch installed and talking to CUDA, but that is as far as I can go until I have the image annotation data. That might take a while, so I might do a small sample test first.
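
For reference, the quick sanity check that PyTorch is actually talking to CUDA is just:

```python
import torch

print(torch.__version__)
print(torch.cuda.is_available())           # True if PyTorch can reach the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. the GPU model name
```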

Claude

jon-gith commented 9 months ago

Wow, you are doing advanced stuff. I don't have a PC with an NVIDIA GPU, and without one the more advanced AI, e.g. YOLO as in this example https://github.com/wsh122333/Multi-type_vehicles_flow_statistics, is down to 1 fps and not really usable. I will continue with your solution; it seems best suited to normal PCs/NAS.

jon-gith commented 9 months ago

I'm new to motion detection and object tracking, so I hope you don't mind if I ask a few questions on the subject: it seems that you didn't use any of the OpenCV built-in object tracking algorithms (e.g. MOSSE). Why?

jon-gith commented 9 months ago

Another question: wouldn't it be better to restrict motion detection to two smaller areas, left and right? That might increase performance and avoid unnecessary tracking of vehicles leaving the parking area or coming from a side road.

johncblacker commented 9 months ago

Noting that a number of people are attempting detection on the RPI and having issues with performance: one solution is to use the Google Coral TPU accelerator. I've found it works well and speeds up detection. Also, a possible solution for inaccuracy is to implement and understand the use of "centroid" tracking. Adrian Rosebrock has a good example on the pyimagesearch website. His solution, though, uses the Intel NCS 2, which is expensive and isn't going to be supported for long. Converting Adrian's solution to use the Google Coral TPU accelerator isn't difficult and is cheaper; besides, you don't have to install OpenVINO, which can be a pain.
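
For anyone unfamiliar with the centroid tracking idea mentioned above, a stripped-down sketch of the matching step is shown below; the full pyimagesearch version also handles disappearance timeouts and uses a proper assignment rather than this greedy nearest-neighbour match:

```python
import math

class CentroidTracker:
    """Tiny centroid tracker: match new detections to existing object IDs
    by nearest centroid distance (simplified sketch)."""

    def __init__(self, max_distance=50):
        self.next_id = 0
        self.objects = {}            # object id -> (cx, cy)
        self.max_distance = max_distance

    def update(self, boxes):
        """boxes is a list of (x, y, w, h) detections for the current frame."""
        updated = {}
        for (x, y, w, h) in boxes:
            centroid = (x + w // 2, y + h // 2)
            best_id, best_dist = None, self.max_distance
            for obj_id, prev in self.objects.items():
                d = math.dist(centroid, prev)
                if d < best_dist and obj_id not in updated:
                    best_id, best_dist = obj_id, d
            if best_id is None:      # nothing close enough, register a new object
                best_id = self.next_id
                self.next_id += 1
            updated[best_id] = centroid
        self.objects = updated       # objects not matched this frame are dropped
        return updated               # object id -> centroid for this frame
```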

jon-gith commented 9 months ago

To answer my own question about the OpenCV built-in object tracking algorithms (e.g. MOSSE): I found the reason; it is not usable: https://github.com/opencv/opencv_contrib/issues/2377