Closed — tomybyte closed this issue 2 years ago
Do you have verbose = True in config.py? If this is set you will get more detailed messages.
There are timers for each mode (motion, face and search); these are in config.py. They help prevent constant switching between modes. If a face is found, the script stays in face mode for the specified number of seconds, so that if the face lock is lost momentarily it will wait a while to see if the face is found again. If it is not found within the specified time, the mode changes per the logic.
timer_motion = 1 # seconds delay after no motion before looking for face
timer_face = 3 # seconds delay after no face found before starting pan search
timer_pan = 2 # seconds delay between pan search repositioning movements
Motion detection should be pretty fast in good lighting conditions, so it defaults to 1 second. I tested this on an RPI3, which is faster, so you may have to tune the timer settings. Also, 2 seconds between pan/tilt search moves may not be enough time to allow for motion or face detection, so you might want to increase that timer. This will allow more time to lock on a face or find motion.
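The "stay in a mode for a grace period" behaviour described above can be sketched as a tiny helper (purely illustrative, not the actual face-track.py code; the ModeTimer class and the injectable clock are my own names):

```python
import time

# Timer values mirror the config.py settings quoted in this thread.
timer_motion = 1  # seconds after no motion before looking for a face
timer_face = 3    # seconds after no face before starting the pan search
timer_pan = 2     # seconds between pan-search repositioning moves

class ModeTimer:
    """Keeps the current mode 'sticky' for a grace period, so a
    momentary loss of face lock does not cause instant mode flipping."""
    def __init__(self, mode, hold_seconds, clock=time.monotonic):
        self.mode = mode
        self.hold = hold_seconds
        self.clock = clock          # injectable for testing
        self.last_hit = clock()

    def hit(self):
        # Call whenever the mode's target (motion/face) is detected.
        self.last_hit = self.clock()

    def expired(self):
        # True once nothing was detected for the full hold period.
        return (self.clock() - self.last_hit) > self.hold

# Demonstrate with a fake clock instead of real sleeps:
fake_now = [0.0]
face_timer = ModeTimer("face", timer_face, clock=lambda: fake_now[0])
face_timer.hit()              # face seen at t=0
fake_now[0] = 2.0
print(face_timer.expired())   # False: still inside the 3 s grace period
fake_now[0] = 3.5
print(face_timer.expired())   # True: face lost for > timer_face seconds
```

With this structure, raising timer_face simply widens the window during which a briefly lost face can be reacquired without a mode change.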
Make sure you have good lighting conditions when testing; natural light would be best. If you have a monitor attached, you can look at the opencv diff window to see what the camera is detecting for motion. Set the config.py variable diff_window_on = True.
Also, running from the desktop with window_on = True will slow processing down quite a bit, so you may want to try running with window_on = False.
I suspect the problem is either lighting conditions and/or timer settings. Try changing these first.
Let me know how you make out with tuning.
Claude ...
Hi Claude,
thank you for your feedback!
So I added some more verbose output, and it seems that for c in contours: never runs. I have tested it in daylight with a lot of hand waving ;)
Maybe you have another idea?
Sincerely
#-----------------------------------------------------------------------------------------------....
def motion_detect(gray_img_1, gray_img_2):
    motion_found = False
    biggest_area = MIN_AREA
    # Process images to see if there is motion
    differenceimage = cv2.absdiff(gray_img_1, gray_img_2)
    differenceimage = cv2.blur(differenceimage, (BLUR_SIZE, BLUR_SIZE))
    # Get threshold of difference image based on THRESHOLD_SENSITIVITY variable
    retval, thresholdimage = cv2.threshold(differenceimage, THRESHOLD_SENSITIVITY, 255, cv2.THRESH_BINARY)
    # Get all the contours found in the thresholdimage
    try:
        # OpenCV 3 returns three values from findContours
        thresholdimage, contours, hierarchy = cv2.findContours(thresholdimage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    except ValueError:
        # OpenCV 2 (and 4) return two values
        contours, hierarchy = cv2.findContours(thresholdimage, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours != ():  # Check if Motion Found
        for c in contours:
            if verbose:
                print("motion_detect - find area")
            found_area = cv2.contourArea(c)   # Get area of current contour
            if found_area > biggest_area:     # Check if it has the biggest area
                biggest_area = found_area     # If bigger then update biggest_area
                (mx, my, mw, mh) = cv2.boundingRect(c)  # get motion contour data
                motion_found = True
            else:
                if verbose:
                    print("motion_detect - found_area smaller than biggest_area")
        if motion_found:
            motion_center = (int(mx + mw/2), int(my + mh/2))
            if verbose:
                print("motion_detect - Found Motion at px cx,cy (%i, %i) Area w%i x h%i = %i sq px" % (int(mx + mw/2), int(my + mh/2), mw, mh, biggest_area))
        else:
            motion_center = ()
            if verbose:
                print("motion_detect - no motion found")
    else:
        motion_center = ()
        if verbose:
            print("motion_detect - no contours found")
    return motion_center
root@tomobserver:~/face-track-demo# ./face-track.py
===================================
face_track.py ver 0.63 using python2 and OpenCV2
Loading Libraries Please Wait ....
Initializing Pi Camera ....
press ctrl-c to quit SSH or terminal session
Position pan/tilt to (90, 130)
pan_goto - Moved Camera to pan_cx=90 pan_cy=130
===================================
Start Tracking Motion and Faces....
motion_detect - no motion found
motion_detect - no motion found
... (motion_detect - no motion found repeated ~40 more times)
face_detect - running
pan_search - at pan_cx=143 pan_cy=130
pan_goto - Moved Camera to pan_cx=143 pan_cy=130
motion_detect - no motion found
Did not notice that you are running Jessie Lite so you will not have a GUI desktop to view opencv windows to help debug. It is a little more difficult debugging opencv without being able to see the various opencv windows but the previous recommendations should help. You could always insert an opencv command to write an image to disk based on a timer or cycles or when a mode change happens. This would allow you to see the diff or threshold images to get an idea of what the camera is seeing.
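The write-an-image-every-so-many-cycles idea could be sketched like this (a hedged illustration; should_save_debug and DEBUG_SAVE_EVERY are my own names, and the actual cv2.imwrite call is shown only as a comment so the gating logic stands alone):

```python
DEBUG_SAVE_EVERY = 50  # assumed: write a debug frame every 50 loop cycles

def should_save_debug(cycle_count, every=DEBUG_SAVE_EVERY):
    """Return True on cycles where a debug image should be written."""
    return cycle_count % every == 0

# Simulated tracking loop showing where the write would happen:
cycle = 0
for _ in range(120):
    if should_save_debug(cycle):
        # In face-track.py you would write the current diff or
        # threshold image here, for example:
        # cv2.imwrite("debug-diff-%05d.jpg" % cycle, differenceimage)
        print("would save debug image at cycle", cycle)
    cycle += 1
```

Inspecting those saved diff/threshold images on a headless Jessie Lite box gives roughly the same insight as the opencv windows on a desktop install.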
Claude ...
Try changing the config.py MIN_AREA variable. It is set to 1000; you might want to try reducing that to maybe 100 to see if motion gets detected. Also, have you changed the timer_pan variable? Try setting it to maybe 4 or 5 seconds. This is just a demo, so I have not fine-tuned it much. Let me know the status.
Claude ...
Did not ask: are you running this under python2, or python3 and opencv3?
face_track.py ver 0.63 using python2 and OpenCV2
Good idea to write out images!
Both images are totally black (320px × 240px) :O
def motion_detect(gray_img_1, gray_img_2):
    cv2.imwrite("face-1.jpg", gray_img_1)
    cv2.imwrite("face-2.jpg", gray_img_2)
OK, tomorrow I'll try some basic OpenCV2 tests to get working images...
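For those basic tests, one quick sanity check is whether a grabbed frame is essentially black, i.e. its mean pixel intensity is near zero. A minimal sketch that works on any flat sequence of 8-bit pixel values (is_black_frame and the threshold value are my own choices, not part of face-track.py):

```python
def is_black_frame(pixels, threshold=5.0):
    """True if the mean 8-bit pixel intensity is below threshold.
    `pixels` is any flat iterable of 0-255 values, e.g. the raw bytes
    of a grayscale frame, or numpy_img.ravel() if numpy is available."""
    pixels = list(pixels)
    if not pixels:
        return True  # an empty capture is treated as black
    return sum(pixels) / len(pixels) < threshold

dark = bytes([0, 1, 0, 2] * 100)      # essentially black frame
lit = bytes([40, 90, 120, 60] * 100)  # normally exposed frame
print(is_black_frame(dark))   # True
print(is_black_frame(lit))    # False
```

Logging this boolean each loop cycle would quickly show whether the camera ever delivers a usable frame, or whether every capture is black from the start.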
I am suspecting something is not working right in detecting contours. It may be an issue with Jessie Lite. Is it possible to install a full version of Jessie and try installing and running the face-track demo under that? This will also allow you to view the opencv windows to see what is happening. All you would need is a second SD card. I use etcher to burn SD images and it works well. I have had issues with Jessie Lite with other programs in the past.
Claude ....
We just got back from wintering in Texas, so I had stuff packed away. I just dug out my pan/tilt RPI and did a fresh install of face-track.py. Everything worked perfectly. I suspect your problem may be due to Jessie Lite; I have not tested my setup with Jessie Lite. Also, I am using an RPI3 and python 2, but I have tested with python3 and opencv3 without issues. Let me know if your troubleshooting indicates something else. In the meantime my setup is happily tracking motion and faces.
Claude ...
Test the camera with raspistill to see if there is a problem with it. It might be a cable: I occasionally get a buffer error due to the cable shaking with pan/tilt and momentarily breaking the connection, but this does not happen very often.
Let me know how you are doing.
Hi Claude,
thanks a lot for your input!!! My camera (NoIR) works well, and I use the RPi-Cam-Web-Interface (based on motion) successfully. raspistill -o cam.jpg also works well.
So tomorrow I'll try some basic OpenCV2 tests to check why the images are black.
I'll keep you updated...
Sincerely
Thomas
I have one Noir RPI camera. I will have to find it and do a test with opencv.
-- See my YouTube Channel at http://www.youtube.com/user/pageaucp
Hi Claude,
I have found a smart way to use your script with my existing RPi-Cam-Web-Interface... it produces a permanent output stream at /dev/shm/mjpeg/cam.jpg
So I just replaced the img_frame source and commented out the video stream init, and voilà... both of them work (the web interface and your face tracking):
#vs = PiVideoStream().start() # Initialize video stream
#vs.camera.rotation = CAMERA_ROTATION
#vs.camera.hflip = CAMERA_HFLIP
#vs.camera.vflip = CAMERA_VFLIP
#img_frame = vs.read()
img_frame = cv2.imread('/dev/shm/mjpeg/cam.jpg')
#img_frame = vs.read()
img_frame = cv2.imread('/dev/shm/mjpeg/cam.jpg')
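One thing worth guarding against with this approach: RPi-Cam-Web-Interface keeps overwriting cam.jpg, so if it ever stops, the tracker would silently keep processing a stale frame. A small freshness check using only the file's mtime could look like this (a sketch with assumed names; frame_is_fresh and max_age are my own):

```python
import os
import time

CAM_JPG = '/dev/shm/mjpeg/cam.jpg'  # path written by RPi-Cam-Web-Interface

def frame_is_fresh(path, max_age=1.0, now=None):
    """True if the snapshot file was rewritten within max_age seconds.
    Guards against processing a stale frame if the web interface stops."""
    if now is None:
        now = time.time()
    try:
        return (now - os.path.getmtime(path)) <= max_age
    except OSError:
        return False  # file missing or unreadable: treat as stale

# In the tracking loop one might do (hedged sketch, not the actual script):
# if frame_is_fresh(CAM_JPG):
#     img_frame = cv2.imread(CAM_JPG)
```

Since /dev/shm is a RAM disk, both the mtime check and the cv2.imread are cheap enough to run every loop cycle.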
I have 3 HC-SR501 sensors (left, center, right) plus pan/tilt servos. My goal now is to first find the motion with one of the sensors (also scripted), then follow the motion with your script, and record it automatically with motion (RPi-Cam-Web-Interface).
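That sensor-first idea could be sketched as a simple mapping from whichever PIR fired to an initial pan angle (purely illustrative; the angles, the initial_pan function, and the tie-breaking rule are my own assumptions, and on the Pi the booleans would come from RPi.GPIO reads of the HC-SR501 data pins):

```python
# Hypothetical pan angles for the three sensor directions.
PAN_LEFT, PAN_CENTER, PAN_RIGHT = 45, 90, 135

def initial_pan(left_triggered, center_triggered, right_triggered):
    """Pick a starting pan angle from whichever PIR sensor fired.
    Center wins ties, since the camera is likely already pointed there."""
    if center_triggered:
        return PAN_CENTER
    if left_triggered:
        return PAN_LEFT
    if right_triggered:
        return PAN_RIGHT
    return None  # no sensor fired: keep the current search behaviour

print(initial_pan(True, False, False))   # 45
print(initial_pan(True, True, False))    # 90 (center wins the tie)
print(initial_pan(False, False, False))  # None
```

The returned angle would then be handed to something like pan_goto before the camera-based tracking takes over.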
I'll keep you updated (if you are interested)...
Sincerely
Thomas
I would be very interested in seeing how your project progresses. Let me know your github repo and I will put a watch on it.
Thanks Claude ...
Hi Claude,
thank you again for your inspiration!!! And here is my RPi-Cam-Web-Interface plugin: https://github.com/tomybyte/Motion-Pi-Pan
Sincerely
Thomas
Great job! Now I am inspired by you and will order the HC-SR501 sensors.
Thanks for the mention and letting me know details of your project.
Regards Claude
Hi Claude,
thank you for your software...
After a successful installation on a Raspberry Pi 2B (raspbian-jessie-lite), the script runs without any error message, but also without any motion detection. The (pan/tilt) servos work and the camera LED is on.
Is there any way to get more debug output, or do you have an idea what the problem might be?
Sincerely
PS: ./motion-track.py also does not detect any motion.