We also use it on a mobile platform 1.3 meters tall, and it works very well. You have probably seen OpenPTrack used many times with cameras placed quite high in order to minimize occlusions.
Hello, Pablo~ I also ran experiments at home, placing the Kinect at a height of about 0.5 m, and it is easy to lose the track when the camera cannot see people's heads. Did you run into similar problems?
Regards, Brad Lucas. PhD Student @ Harbin Engineering University. Software Engineer & Computer Vision Engineer @ XiRobot Co., Ltd.
Thank you Matteo, Brad.
My tests were at maybe 1.2 m, with the Turtlebot on a table. It was doing fine, but the node crashed often. We think it was because of the existing ROS installation, but we did no debugging, so we have to test again in the near future. The Turtlebot has the Kinect at 30 cm from the floor, so a full body is only visible when the person is far away, maybe 3 meters. We had the same problems, Brad, but still have to run more tests.
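For anyone wondering where the ~3 meters comes from: here is a back-of-envelope sketch (not part of OpenPTrack; the ~43 degree Kinect v1 vertical FOV and the 1.8 m person height are assumptions) of the minimum distance at which a full body fits in view of a low-mounted camera:

```python
import math

# Rough geometry check. Assumptions: Kinect v1 vertical FOV ~43 deg,
# person 1.8 m tall. For a low camera the head is the binding constraint;
# the feet come into view much closer.
VFOV_DEG = 43.0

def min_full_body_distance(cam_height_m, person_height_m, tilt_up_deg=0.0):
    """Distance at which the top of the person enters the view.

    The top edge of the view at distance d sits at
    cam_height + d * tan(tilt_up + VFOV/2); solve that for the head height.
    """
    top_edge = math.radians(tilt_up_deg + VFOV_DEG / 2.0)
    return (person_height_m - cam_height_m) / math.tan(top_edge)

print(min_full_body_distance(0.30, 1.80))                 # ~3.8 m, camera level
print(min_full_body_distance(0.30, 1.80, tilt_up_deg=5))  # ~3.0 m, tilted up 5 deg
```

With the camera tilted up slightly, this lands right around the ~3 m we observed with the Kinect at 30 cm.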
Thanks again for your comments; we will try again, probably with the camera a little higher. I'll close the issue.
It's my understanding of the algorithm that it needs to see at least a torso and head from at least one of the cameras in the network.
You're welcome, Pablo. I also have a question: in my experiments these days, I just walked around the camera, and my recognized ID kept changing all the time: 1, 2, 3, ... If I want to track myself, what information can I use? The recognized ID is not robust, because it keeps changing.
I just walked around the camera, and my recognized ID kept changing all the time: 1, 2, 3, ...
By 'around', do you mean physically orbiting the camera 360 degrees, walking out of the camera's line of sight and then back into view?
If so, the ID will indeed change, as you walked outside the work envelope for 'too long', and are re-entering 'far' from where you exited.
(However, if you left momentarily and re-enter where you left, there is a fair chance you will retain your ID, due to RGB hinting and the track lifecycle.)
OPT was designed to track people very well in large spaces with N >= 1 cameras. To maintain ID persistence, add cameras to reduce invisibility or occlusion. If you're seen, you're tracked, and the IDs are remarkably persistent. (The system cannot be expected to track you when you're not seen by a camera :)
There is RGB hinting for exit/enter (a momentary lapse of line of sight) - it can help - but OPT's design did not assume all cameras have RGB. Thus it is a 'nice to have'; it is used with Kinect 1 tracking, but nothing else.
So, generally, if you're not seen, you're not tracked. And if you're not tracked, you will indeed be seen as a new entity when you re-enter - especially if you re-enter far away (spatiotemporally) from where you exited.
There are configuration parameters to modify these timeouts, RGB thresholds, and proximity, but I doubt they will help your 'walk off camera and re-enter far from exit' use case, as that's far from the design of OPT.
The point here is that the behavior you are describing is expected. The current solution would be to add more cameras.
However, if you're saying the ID changes even while you remain visible to the camera, that can indeed be tuned. As long as you're visible from at least one camera, the expectation is that you will retain your ID. See https://github.com/OpenPTrack/open_ptrack/wiki/Imager-Settings
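If you need a single person's trajectory despite occasional ID switches, one workaround on the consumer side is to re-associate by position. A minimal sketch, assuming OpenPTrack's opt_msgs/TrackArray messages on /tracker/tracks (the topic name and the Track fields id, x, y are assumptions - verify them against your opt_msgs definitions):

```python
#!/usr/bin/env python
# Sketch only: follow one person across ID switches by latching onto the
# nearest track when the old ID disappears. Topic and field names are
# assumptions - check opt_msgs before using this.
import math
import rospy
from opt_msgs.msg import TrackArray

last_id, last_pos = None, None
MAX_JUMP_M = 0.8  # reject re-associations farther than this (tune it)

def on_tracks(msg):
    global last_id, last_pos
    if not msg.tracks:
        return
    same = [t for t in msg.tracks if t.id == last_id]
    if same:
        t = same[0]
    elif last_pos is None:
        t = msg.tracks[0]  # bootstrap on the first track we ever see
    else:
        t = min(msg.tracks,
                key=lambda c: math.hypot(c.x - last_pos[0], c.y - last_pos[1]))
        if math.hypot(t.x - last_pos[0], t.y - last_pos[1]) > MAX_JUMP_M:
            return  # nothing plausibly close; wait for the next message
    last_id, last_pos = t.id, (t.x, t.y)
    rospy.loginfo("following id=%d at (%.2f, %.2f)", t.id, t.x, t.y)

rospy.init_node('follow_one_person')
rospy.Subscriber('/tracker/tracks', TrackArray, on_tracks)
rospy.spin()
```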
Hi, Alex Nano~ Thanks very much for your detailed answer!!! It really helps a lot!!

"Around" means that I walked within the view of the camera (not out of its range). Because of the low resolution of the Kinect (640 x 480), sometimes my head was not within the camera's view even though my body was fully inside it; sometimes I needed to squat for my full body to be seen by the Kinect.

I tuned the parameters in "$ROSHOME/open_ptrack/detection/conf/ground_based_people_detector*.yaml" and "haar_disp_ada_detector.yaml", and ground_based_people_detection_min_confidence really can change the sensitivity and accuracy. But sometimes, even when my whole body was inside the view of the camera (and had not left it at all), the detection of my body was lost, and when the detection came back, the recognized ID had changed. There is also a hint: the detection rectangle drawn on the video stream covered only half of my body, unlike the 3D rectangle, which covered my whole body.

One of my guesses is that I didn't do camera calibration before running the code; does that hurt?
I tuned the parameters in "$ROSHOME/open_ptrack/detection/conf/ground_based_people_detector*.yaml" and "haar_disp_ada_detector.yaml", and ground_based_people_detection_min_confidence really can change the sensitivity and accuracy.
Indeed - note you can tune these in realtime via:
rosrun rqt_reconfigure rqt_reconfigure
(While this is useful to explore, one still has to manually save the best settings in the config/text files for them to persist between sessions.)
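The same runtime tuning can also be scripted through the dynamic_reconfigure client API, e.g. to sweep a parameter while someone walks around. A sketch - the node name and parameter name below are assumptions (list the real ones with rosrun dynamic_reconfigure dynparam list):

```python
#!/usr/bin/env python
# Sketch: scripted version of what rqt_reconfigure does interactively.
# The node and parameter names below are assumptions - check your setup.
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node('opt_param_tuner')
client = Client('/ground_based_people_detector', timeout=10)
client.update_configuration(
    {'ground_based_people_detection_min_confidence': -1.5})
```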
One of my guesses is that I didn't do camera calibration before running the code; does that hurt?
There are two types of calibration - intrinsic (single camera) and extrinsic (multi-camera).
Intrinsic is not necessary, but it can enable higher-precision detection, especially when there is significant variance between devices of the same make - such as the Kinect v1. Not so much with the Kinect v2 (but no, it can't hurt).
Extrinsic is only relevant in multi-camera networks.
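If you do want to run an intrinsic calibration, the standard ROS camera_calibration tool with a printed checkerboard works; the board size, square size, and topic names below are assumptions to adapt to your setup:

rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.025 image:=/camera/rgb/image_raw camera:=/camera/rgb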
Hello Alex~ Do you mean that OpenPTrack uses purely depth information for tracking? I saw a parameter "use_rgb" which can be set to "true" or "false". Does that mean I can choose whether or not to use the RGB image by setting that parameter?
Actually, color information was used to learn person-specific classifiers in my previous work, but color is not used for tracking in OpenPTrack, in order to minimize the size of the detection messages and because lighting can vary widely (it also works in no light). Color images are just used for calibration and for people detection with Kinect v1 or color stereo.
Thanks, Alex~ I will enable "use_rgb" to make sure nothing is hurt. Also, one of my experimental results is that when I turn off the lights, the tracking gets worse. I will experiment more later to find the reason.
Hello, Matteo~ Thanks for answering! Regarding "color information was used to learn person-specific classifiers in my previous work": do you still use RGB information to train person-specific classifiers?
No, tracking in OpenPTrack does not do that, because it would not be clear how to match classifiers trained on cameras with color against those trained on cameras without color images, in the case of networks that also include depth/infrared-only cameras.
Hello! Thanks for all this work; I got here from the Human Tracker package for ROS-Industrial.
I wanted to know if you or anyone else has tried to use OpenPTrack with cameras placed at a low height - say 1 m, or eye height. All of the images and videos I could find (except this) show arrangements with the cameras high on the ceiling or in corners. Maybe all the detection stages are trained with images taken from high positions.
The idea behind my question is to use OpenPTrack on a mobile robot like a Turtlebot.