strands-project / strands_perception_people

long-term detection, tracking and recognition of people

Adapt Node to custom depth image #176

Open filipetrocadoferreira opened 9 years ago

filipetrocadoferreira commented 9 years ago

I'm producing a depth image from stereo cameras and publishing it as uint16 with distance in mm.

What kind of data is this node expecting? 32F in meters right?

When I try to convert it with: matrix_depth(c, r) = ((double)(imgdepth.at<uint16_t>(r,c)))/1000.0;

I usually get this error : [upper_body_detector-3] process has died [pid 7837, exit code -11, cmd /home/followinspiration/catkin_ws/devel/lib/rwth_upper_body_detector/upper_body_detector name:=upper_body_detector log:=/home/followinspiration/.ros/log/37c20038-6c48-11e5-9658-08606edac62c/upper_body_detector-3.log].
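(For reference, such a mm-to-meters conversion can also be done on the whole image with OpenCV's convertTo; a sketch, with illustrative names, not the exact code used here:)

```cpp
#include <opencv2/core/core.hpp>

// Sketch: convert a CV_16UC1 depth image in millimeters to a CV_32FC1
// image in meters in one call; convertTo applies the scale factor
// (1/1000) while changing the element type.
cv::Mat depth_mm_to_meters(const cv::Mat &depth_mm)
{
    cv::Mat depth_m;
    depth_mm.convertTo(depth_m, CV_32FC1, 1.0 / 1000.0);
    return depth_m;
}
```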

lucasb-eyer commented 9 years ago

Yes, 32FC1 in meters.

Is your switching r,c to c,r intentional?

filipetrocadoferreira commented 9 years ago

I'm just using the same approach as the original:

matrix_depth(c, r) = imgdepth.at<float>(r,c);

lucasb-eyer commented 9 years ago

I guess by "the original" you mean this loop? As you can see, the image dimensions were hard-coded there. Is your stereo image also 640x480? If not, did you adapt the loop conditions?

Anyway, if you can compile in debug mode and run it in GDB, that would give a backtrace of the crash and we could dig deeper. As it is, I can't reproduce the problem, so I can't help more than that.
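(For illustration, an adapted copy loop might look roughly like the sketch below; the names follow the snippets in this thread, and the allocation of matrix_depth to the actual image size is assumed to happen elsewhere.)

```cpp
#include <cstdint>
#include <opencv2/core/core.hpp>

// Sketch only: copy loop driven by the actual image size instead of a
// hard-coded 640x480, converting uint16 millimeters to double meters.
// MatrixT stands in for the detector's Matrix<double>; it must already be
// sized to width x height, otherwise operator() trips its bounds assertion.
template <typename MatrixT>
void fill_depth_matrix(const cv::Mat &imgdepth, MatrixT &matrix_depth)
{
    for (int r = 0; r < imgdepth.rows; ++r) {
        for (int c = 0; c < imgdepth.cols; ++c) {
            matrix_depth(c, r) =
                static_cast<double>(imgdepth.at<uint16_t>(r, c)) / 1000.0;
        }
    }
}
```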

filipetrocadoferreira commented 9 years ago

Yes! It's that loop. I did adapt the loop conditions to info->width and info->height.

I will try to run the code in debug mode and give you more info. Thanks!

filipetrocadoferreira commented 9 years ago

So I checked in debug mode:

upper_body_detector: /home/followinspiration/catkin_ws/src/spencer_people_tracking/detection/rgbd_detectors/rwth_upper_body_detector/include/Matrix.h:307: T& Matrix::operator()(int, int) [with T = double]: Assertion `x < xsize && y < ysize && x >= 0 && y >= 0' failed.

I guess my depth or camera information is not well suited to this node. Or it could be the freespace parameters...

cdondrup commented 9 years ago

Something else we noticed while trying to get the UBD running on a different robot: if the ground plane is not correct, it dies without any intelligible output. If you use the ground plane estimation, this shouldn't be a problem. If the fixed one is used, it has to be transformed according to the tilt angle of the camera. Not sure if this is related, but it caused the UBD to die within seconds as soon as the camera was tilted. I think this is also related to #156.
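(For illustration, a minimal sketch of what "transforming the fixed plane by the camera tilt" can mean, assuming an optical frame with x right, y down, z forward; the sign conventions and the exact plane representation the detector expects are assumptions, not taken from the code.)

```cpp
#include <array>
#include <cmath>

// Sketch: ground-plane normal in the camera's optical frame for a camera
// pitched by tilt_rad about its x axis. An untilted camera sees "up" as
// (0, -1, 0); the pitch rotates that vector in the y-z plane. The sign of
// the z component depends on which direction the tilt is measured in.
std::array<double, 3> tilted_ground_normal(double tilt_rad)
{
    return {0.0, -std::cos(tilt_rad), -std::sin(tilt_rad)};
}

// The plane distance stays the camera mounting height (e.g. 1.4 m in this
// thread): rotating the camera about its own origin does not change its
// distance to the floor.
```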

lucasb-eyer commented 9 years ago

I think the assertion is pretty clear in that there's out-of-bounds access to the matrix somewhere. Now we just need a backtrace to see where that happens.

Also, this may or may not be the cause of the crash.
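(One cheap way to narrow this down before a full backtrace: a sanity check in the depth callback that skips frames whose dimensions do not match the camera info. A sketch; the variable names follow the snippets above and the surrounding callback is assumed.)

```cpp
#include <ros/console.h>
#include <opencv2/core/core.hpp>
#include <sensor_msgs/CameraInfo.h>

// Sketch: bail out instead of writing out of bounds when the depth image
// and the camera info disagree about the image size.
bool dimensions_consistent(const cv::Mat &imgdepth,
                           const sensor_msgs::CameraInfoConstPtr &info)
{
    if (imgdepth.cols != static_cast<int>(info->width) ||
        imgdepth.rows != static_cast<int>(info->height)) {
        ROS_ERROR("Depth image is %dx%d but camera_info reports %ux%u",
                  imgdepth.cols, imgdepth.rows, info->width, info->height);
        return false;  // caller should skip this frame
    }
    return true;
}
```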

filipetrocadoferreira commented 9 years ago

So I've tried changing the values in the .inp config file.

Something like:

freespace_minX = -9;
freespace_minZ = -3;
freespace_maxX = 9;
freespace_maxZ = 9;
freespace_threshold = 500
freespace_max_depth_to_cons = 7

With this, the program does not crash, but it also does not produce any detections. I still need to verify my depth map.

I'm using the really_fixed ground plane at 1.4 m above the floor.

lucasb-eyer commented 9 years ago

Huh, really_fixed GP is something I wrote for the SPENCER project; I don't think it exists here in the STRANDS repos.

I still think investigating the above assertion failure, or attaching GDB and getting a backtrace of the crash, are the only ways forward.

filipetrocadoferreira commented 9 years ago

Oh yeah, sorry about that.

Is there any visual feedback for the projection of the point cloud onto the occupancy map?

filipetrocadoferreira commented 9 years ago

Is there any option to manually set the fixed normal and distance to the floor? The ground plane estimation is not working properly with my depth map produced from the stereo rig.

lucasb-eyer commented 9 years ago

I don't know the answer to either of these, sorry.

Pandoro commented 9 years ago

As far as I remember, the VO used is fovis, which is designed for RGB-D cameras. It might well be that the VO will not work with stereo images.

filipetrocadoferreira commented 9 years ago

Sorry, VO stands for what?

@lucasb-eyer I will try to examine the output of the occupancy map. I feel the key to this problem will be there. Actually, the projection onto floor coordinates is critical in this procedure and not so straightforward for some configurations.

Pandoro commented 9 years ago

VO stands for Visual Odometry, but after thinking about it again, I might have mixed up two things. As far as I remember, the VO is used to track the ground plane over time instead of estimating it in every frame. If that is the case, it could be a source of wrong ground planes; otherwise, ignore my comment.

filipetrocadoferreira commented 9 years ago

Oh, I got it. In my case the point cloud produced from stereo does not have a good representation of the floor plane (it does not appear in most frames).

filipetrocadoferreira commented 9 years ago

So I have started getting detections (still not with the desired accuracy, but good results).

Important things to change when using depth from disparity:
- Pay attention to the camera intrinsics.
- Set freespace_threshold to a lower number (in my case, 150), because depth maps from disparity usually have fewer points than those from RGB-D cameras (see the sketch below).
- Set evaluation_threshold to get the desired trade-off between false positives and false negatives.
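(As a rough way to see how sparse a stereo depth map actually is, and hence how low freespace_threshold may need to go, one can count the usable measurements per frame. A sketch; the validity criterion and function name are assumptions.)

```cpp
#include <cmath>
#include <opencv2/core/core.hpp>

// Sketch: count how many pixels of a CV_32FC1 depth image (in meters)
// carry a usable measurement, as a rough guide when tuning
// freespace_threshold for sparse stereo depth.
int count_valid_depth_pixels(const cv::Mat &depth_m, float max_depth)
{
    int valid = 0;
    for (int r = 0; r < depth_m.rows; ++r) {
        for (int c = 0; c < depth_m.cols; ++c) {
            float d = depth_m.at<float>(r, c);
            if (std::isfinite(d) && d > 0.0f && d < max_depth)
                ++valid;
        }
    }
    return valid;
}
```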

Pandoro commented 9 years ago

A correctly estimated ground plane is crucial to the approach. If you cannot guarantee that, there is no real point in trying to use it. Either you can give a fixed ground plane, based on the robot setup for example, or use the estimation. But without it, it will not work correctly.
