Closed: LorenzBung closed this 4 years ago
I think that your proposed solution looks good. @mpowelson contributed the onReceivedPointCloud
function so he can probably provide more insight here.
It looks good to me. I imagine we did this because we were always converting a point cloud that came from a depth image back to a depth image, so the points were just barely outside the limit due to some roundoff. But throwing them away is safer. I would think that negative values could happen if the camera matrix is slightly wrong (or maybe if you are incorporating point clouds from another camera using this virtual camera). I'd say we could throw those out too.
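For illustration, here's a minimal C++ sketch of the kind of projection loop being discussed; the type and function names are made up and the real onReceivedPointCloud code will differ. The point is that anything projecting outside the image bounds, including negative pixel coordinates, gets skipped instead of clamped to the border:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative types only; the real code uses the project's own point and image types.
struct Point3D { float x, y, z; };

// Projects points into a row-major depth buffer of size width * height,
// discarding anything that falls outside the visible area.
void pointsToDepthImage(const std::vector<Point3D>& points,
                        float fx, float fy, float cx, float cy,
                        int width, int height,
                        std::vector<float>& depth)
{
  depth.assign(static_cast<std::size_t>(width) * height, 0.0f);
  for (const Point3D& pt : points)
  {
    if (pt.z <= 0.0f)
      continue;  // behind the camera or degenerate

    const int pixel_pos_x = static_cast<int>(std::lround(fx * pt.x / pt.z + cx));
    const int pixel_pos_y = static_cast<int>(std::lround(fy * pt.y / pt.z + cy));

    // Instead of clamping to the border, skip out-of-bounds points.
    // Negative values can also show up here if the camera matrix is slightly off.
    if (pixel_pos_x < 0 || pixel_pos_x >= width || pixel_pos_y < 0 || pixel_pos_y >= height)
      continue;

    depth[static_cast<std::size_t>(pixel_pos_y) * width + pixel_pos_x] = pt.z;
  }
}
```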
I thought about that; maybe it'd be helpful to emit a warning when we throw out points with negative values. The user might be unaware that their camera matrix is incorrect.
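As a rough sketch of what that warning could look like, assuming roscpp's logging macros are available in this node (the helper name is hypothetical):

```cpp
#include <ros/console.h>

// Hypothetical helper: returns true if the projected pixel is usable, and warns
// once when negative coordinates suggest the camera matrix may be wrong.
inline bool pixelInBounds(int pixel_pos_x, int pixel_pos_y, int width, int height)
{
  if (pixel_pos_x < 0 || pixel_pos_y < 0)
  {
    ROS_WARN_ONCE("Projected point has negative pixel coordinates (%d, %d); "
                  "the camera matrix may be incorrect.",
                  pixel_pos_x, pixel_pos_y);
    return false;
  }
  return pixel_pos_x < width && pixel_pos_y < height;
}
```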
That sounds fine to me.
@LorenzBung Let us know when this is ready to merge
@schornakj Should be ready, if nobody has any complaints?
@schornakj I see you thumbs upped this. I do not have merge rights, so you'll have to merge this.
Sorry, thought I'd already merged it!
Instead of shifting the points outside the visible area to the borders, shouldn't we ignore those points and continue in the loop? Also, is it possible that pixel_pos_x and pixel_pos_y get negative values (e.g. a wrong camera matrix)? If so, maybe catching this error and notifying the user would be useful. I'm sorry if I just misunderstood the code.