Fantastic. Works like a charm in all the modes I tried. So the trick was in treating the points and colours row by row rather than individually?
The SDK provides a function to map color data onto the depth image. The output of this function is a color image where each pixel is supposed to correspond to a pixel in the depth image. My original implementation just copied these color pixels into the output cloud one after another, effectively treating the image as a flat array. It turns out that, depending on the particular combination of resolutions used, the rows in the color image may be padded with additional bytes. And when you treat pitched memory as a flat array... stripes and banding are what you get.
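Roughly, the row-by-row copy looks like this (a sketch only; `copyColors`, `color_data`, and `pitch` are hypothetical stand-ins for the grabber's actual names, and a 24-bit BGR color format is assumed):

```cpp
#include <cstdint>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Sketch of row-by-row copying. `pitch` is the row stride in bytes, which
// may be larger than width * 3 due to padding; indexing the buffer as a
// flat array would read those padding bytes and shift every following row,
// which is what produced the color banding.
void
copyColors (const std::uint8_t* color_data, int width, int height, int pitch,
            pcl::PointCloud<pcl::PointXYZRGBA>& cloud)
{
  for (int y = 0; y < height; ++y)
  {
    const std::uint8_t* row = color_data + y * pitch;  // pitch >= width * 3
    for (int x = 0; x < width; ++x)
    {
      pcl::PointXYZRGBA& pt = cloud[y * width + x];
      pt.b = row[x * 3 + 0];  // BGR byte order assumed here
      pt.g = row[x * 3 + 1];
      pt.r = row[x * 3 + 2];
    }
  }
}
```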
Okay, I'll be merging this into master soon. Any other issues? The clouds are mirrored, aren't they? Should we unmirror them somehow?
Ok, cool. I would never have got there myself. I've been letting it run for the last couple of hours and it's insanely stable, so I think it's ready to be merged. I also think we should unmirror. Although it may be because I was messing with different ways of capturing/creating modules, each RSSDK module needs to be mirrored individually, so I had a rather fun moment earlier where my facial landmarks were mirrored compared to the point cloud. The mirroring, I assume, comes from ..._grabber.cpp, immediately after setting the StreamProfileSet, where an explicit mirror is applied to the device.
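For reference, the call in question is presumably something like this (RSSDK's PXC API; the helper and variable names are guesses, not the grabber's actual code):

```cpp
#include <pxccapture.h>

// RSSDK mirroring is applied at the device level, so it affects every
// stream and module served by that device; each module instance therefore
// has to agree on the same mirror mode.
void
setMirroring (PXCCapture::Device* device, bool mirror)
{
  device->SetMirrorMode (mirror
                           ? PXCCapture::Device::MIRROR_MODE_HORIZONTAL
                           : PXCCapture::Device::MIRROR_MODE_DISABLED);
}
```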
Fix is in master.
As for the line you are referring to, I accidentally committed it. I was experimenting with setting this mode on and off, but the results seemed to be the same. I'll need to do more tests though.
@sebandraos So that command indeed had no effect on the output, and I got rid of it. Instead, I now manually mirror all points horizontally. Could you try the latest code and tell me how it works for you? (Also, I changed the default viewpoint in the viewer window; now it should be easier to make sense of the data.)
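Conceptually, the un-mirroring is just this (a minimal sketch, not the exact grabber code; it assumes the usual camera convention with x pointing right):

```cpp
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Flip the cloud about the vertical axis by negating every x coordinate.
// The organized row-major pixel layout is left untouched; only the 3D
// coordinates change.
void
unmirror (pcl::PointCloud<pcl::PointXYZRGBA>& cloud)
{
  for (auto& pt : cloud)
    pt.x = -pt.x;
}
```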
Hectic week last week, sorry. This has always been an "issue" for me: the viewport camera seems to start behind the point cloud, so things that are closer to the physical camera are further from the viewport camera. In this latest version that is still the case, but now when I rotate to get the physical camera's POV, things are mirrored in such a way that the camera works like a mirror, i.e. text is back to front and my left hand is on the left-hand side of the screen. I'm not sure if this is the intended behaviour, but personally I prefer an accurate POV.
That's weird. I added un-mirroring because I was getting a mirrored image by default. (I have tested it only with the R200 though.) Also, it's strange that you are still getting the view from behind. Can you press 'c' in the visualizer window and check what camera parameters are printed in the terminal?
On launch my camera parameters are:

```
Clipping plane [near,far]   1.89904, 3.37194
Focal point [x,y,z]         0.041238, -0.0154794, 0.4615
Position [x,y,z]            0.041238, -0.0154794, 3.00989
View up [x,y,z]             0, 1, 0
Camera view angle [degrees] 30
Window size [x,y]           768, 432
Window position [x,y]       0, 0
```

Those look right to me, i.e. the camera is in front (Z+) looking back towards the origin, but I'm definitely getting a concave face from that point of view.
PS: because I can't send a private message on GitHub anymore - I gave you a little shout-out at a talk I gave at TU Wien recently (the video just went online): https://goo.gl/photos/eoRm4yZeQhtjG2366
You are probably not using the latest code. I explicitly set the camera parameters to [0, 0, 0], [0, 0, 1], [0, 1, 0], and this gives a nice direct view onto the scene. Together with the 45 degree view angle, it effectively makes the point cloud appear as a flat color image. Then when you start to move the mouse, you can perceive the depth.
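In PCLVisualizer terms that corresponds to something like the following (a sketch assuming the three triples are camera position, focal point, and view-up, and that your PCL version has `setCameraFieldOfView`):

```cpp
#include <cmath>
#include <pcl/visualization/pcl_visualizer.h>

int
main ()
{
  // Camera at the origin looking down +z with y up: the sensor's own POV,
  // which makes the cloud first appear like a flat color image.
  pcl::visualization::PCLVisualizer viewer ("RealSense");
  viewer.setCameraPosition (0, 0, 0,   // camera position
                            0, 0, 1,   // focal point
                            0, 1, 0);  // view-up direction
  viewer.setCameraFieldOfView (45.0 * M_PI / 180.0);  // 45 deg vertical FOV
  viewer.spin ();
  return 0;
}
```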
Regarding the talk: when, where? Did not hear about it!
I was building (a possibly outdated version of) the fix-color branch, switched over to master and you're absolutely right, convex and definitely not mirrored. Super.
It was part of a conference called eCAADe (I forgot you were in Vienna, sorry; I'd have got in touch if I'd remembered). The presentation was about vocal control for real-time robotic collaboration, and I was using PCL with the RS grabber to do real-time object recognition with the camera mounted on an industrial arm's end effector. It's only a short talk, but if it's of interest it's up here.
Summary (from issue #3): With the R200 camera there appears to be banding of colours when working with colour and depth streams, as demonstrated below. This issue seems to be intrinsically linked to the camera itself and does not exist when using dissimilarly sized streams on the F200 camera. The colours represented seem correct (tested by changing the primary colour in front of the device), but their order appears to be wrong.