mattfelsen opened this issue 7 years ago
hello!!
I haven't tried RoomAlive, although I can guess the general jazz of it from the videos I saw a while back.
how did you get Rulr to compile? did you start with one of the SDK downloads?
`ofxRulr::Camera::beginAsCamera()` means 'look through the camera's view', i.e. as if the `Camera` object were an `ofCamera`.
the view transform corresponds to the rigid body transform between the camera's frame of reference and the world frame of reference. the projection transform is the perspective transform only (it incorporates field of view, lens offset and aspect ratio).
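to make that concrete, here's a rough sketch with glm (this isn't Rulr's actual code, and the numbers are made up):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// view: the rigid body transform taking world space into camera space
// (rotation + translation only)
glm::mat4 view = glm::lookAt(
    glm::vec3(0.0f, 1.5f, 3.0f),   // camera position in the world
    glm::vec3(0.0f, 0.0f, 0.0f),   // look-at target
    glm::vec3(0.0f, 1.0f, 0.0f));  // up vector

// projection: the perspective transform only
// (field of view, aspect ratio; lens offset would also live here)
glm::mat4 projection = glm::perspective(
    glm::radians(60.0f),   // vertical field of view
    1280.0f / 720.0f,      // aspect ratio
    0.1f, 100.0f);         // near / far clip planes

// a world space point reaches clip space through both, in order
glm::vec4 clipPoint = projection * view * glm::vec4(0.5f, 0.0f, 0.0f, 1.0f);
```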
in opengl, commonly the model and view matrices are stuffed together as the modelview matrix. personally I prefer to keep model and view separate (this is common in most scene graphs, and in DX).
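i.e. the two styles compose the same way, it's just a question of where you draw the line (sketch):

```cpp
#include <glm/glm.hpp>

glm::mat4 model(1.0f);       // object space -> world space (per object)
glm::mat4 view(1.0f);        // world space  -> camera space (per camera)
glm::mat4 projection(1.0f);  // camera space -> clip space

// classic opengl fuses the first two into one 'modelview' matrix...
glm::mat4 modelView = view * model;

// ...keeping them separate makes the full chain explicit
glm::mat4 mvp = projection * view * model;
```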
the kinect's color<>depth is handled by the kinect sdk. the fastest method is using the depth-to-color table and loading it into a shader. in Rulr itself we just use the CPU method (which is still pretty fast).
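for reference, the CPU route through the Kinect v2 SDK looks roughly like this (a sketch, error handling omitted):

```cpp
#include <Kinect.h>
#include <vector>

// map every depth pixel to its coordinate in the color image,
// using the sensor's built-in mapper (the CPU method)
void mapDepthToColor(IKinectSensor* sensor,
                     const std::vector<UINT16>& depthFrame,
                     std::vector<ColorSpacePoint>& colorPoints) {
    ICoordinateMapper* mapper = nullptr;
    sensor->get_CoordinateMapper(&mapper);

    colorPoints.resize(depthFrame.size()); // one entry per depth pixel

    // fills colorPoints with (x, y) positions in the color image
    mapper->MapDepthFrameToColorSpace(
        static_cast<UINT>(depthFrame.size()), depthFrame.data(),
        static_cast<UINT>(colorPoints.size()), colorPoints.data());
}
```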
in Rulr, the calibration is between the kinect's WORLD coordinates and the projector's PIXEL coordinates. i.e. kinect<>world is handled by the kinect SDK, and world<>projector pixels is handled by Rulr.
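so once you have that calibration, taking a kinect world point into projector pixels is the usual transform chain (sketch with glm; the function name is made up, and view/projection stand in for the calibrated projector transforms):

```cpp
#include <glm/glm.hpp>

// project a world space point (from the kinect) into projector pixels,
// given the projector's calibrated view and projection transforms
glm::vec2 worldToProjectorPixel(const glm::vec3& worldPoint,
                                const glm::mat4& view,
                                const glm::mat4& projection,
                                float projectorWidth,
                                float projectorHeight) {
    glm::vec4 clip = projection * view * glm::vec4(worldPoint, 1.0f);
    glm::vec3 ndc = glm::vec3(clip) / clip.w; // perspective divide

    // normalised device coordinates [-1, 1] -> pixel coordinates
    return glm::vec2((ndc.x * 0.5f + 0.5f) * projectorWidth,
                     (1.0f - (ndc.y * 0.5f + 0.5f)) * projectorHeight);
}
```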
happy to discuss more
Thanks! That seems to make sense more or less, but I guess the part that I'm still hung up on is that Rulr's calibration seems to include...
Whereas what RoomAlive spits out includes:
So I don't understand why Rulr's calibration is 2 transforms while RoomAlive's (projector info) is 1 transform plus a camera matrix? I would think that since both systems allow you to render your scene from the projector's point of view, they must use/need the same information to set up the view correctly? I'm interested in using RoomAlive's calibration output since it's a relatively straightforward way of calibrating multiple Kinects & multiple projectors, which I'm not sure you can do in Rulr?
so all the kinect info is available from the SDK directly, i.e. Rulr doesn't try to calibrate this. we just need the 'projector info'.
you can think of the view transform as equivalent to the pose/transform, and the projection transform as equivalent to the camera matrix.
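concretely, since the pose is a rigid body transform placing the projector in the world, the view matrix is just its inverse (sketch; `pose` here is hypothetical):

```cpp
#include <glm/glm.hpp>

// pose: projector space -> world space (where the projector sits)
// view: world space -> projector space (what you render with)
glm::mat4 poseToView(const glm::mat4& pose) {
    return glm::inverse(pose);
}
```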
Notes:
- (generally the calibration produces the camera matrix, and then you convert this into a projection matrix for use in live graphics; see the sketch after these notes)
- (again, the view matrix is what you actually need in a live rendering pipeline)
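for completeness, the camera-matrix-to-projection-matrix conversion from the first note looks something like the sketch below. it assumes OpenCV-style intrinsics (fx, fy, cx, cy, origin at the top-left of the image) and a view matrix that keeps the camera convention of z forward / y down; conventions differ between libraries, so double-check the signs against your own setup:

```cpp
#include <glm/glm.hpp>

// build an opengl-style projection matrix from a pinhole camera matrix
//   K = [ fx  0  cx ]
//       [  0 fy  cy ]
//       [  0  0   1 ]
// for an image of size w x h, with the given near/far clip planes
glm::mat4 projectionFromCameraMatrix(float fx, float fy,
                                     float cx, float cy,
                                     float w, float h,
                                     float nearZ, float farZ) {
    glm::mat4 p(0.0f); // glm is column-major: p[column][row]

    p[0][0] = 2.0f * fx / w;          // focal length -> field of view
    p[1][1] = -2.0f * fy / h;         // negative: image y runs downwards
    p[2][0] = 2.0f * cx / w - 1.0f;   // principal point -> lens offset
    p[2][1] = 1.0f - 2.0f * cy / h;
    p[2][2] = (farZ + nearZ) / (farZ - nearZ);        // depth remapping
    p[3][2] = -2.0f * farZ * nearZ / (farZ - nearZ);
    p[2][3] = 1.0f; // clip w = +z, since z points forwards here

    return p;
}
```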
Hi again. Is it possible to calibrate multiple Kinects and projectors in Rulr, as you can do in RoomAlive?
Related, can you shed some light (heh) on what is calculated as part of the calibration in Rulr? From looking at what gets used in `ofxRulr::Camera::beginAsCamera()`, it looks like there are 2 transformation matrices: view & projection. Does one of these correspond to the kinect's color ⟷ depth transform, and the other to depth ⟷ projector? Shouldn't there also be a camera matrix in there (for the projector's view, I think) as well? I'm looking at what's in the RoomAlive calibration and trying to make sense of it all; it seems like each calibration should contain the same type of data, but it's not clear to me. Thanks!