
Simple tracker example code. #250

nemanja-rakicevic opened this issue 8 years ago

nemanja-rakicevic commented 8 years ago

Hello, has anyone written a simple .cpp example that extracts and prints the controller/HMD pose coordinates in the terminal (ideally on Linux)? I am having trouble understanding which API functions are needed to perform just this task. Thanks!

echuber2 commented 8 years ago

What parts are you confused about? The hellovr example has functions checking the orientations and manipulating the matrices.

nemanja-rakicevic commented 8 years ago

Exactly, but it's ~2000 lines of sparsely documented code. I was wondering whether there are simpler examples that don't involve graphics at all, but just read the position and orientation and print them. Maybe someone has already made this, which would save me time, since I plan to forward the poses to a robot's actuators and display a Kinect feed on the HMD.

P.S. - Is there a good Python wrapper, maybe? I tried https://github.com/cmbruns/pyopenvr but it isn't installing properly.

echuber2 commented 8 years ago

Focus on the functions in hellovr that have "matrix" in the name. I'm interested in writing a more minimal example like you've described, for accessibility purposes, but I doubt I can find the time this week. The mathematics of the orientation matrices has already been discussed here and in other places (the Steam forums, for example).
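That said, an untested skeleton of what you described would be roughly the sketch below. It assumes openvr.h and the library are on your paths and that SteamVR is already running; VRApplication_Background avoids creating any window, and the translation lives in the last column of the 3x4 pose matrix:

```cpp
#include <openvr.h>
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    vr::EVRInitError err = vr::VRInitError_None;
    // Background mode attaches to an already-running SteamVR instance.
    vr::IVRSystem* sys = vr::VR_Init(&err, vr::VRApplication_Background);
    if (err != vr::VRInitError_None) {
        std::printf("VR_Init failed: %s\n",
                    vr::VR_GetVRInitErrorAsEnglishDescription(err));
        return 1;
    }

    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    for (int i = 0; i < 100; ++i) {  // print ~10 s of samples, then exit
        // Poses for every device, relative to the standing universe.
        sys->GetDeviceToAbsoluteTrackingPose(
            vr::TrackingUniverseStanding, 0.f,
            poses, vr::k_unMaxTrackedDeviceCount);

        const vr::TrackedDevicePose_t& hmd = poses[vr::k_unTrackedDeviceIndex_Hmd];
        if (hmd.bPoseIsValid) {
            // The pose is a 3x4 row-major matrix; translation is the last column.
            const vr::HmdMatrix34_t& m = hmd.mDeviceToAbsoluteTracking;
            std::printf("HMD pos: % .3f % .3f % .3f\n",
                        m.m[0][3], m.m[1][3], m.m[2][3]);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    vr::VR_Shutdown();
    return 0;
}
```

The controllers work the same way: loop over all k_unMaxTrackedDeviceCount slots and check GetTrackedDeviceClass(i) == TrackedDeviceClass_Controller to find them.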

nemanja-rakicevic commented 8 years ago

Yes, I'm looking into it at the moment. If you manage to put something like this together I would be extremely grateful (as would many future beginners :) ).

I guess there shouldn't be any problems compiling the .cpp files on Linux, right?

profiler-bg commented 7 years ago

Hi, what exactly do you want to print out? Any example format you want to see? I might be able to do it today or tomorrow.

nemanja-rakicevic commented 7 years ago

Hello, I managed to find a ROS package that publishes the frame poses, https://github.com/robosavvy/vive_ros, and I edited it a bit to publish the button states as well. However, I was not able to trigger the controllers' vibration.
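For reference, the relevant call appears to be vr::IVRSystem::TriggerHapticPulse; below is a sketch of the usual pattern as I understand it (untested; `sys` and `idx` are assumed to be a valid IVRSystem pointer and a controller's device index, and since a single pulse tops out around 3999 µs, a perceptible buzz needs repeated calls spaced roughly 5 ms apart):

```cpp
#include <openvr.h>
#include <chrono>
#include <thread>

// Untested sketch: `sys` must be a valid vr::IVRSystem* and `idx` a
// controller's tracked-device index (both assumed here).
void buzz(vr::IVRSystem* sys, vr::TrackedDeviceIndex_t idx, int repeats = 50) {
    for (int i = 0; i < repeats; ++i) {
        // Axis 0 is commonly used for the Vive wand's haptic actuator; one
        // pulse is very short, so repeat it to get a continuous vibration.
        sys->TriggerHapticPulse(idx, 0, 3999);
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
}
```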

Moreover, the most important piece I'm missing now is how to render projections based on some 3D model; the example seems to do it only through OpenGL. It would be very useful if I could find some documentation on rendering, or a well-commented example.

echuber2 commented 7 years ago

What do you mean by rendering projections, exactly? If not OpenGL, do you mean to use software rendering? If you just need a primer on the math, you can refer to sites like this one: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/

nemanja-rakicevic commented 7 years ago

No, the theory behind it (the geometry/math part) is fine. So, based on the hellovr example, as I understand it: to get the images you see on the HMD, you need to create 3D objects (in this case cubes built from vertices etc., in OpenGL) and then project them based on the HMD's pose.

What I am looking for is a way to import a 3D model, or a 3D point cloud (from a Kinect, a SLAM map, etc.) that is updated regularly (e.g. a Kinect video feed), and project it directly to the HMD. Maybe, as you said, I would need additional software (on Linux). Thanks!

echuber2 commented 7 years ago

The way it works is rather like a conventional game engine with a camera, only in this case there are two cameras (one for each eye), and the projection matrix (based on the viewing frustum) is essentially fixed, governed for you by the OpenVR stack based on the HMD. The hellovr example shows how this is done; it also loads some objects with the CGLRenderModel class, although the usage is not so clear. Basically, you might want to look into using a game engine with OpenVR support for ease of use, if you're not comfortable working at this level.
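To make that concrete, here's roughly the per-eye setup hellovr performs (abbreviated and untested; `sys` is assumed to be a valid vr::IVRSystem*, and the signatures follow the current openvr.h, where GetProjectionMatrix takes just an eye and the clip planes):

```cpp
#include <openvr.h>

// Untested sketch: query the fixed per-eye matrices once at startup.
void setupEyes(vr::IVRSystem* sys) {
    // Per-eye projection, determined by the HMD's optics and your clip planes.
    vr::HmdMatrix44_t projL = sys->GetProjectionMatrix(vr::Eye_Left,  0.1f, 30.0f);
    vr::HmdMatrix44_t projR = sys->GetProjectionMatrix(vr::Eye_Right, 0.1f, 30.0f);

    // Per-eye offset from the head pose (roughly half the IPD, plus any cant).
    vr::HmdMatrix34_t eyeL = sys->GetEyeToHeadTransform(vr::Eye_Left);
    vr::HmdMatrix34_t eyeR = sys->GetEyeToHeadTransform(vr::Eye_Right);

    // Each frame: view = inverse(hmdPose * eyeToHead), render the scene once
    // per eye with proj * view, then hand both textures to the compositor via
    // vr::VRCompositor()->Submit(...).
    (void)projL; (void)projR; (void)eyeL; (void)eyeR;  // stored in a real app
}
```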

If you want to use a seated configuration rather than the room-scale features, look at the function ResetSeatedZeroPose(), and then you can call GetDeviceToAbsoluteTrackingPose passing vr::TrackingUniverseSeated. (See the OpenVR documentation on GitHub.)

The main challenge in this is deciding on your frame of reference. In the seated configuration, the user's head becomes the origin: after ResetSeatedZeroPose() establishes this center, the head is tracked through space from that initial position. So you can put something initially in front of the user's face by placing it at (0, 0, -1), and so forth.
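In code, the flow is roughly this sketch (untested; `sys` is assumed to be a valid vr::IVRSystem*, and in SDKs of this era ResetSeatedZeroPose lives on IVRSystem):

```cpp
#include <openvr.h>

// Untested sketch of the seated-origin flow described above.
void seatedExample(vr::IVRSystem* sys) {
    // Make the user's current head pose the seated origin.
    sys->ResetSeatedZeroPose();

    // From now on, ask for poses relative to the seated universe.
    vr::TrackedDevicePose_t poses[vr::k_unMaxTrackedDeviceCount];
    sys->GetDeviceToAbsoluteTrackingPose(vr::TrackingUniverseSeated, 0.f,
                                         poses, vr::k_unMaxTrackedDeviceCount);

    // OpenVR is right-handed, +Y up, -Z forward, in meters: an object placed
    // at (0, 0, -1) sits one meter in front of the user's initial gaze.
}
```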