robosavvy / vive_ros

ROS package for publishing HTC VIVE device locations.
BSD 3-Clause "New" or "Revised" License

button state #2

Closed nemanja-rakicevic closed 5 years ago

nemanja-rakicevic commented 8 years ago

Hi, I was wondering if you know how it would be possible to get the states of the trackpad and the grip button, for example? Thanks

vmatos commented 8 years ago

Yes it is possible. Look at the sample: https://github.com/ChristophHaag/openvr/blob/master/samples/hellovr_opengl/hellovr_opengl_main.cpp

It's this piece here, on bool CMainApplication::HandleInput():

    // Process SteamVR events
    vr::VREvent_t event;
    while( m_pHMD->PollNextEvent( &event, sizeof( event ) ) )
    {
        ProcessVREvent( event );
    }

    // Process SteamVR controller state
    for( vr::TrackedDeviceIndex_t unDevice = 0; unDevice < vr::k_unMaxTrackedDeviceCount; unDevice++ )
    {
        vr::VRControllerState_t state;
        if( m_pHMD->GetControllerState( unDevice, &state ) )
        {
            m_rbShowTrackedDevice[ unDevice ] = state.ulButtonPressed == 0;
        }
    }

nemanja-rakicevic commented 8 years ago

Thanks, I managed to solve this. I don't know if this is the right place to ask, but do you know which functions or external library I should use to generate the two images on the HMD from a 3D environment obtained from, e.g., a Kinect point cloud? If you know of any code which implements this, it would be a great help.

vmatos commented 8 years ago

Do you want to visualize the rendering from RViz on the HMD? If yes, look at the example: https://github.com/ChristophHaag/openvr/blob/master/samples/hellovr_opengl and the Oculus RViz plugins package: http://wiki.ros.org/oculus_rviz_plugins. From these two you can maybe figure out how to take the RViz rendering and display it on the HMD.

If you don't want to visualize RVIZ, just follow the example: https://github.com/ChristophHaag/openvr/blob/master/samples/hellovr_opengl , and maybe populate the environment with your points?

nemanja-rakicevic commented 8 years ago

So, I have an HTC Vive and a Kinect. The Kinect produces a sensor_msgs/PointCloud2 ROS message, and I'd like to integrate this with the hellovr_opengl example. The difference is that in hellovr_opengl the cubes in the virtual environment are generated through OpenGL vertices and points. I am looking for some kind of a bridge which would convert the PointCloud2 information into something manageable by OpenVR.