Closed kellenberger closed 3 years ago
Hello Kellen,
Were you able to find the transformation between the RigNode, world coordinates, and the camera? I was able to get the extrinsics of the IMU with respect to the RigNode, but I don't know the transformation between the PV camera and the RigNode, and since I currently don't have access to the device, I can't deploy my code, which would give me the RigNode extrinsics matrix as shown below:
```cpp
// Get the camera sensor object and dump its extrinsics (relative to the RigNode).
{
    // The mutex name was omitted in the original snippet; any guard
    // protecting access to the sensor object works here.
    std::lock_guard<std::mutex> guard(m_sensorMutex);

    IResearchModeCameraSensor* pCameraSensor = nullptr;
    HRESULT hr = m_pCameraSensor->QueryInterface(IID_PPV_ARGS(&pCameraSensor));
    winrt::check_hresult(hr);

    wchar_t outputExtrinsicsPath2[MAX_PATH] = {};
    swprintf_s(outputExtrinsicsPath2, L"%s\\%s_pvcamextrinsics.txt",
               m_storageFolder.Path().data(), m_pCameraSensor->GetFriendlyName());
    std::ofstream fileExtrinsics2(outputExtrinsicsPath2);

    DirectX::XMFLOAT4X4 cam_extrinsics;
    pCameraSensor->GetCameraExtrinsicsMatrix(&cam_extrinsics);

    // Note: the matrix is written out in column-major order (m[row][col],
    // iterated column-first).
    fileExtrinsics2 << cam_extrinsics.m[0][0] << "," << cam_extrinsics.m[1][0] << "," << cam_extrinsics.m[2][0] << "," << cam_extrinsics.m[3][0] << ","
                    << cam_extrinsics.m[0][1] << "," << cam_extrinsics.m[1][1] << "," << cam_extrinsics.m[2][1] << "," << cam_extrinsics.m[3][1] << ","
                    << cam_extrinsics.m[0][2] << "," << cam_extrinsics.m[1][2] << "," << cam_extrinsics.m[2][2] << "," << cam_extrinsics.m[3][2] << ","
                    << cam_extrinsics.m[0][3] << "," << cam_extrinsics.m[1][3] << "," << cam_extrinsics.m[2][3] << "," << cam_extrinsics.m[3][3] << "\n";
    fileExtrinsics2.close();
    pCameraSensor->Release();
}
```
Since this is a static transformation, storing it once would be enough to reason about the world coordinates. I was wondering whether you have already calculated this static transformation?
Best regards
Yes, I managed to do it without calculating it outside of the application. You can actually use the SpatialLocator that is used to locate the RigNode in the world, and use it to locate the coordinate system you get from the RGB camera frame directly, without going via world coordinates. Good luck!
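In code, the approach described above might look roughly like the following. This is a device-only sketch, not tested here; the rig node GUID, the frame timestamp, and the camera frame's coordinate system are placeholders that come from the ResearchMode sensor device and the RGB `MediaFrameReference` in a StreamRecorder-style setup:

```cpp
#include <winrt/Windows.Perception.h>
#include <winrt/Windows.Perception.Spatial.h>
#include <winrt/Windows.Perception.Spatial.Preview.h>

using namespace winrt::Windows::Perception;
using namespace winrt::Windows::Perception::Spatial;

// rigNodeGuid: the rig node id obtained from the ResearchMode sensor device,
// as in the StreamRecorder sample.
void LocateRigInCameraFrame(winrt::guid const& rigNodeGuid,
                            PerceptionTimestamp const& frameTimestamp,
                            SpatialCoordinateSystem const& cameraCoordinateSystem)
{
    SpatialLocator locator =
        Preview::SpatialGraphInteropPreview::CreateLocatorForNode(rigNodeGuid);

    // Locate the RigNode directly in the RGB frame's coordinate system --
    // no detour through a world / stationary frame of reference.
    SpatialLocation rigInCameraFrame =
        locator.TryLocateAtTimestamp(frameTimestamp, cameraCoordinateSystem);

    if (rigInCameraFrame != nullptr)
    {
        auto position = rigInCameraFrame.Position();       // float3
        auto orientation = rigInCameraFrame.Orientation(); // quaternion
        // Build the RigNode -> camera-frame transform from these and cache
        // it, since the two frames are rigidly attached.
        (void)position; (void)orientation;
    }
}
```

`TryLocateAtTimestamp` returns `nullptr` while tracking is unavailable, so the result must be checked before use.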
Hi everybody!
For my application I need to get the transformation matrices from the IMU sensors to the RGB Camera. I have looked at the way this is done in the StreamRecorder project and it seems that I would have to go via IMU -> RigNode -> World -> Camera. Now I have two questions:
1. I have read that one can use the `SpatialLocator` for the RigNode to locate the coordinate system of a camera frame at a certain timestamp, which should (in theory) be more precise than going via the world coordinates. My application works in real time, so this would also be my preferred way, since I would (hopefully) only have to calculate this transformation once and then store it (the RigNode and the camera have a fixed position relative to each other). I have tried and failed to implement this, which leads me to my second question:
2. I create the `SpatialLocator` the same way as in the StreamRecorder sample, with `m_locator = winrt::Windows::Perception::Spatial::Preview::SpatialGraphInteropPreview::CreateLocatorForNode(guid);`. However, this locator never changes its locatability to `PositionalTrackingActive` and stays `Unavailable`. The same also occurs if I initialize the locator with the default one, `m_locator = winrt::Windows::Perception::Spatial::SpatialLocator::GetDefault();`. I have enabled the SpatialPerception capability for my project, but that is all I have done for the spatial perception. Am I missing something important?

Any help would be greatly appreciated.
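For reference, enabling that capability amounts to an entry in `Package.appxmanifest` like the fragment below. The ResearchMode samples additionally declare the restricted `perceptionSensorsExperimental` capability for sensor access; both are shown here as a sketch of the usual setup:

```xml
<!-- Package.appxmanifest (fragment). Namespace declarations on <Package>:
     xmlns:uap2="http://schemas.microsoft.com/appx/manifest/uap/windows10/2"
     xmlns:rescap="http://schemas.microsoft.com/appx/manifest/foundation/windows10/restrictedcapabilities" -->
<Capabilities>
  <uap2:Capability Name="spatialPerception" />
  <!-- Required for ResearchMode sensor access in the samples -->
  <rescap:Capability Name="perceptionSensorsExperimental" />
</Capabilities>
```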