Hello, I would like to thank you for your amazing work! I would like to know if it is possible to get, in Unity, the positions of the spheres returned by the sensors. If yes, are there any modifications I need to make to the C++ library, such as a new function that returns those raw values?
Hi @Metacadx!
Could you clarify: do you want to A) Get the position of spheres attached to a tool? B) Or just individual infrared reflective spheres in the scene, even if they're not attached to a tool?
If it's the first one, then it should be completely possible, but you'll need to write some extra code on the DLL side.
In DINO-DLL, each tracked tool is defined with the following `TrackedTool` struct:
```cpp
struct TrackedTool
{
    uint8_t ID = 255;                                   /*!< 8-bit tool ID, should be uniquely assigned */
    bool VisibleToHoloLens = false;                     /*!< Flag whether tool visible in last seen frame */
    std::vector<Eigen::Vector3d> GeometryPoints;        /*!< Known coordinates of tool - right-handed marker positions (from CAD/config files) */
    std::vector<Eigen::Vector3d> ObservedPoints_World;  /*!< Marker positions in world frame (defined at startup pose) */
    std::vector<Eigen::Vector3d> ObservedPoints_Depth;  /*!< Observed tool marker points in depth-sensor frame (same order as GeometryPoints) */
    Eigen::Matrix4d PoseMatrix_HoloWorld;               /*!< 4x4 transform matrix of tool pose in world frame */
    Eigen::Matrix4d PoseMatrix_DepthCamera;             /*!< 4x4 transform matrix of tool pose w.r.t. depth-sensor frame */
    std::vector<cv::Point2i> ObservedImgKeypoints;      /*!< Image coordinates of marker centres for labelling (same order as GeometryPoints) */
};
```
The field you're probably interested in is `ObservedPoints_World`.
At the moment, a double array is passed from DINO-DLL to DINO-Unity to describe the pose/visibility of individual tools. The functions relevant for this are `TryUpdatingToolDictionary()`, `SerializeToolDictionary()`, and `GetTrackedToolsPoseMatrices()`.
What I'd suggest is that you review the functions above, and then create your own versions of these functions to expose the `ObservedPoints_World` field of each tool.
`GetTrackedToolsPoseMatrices()` is the function that makes the data accessible to Unity, and the other two functions prepare the data.
So in your new version of `SerializeToolDictionary`, you would want to dump the marker locations of each tool into a double array *instead* of the `PoseMatrix_HoloWorld` field, as is currently done.
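A rough sketch of what that could look like, assuming a container named `g_TrackedTools` (that name and the flat `[ID, markerCount, x, y, z, ...]` layout are mine for illustration, not the actual DLL code):

```cpp
// Rough sketch only: flatten each tool's ObservedPoints_World into a double
// array with layout [ID, markerCount, x0, y0, z0, x1, y1, z1, ...].
// Uses the TrackedTool struct shown above; g_TrackedTools is an assumed
// name for whatever container the DLL keeps its tools in.
#include <cstdint>
#include <vector>
#include <Eigen/Dense>

extern std::vector<TrackedTool> g_TrackedTools;

std::vector<double> SerializeToolMarkerPositions()
{
    std::vector<double> buffer;
    for (const TrackedTool& tool : g_TrackedTools)
    {
        buffer.push_back(static_cast<double>(tool.ID));
        buffer.push_back(static_cast<double>(tool.ObservedPoints_World.size()));
        for (const Eigen::Vector3d& p : tool.ObservedPoints_World)
        {
            buffer.push_back(p.x());
            buffer.push_back(p.y());
            buffer.push_back(p.z());
        }
    }
    return buffer;
}
```

You'd then expose the buffer through an exported function analogous to `GetTrackedToolsPoseMatrices()` and marshal it on the Unity side in the same way the pose matrices are currently handled.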
Hello,
Thank you for your help! I managed to access the data using `collectedPoints` within the `TryUpdatingToolDictionary` function.
I'm currently focusing on a specific tool and encountering an issue: at certain angles, the tool's transform becomes noisy and inaccurate. My idea is to process each point individually and construct a plane from them, then use the normal of this plane to represent the orientation of the needle, as shown in this picture.
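For reference, the plane fit I have in mind is something like this (an Eigen-based sketch; the function name is mine):

```cpp
// Sketch: fit a plane to the observed marker positions and return its unit
// normal (the singular vector of the mean-centred points with the smallest
// singular value). Assumes at least three non-collinear points.
#include <vector>
#include <Eigen/Dense>

Eigen::Vector3d FitPlaneNormal(const std::vector<Eigen::Vector3d>& points)
{
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const Eigen::Vector3d& p : points) mean += p;
    mean /= static_cast<double>(points.size());

    Eigen::MatrixXd centred(points.size(), 3);
    for (size_t i = 0; i < points.size(); ++i)
        centred.row(i) = (points[i] - mean).transpose();

    Eigen::JacobiSVD<Eigen::MatrixXd> svd(centred, Eigen::ComputeFullV);
    return svd.matrixV().col(2); // direction of least variance = plane normal
}
```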
Do you think this would be a good approach, or do you have any other suggestions?
@Metacadx Are you 3D printing the tool your spherical markers are attached to? My guess is that the inter-marker distances might be a bit too similar, and as the markers are quite close together, it would make sense that the readings become quite noisy at certain angles.
I'm definitely not an expert on tool design for these applications, but I would suggest creating a slightly larger tool and ensuring that, if you build a distance matrix of the inter-marker distances, each entry is unique and not too similar to any other entry.
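As a quick sanity check on a candidate design, something along these lines could work (a sketch; units are assumed to be millimetres, and the 3 mm threshold is just an example value):

```cpp
// Sketch: verify that all pairwise inter-marker distances are well separated,
// so marker correspondence stays unambiguous at awkward viewing angles.
#include <cmath>
#include <vector>
#include <Eigen/Dense>

bool HasWellSeparatedDistances(const std::vector<Eigen::Vector3d>& markers,
                               double minSeparation = 3.0)
{
    std::vector<double> dists;
    for (size_t i = 0; i < markers.size(); ++i)
        for (size_t j = i + 1; j < markers.size(); ++j)
            dists.push_back((markers[i] - markers[j]).norm());

    for (size_t a = 0; a < dists.size(); ++a)
        for (size_t b = a + 1; b < dists.size(); ++b)
            if (std::abs(dists[a] - dists[b]) < minSeparation)
                return false; // two edges too alike -> ambiguous matching
    return true;
}
```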
Unfortunately, I can't think of a simple software-only solution, as I'm guessing most of your challenges relate to noisy data at particular angles. So if re-designing your tool is an option, that's certainly what I'd suggest. Hope that helps!
P.S. As you're using spheres, hopefully you've come across how to configure your setup for spherical marker tracking; if not, check the linked issue.
Yes, I am 3D printing the tool, and I can create a larger version with greater distances between the markers. It might sound strange, but I'm getting better results without using the spherical marker configuration. With the spherical configuration enabled, I'm facing a significant position offset; without it, the model aligns perfectly, but the orientation is still an issue.
Anyway, thank you for your help! One last thing before I go: Microsoft warns that using Research Mode can cause performance issues. I've noticed that holograms are not stable and tend to shift slightly when I move, even though the application runs at 60 FPS. Have you encountered any problems like this?
That is certainly strange regarding your first point. It could be a bug in the code, but I guess it might depend on your config file as well. Thanks for the feedback in any case!
For the instability, bear in mind that we're using raw sensor data to compute the pose on the DLL side, and there's no filtering being done. As we use the rig transform (the HoloLens's current pose in the world frame) to compute the final tool pose in the HoloLens's world frame, any time you move your head you alter the rig transform, and this introduces some noise into the estimated tool pose as well. However, it shouldn't be massive, and anecdotally I would expect the hologram/model to still align or settle at the same spot.
You'll probably get a smoother pose if you write some filtering code on the Unity or DLL side. The simplest thing would be a moving-average filter (to smooth the position and orientation of the virtual model) to try and minimise some of the jitter.
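As a starting point, the same idea on the DLL side could look like this basic fixed-window filter (a sketch; the class name is mine, and note that orientation really needs quaternion slerp/averaging rather than a per-component mean):

```cpp
// Sketch of a fixed-window moving average for positions.
#include <cstddef>
#include <deque>
#include <Eigen/Dense>

class MovingAverageFilter
{
public:
    explicit MovingAverageFilter(std::size_t window) : window_(window) {}

    Eigen::Vector3d Update(const Eigen::Vector3d& sample)
    {
        samples_.push_back(sample);
        if (samples_.size() > window_)
            samples_.pop_front();

        Eigen::Vector3d sum = Eigen::Vector3d::Zero();
        for (const Eigen::Vector3d& s : samples_)
            sum += s;
        return sum / static_cast<double>(samples_.size());
    }

private:
    std::size_t window_;
    std::deque<Eigen::Vector3d> samples_;
};
```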
You could check out how this other project by Andreas uses a series of Kalman filters per tool for inspiration.
Actually, I am not talking about the hologram of the tracked tool, but about any other hologram in the scene that is supposed to be static.
For the accuracy problem, is there any camera calibration we need to perform?
> Actually, I am not talking about the hologram of the tracked tool, but about any other hologram in the scene that is supposed to be static.
Sorry, I hadn't seen this behaviour before, so I'm guessing this would be any other GameObject in Unity, unrelated to `UnityToolManager.cs`. Out of interest, could you confirm which version of Unity you're using?
> For the accuracy problem, is there any camera calibration we need to perform?
On the DLL side, at the moment, we're using the Research Mode API to convert from pixel coordinates to a 3D location on a unit plane. You can see the relevant sections of code:
In this section of code, you can see how we define a lambda function to interact with the Research Mode API.
In this link we call the lambda.
If you were to do a camera calibration (I'm guessing you mean estimating intrinsic parameters, from your phrasing?), you'd replace this, using your calibration parameters instead of the call to `MapImagePointToCameraUnitPlane`. Whether this actually improves accuracy will depend on the quality of your calibration.
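For illustration, a calibrated replacement could look something like this (an OpenCV-based sketch; the intrinsic and distortion values are placeholders for your own calibration results):

```cpp
// Sketch: map a pixel to the z = 1 camera unit plane with your own calibrated
// intrinsics instead of MapImagePointToCameraUnitPlane.
#include <vector>
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

cv::Point2f PixelToUnitPlane(const cv::Point2f& pixel)
{
    // 3x3 camera matrix [fx 0 cx; 0 fy cy; 0 0 1] -- placeholder values.
    const cv::Mat K = (cv::Mat_<double>(3, 3) << 365.0,   0.0, 256.0,
                                                   0.0, 365.0, 256.0,
                                                   0.0,   0.0,   1.0);
    const cv::Mat dist = cv::Mat::zeros(1, 5, CV_64F); // placeholder: no distortion

    std::vector<cv::Point2f> src{pixel}, dst;
    cv::undistortPoints(src, dst, K, dist); // outputs normalised coordinates
    return dst[0];
}
```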
Thank you for your response! I've been testing with newly printed models since yesterday, and the tracking has improved. However, I'm still encountering the same issue when the object moves further away from the camera. Is there a quick way in C++ to set a distance limit to stop tracking the object and invalidate the blob once it exceeds a certain distance from the camera?
I am using Unity 2021.3.18f1.
> Thank you for your response! I've been testing with newly printed models since yesterday, and the tracking has improved. However, I'm still encountering the same issue when the object moves further away from the camera. Is there a quick way in C++ to set a distance limit to stop tracking the object and invalidate the blob once it exceeds a certain distance from the camera?
Nothing quick that I can think of, but you should certainly be able to write that logic in yourself once you've computed the pose of a tool, or perhaps in the loop where you convert pixel locations to 3D vectors.
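For example, something like this after the pose computation (a sketch using the `TrackedTool` struct from earlier; the 1.5 m threshold is illustrative):

```cpp
// Sketch: drop a tool once its estimated position is beyond a range limit.
#include <Eigen/Dense>

void InvalidateIfOutOfRange(TrackedTool& tool, double maxRangeMetres = 1.5)
{
    // Translation column of the tool pose in the depth-sensor frame.
    const Eigen::Vector3d t = tool.PoseMatrix_DepthCamera.block<3, 1>(0, 3);
    if (t.norm() > maxRangeMetres)
        tool.VisibleToHoloLens = false; // treat the tool as lost this frame
}
```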
Closing the issue for now as I'm guessing it's been resolved; feel free to open a new one if needed.