DLR-RM / 3DObjectTracking

Algorithms and Publications on 3D Object Tracking

How to make tracking more robust when object or camera is moved? #30

Closed · Divyam10 closed this issue 1 year ago

Divyam10 commented 1 year ago

Is there a way to make ICG Tracking more robust with respect to object or camera movement?

Are there any parameters I could change so that tracking is not lost due to fast movements, or due to changes in the calibration of the depth camera? I am using an Intel RealSense D435.

I am trying to replicate results similar to the real-world experiment in your video, but the tracker loses the correct pose as soon as the object moves at a normal or fast speed. If I move the object slowly, it works just fine.

Thanks!

zhangbaozhe commented 1 year ago

I changed a parameter and it gave me promising tracking results. You can check the parameter here: https://github.com/DLR-RM/3DObjectTracking/blob/ef0b302feb41472c5974d47d5017f6920cae7cf6/ICG/include/icg/depth_modality.h#L43

Note that if you increase the search distance, the computation will be slower.

I believe you can also try to change this parameter https://github.com/DLR-RM/3DObjectTracking/blob/ef0b302feb41472c5974d47d5017f6920cae7cf6/ICG/include/icg/region_modality.h#L59

As proposed in the SRT3D paper, this should change the correspondence line length (if not, please correct me).

By the way, you may also want to change the maximum depth in some of the header files.
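
To make this concrete, here is a minimal sketch of how such values might be set in code. It assumes the modalities expose setters named after the parameters in the linked headers (`set_considered_distances`, `set_scales`) and uses purely illustrative values; please check the actual headers for the exact names, units, and defaults.

```cpp
// Hypothetical sketch: enlarge the search area of both modalities so that
// larger frame-to-frame motion still falls inside the considered region.
// Setter names and values are assumptions, not verified against the headers.
#include <memory>

#include <icg/depth_modality.h>
#include <icg/region_modality.h>

void ConfigureForFastMotion(
    const std::shared_ptr<icg::DepthModality> &depth_modality_ptr,
    const std::shared_ptr<icg::RegionModality> &region_modality_ptr) {
  // DepthModality: distances (in meters) around the current estimate that are
  // searched for correspondences; larger values tolerate faster motion but
  // make each iteration slower.
  depth_modality_ptr->set_considered_distances({0.10f, 0.05f, 0.02f});

  // RegionModality: scales of the correspondence lines (coarse to fine);
  // larger values lengthen the lines, as discussed for SRT3D above.
  region_modality_ptr->set_scales({9, 5, 3, 1});
}
```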

samuelmorais commented 1 year ago

The tests I ran with RBGT (the previous version) showed that a faster CPU improves tracking: the system can compute more frames per second, so the pose difference between consecutive frames is smaller even if you move fast.

Divyam10 commented 1 year ago

> The tests I ran with RBGT (the previous version) showed that a faster CPU improves tracking: the system can compute more frames per second, so the pose difference between consecutive frames is smaller even if you move fast.

I can confirm this from my testing.

Divyam10 commented 1 year ago

> I changed a parameter and it gave me promising tracking results. You can check the parameter here: https://github.com/DLR-RM/3DObjectTracking/blob/ef0b302feb41472c5974d47d5017f6920cae7cf6/ICG/include/icg/depth_modality.h#L43
>
> Note that if you increase the search distance, the computation will be slower.
>
> I believe you can also try to change this parameter: https://github.com/DLR-RM/3DObjectTracking/blob/ef0b302feb41472c5974d47d5017f6920cae7cf6/ICG/include/icg/region_modality.h#L59
>
> As proposed in the SRT3D paper, this should change the correspondence line length (if not, please correct me).
>
> By the way, you may also want to change the maximum depth in some of the header files.

Yup, I am experimenting with these parameters and getting better results. For the next experiment, I will try to generate more viewpoints.

manuel-stoiber commented 1 year ago

As already suggested, please adjust the scales parameter in the RegionModality and the considered_distances parameter in the DepthModality to increase the considered area. You can check whether the correspondence lines are long enough using the visualization visualize_lines_correspondence. For the DepthModality, you can use visualize_correspondences_correspondence. More viewpoints will not have an effect on the maximum frame-to-frame pose difference.
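
As a rough illustration, these debug visualizations might be enabled like this, assuming setters named after the flags mentioned above (`set_visualize_lines_correspondence`, `set_visualize_correspondences_correspondence`); the exact setter names should be verified in region_modality.h and depth_modality.h.

```cpp
// Hypothetical sketch: turn on per-correspondence debug views to check that
// the search area covers the expected frame-to-frame motion. Setter names are
// assumed from the flag names mentioned above.
#include <icg/depth_modality.h>
#include <icg/region_modality.h>

void EnableCorrespondenceVisualization(icg::RegionModality &region_modality,
                                       icg::DepthModality &depth_modality) {
  // RegionModality: draw correspondence lines to verify they are long enough.
  region_modality.set_visualize_lines_correspondence(true);

  // DepthModality: draw the correspondences considered around the surface.
  depth_modality.set_visualize_correspondences_correspondence(true);
}
```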

In general, the tracker should be fast enough to process every frame provided by the camera. The limitation is typically the framerate of the camera or the visualization on your screen (which requires rendering the object at the resolution of your camera), not the tracking itself. Currently, visualizations are updated for each new frame. If you have a camera that supports very high framerates and visualization is a problem, I would suggest changing the code to update visualizations in a separate thread.
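
For reference, here is a minimal, generic sketch (not ICG code) of what moving visualization to its own thread could look like: the tracking loop only publishes the latest result, and a separate thread renders it, so a slow viewer cannot stall tracking.

```cpp
// Generic producer/consumer sketch for decoupling visualization from tracking.
// The tracking loop publishes the latest result; a viewer thread renders it.
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

struct VisualizationData {  // placeholder for the image and pose to visualize
  int frame_id = 0;
};

int main() {
  std::mutex mutex;
  std::condition_variable cv;
  VisualizationData latest{};
  bool updated = false;
  std::atomic<bool> running{true};

  // Viewer thread: waits for new data and renders it (stubbed with a print).
  std::thread viewer([&] {
    while (running) {
      std::unique_lock<std::mutex> lock{mutex};
      cv.wait(lock, [&] { return updated || !running; });
      if (!running) break;
      VisualizationData data = latest;  // copy out, release the lock quickly
      updated = false;
      lock.unlock();
      // Stand-in for the actual viewer update call.
      std::cout << "visualizing frame " << data.frame_id << std::endl;
    }
  });

  // Tracking loop: never blocks on visualization, only publishes the result.
  for (int frame_id = 0; frame_id < 100; ++frame_id) {
    // ... correspondence search and pose optimization would run here ...
    {
      std::lock_guard<std::mutex> lock{mutex};
      latest.frame_id = frame_id;
      updated = true;
    }
    cv.notify_one();
    std::this_thread::sleep_for(std::chrono::milliseconds(5));  // simulate work
  }

  running = false;
  cv.notify_one();
  viewer.join();
  return 0;
}
```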