czeyveli1 opened this issue 1 month ago
Could you please show the error message? So far, the only thing I can suggest is to pay attention to the encoding format of the depth image: if you treat it as a uint8 RGB image and apply a median filter, you will lose accuracy. But for details, you should post the error.
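To make the accuracy-loss point concrete, here is a minimal sketch (synthetic data, assuming the TUM convention of uint16 depth with `depth_scale = 5000`, i.e. 5000 = 1 m) of what an 8-bit round trip does to a 16-bit depth value:

```python
import numpy as np

# TUM depth PNGs store uint16 values with depth_scale = 5000 (5000 = 1 m).
depth_u16 = np.full((4, 4), 5123, dtype=np.uint16)   # ~1.02 m

# If the image is mistakenly handled as 8-bit (e.g. loaded as a uint8 RGB
# image before filtering), the round trip quantises depth to steps of
# 256 units, i.e. roughly 5 cm at depth_scale = 5000.
depth_u8 = (depth_u16 // 256).astype(np.uint8)
restored = depth_u8.astype(np.uint16) * 256

print(int(depth_u16[0, 0]), int(restored[0, 0]))  # 5123 -> 5120
```

Any per-pixel error up to 255 units (about 5 cm here) is silently introduced, which is more than enough to break depth-based tracking.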
I solved the problem by using the scikit-image save function (skimage.io.imsave) instead of the OpenCV library, but I do not know what the difference between the two functions is.
I think the problem is that OpenCV reads the image as BGR with 3 channels, and when I use its save function the size of the image changes.
Now I have a new challenge. I am trying to use the https://github.com/DepthAnything/Depth-Anything-V2 package to create a new depth dataset from the RGB files, but I still cannot render the output of the depth dataset.
My error is:
```
FEngine (64 bits) created at 0xbfeb6a0 (threading is enabled)
FEngine resolved backend: OpenGL
MonoGS: Resetting the system
MonoGS: Initialized map
Traceback (most recent call last):
  File "/home/cz/Documents/MonoGS/slam.py", line 252, in <module>
    slam = SLAM(config, save_dir=save_dir)
  File "/home/cz/Documents/MonoGS/slam.py", line 110, in __init__
    self.frontend.run()
  File "/home/cz/Documents/MonoGS/utils/slam_frontend.py", line 349, in run
    eval_ate(
  File "/home/cz/Documents/MonoGS/utils/eval_utils.py", line 106, in eval_ate
    ate = evaluate_evo(
  File "/home/cz/Documents/MonoGS/utils/eval_utils.py", line 29, in evaluate_evo
    traj_est_aligned = trajectory.align_trajectory(
  File "/home/cz/anaconda3/envs/MonoGS/lib/python3.10/site-packages/evo/core/trajectory.py", line 393, in align_trajectory
    r_a, t_a, s = geometry.umeyama_alignment(traj_aligned.positions_xyz.T,
  File "/home/cz/anaconda3/envs/MonoGS/lib/python3.10/site-packages/evo/core/geometry.py", line 64, in umeyama_alignment
    raise GeometryException("Degenerate covariance rank, "
evo.core.geometry.GeometryException: Degenerate covariance rank, Umeyama alignment is not possible
```
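For what it's worth, this exception comes from evo's Umeyama alignment, which needs the trajectory positions to span enough directions for their covariance to have full rank. A minimal sketch of the idea (not evo's actual code, which checks the cross-covariance of the two trajectories, but the same failure mode): if tracking collapses and the estimated poses barely move, the rank degenerates and alignment is impossible.

```python
import numpy as np

def covariance_rank(positions_xyz):
    """Rank of the 3x3 position covariance; positions_xyz is an (N, 3) array."""
    centered = positions_xyz - positions_xyz.mean(axis=0)
    cov = centered.T @ centered / len(positions_xyz)
    return np.linalg.matrix_rank(cov)

good = np.random.rand(100, 3)                  # a trajectory that actually moves
stuck = np.tile([0.0, 0.0, 0.0], (100, 1))     # tracking never left the origin

print(covariance_rank(good))   # 3 -> alignment possible
print(covariance_rank(stuck))  # 0 -> "Degenerate covariance rank"
```

So this error is usually a symptom rather than the cause: tracking fails (for example because of a wrong depth encoding or depth scale), the estimated trajectory degenerates, and the ATE evaluation then cannot align it.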
Were you able to fix this? I also want to replace the depth images with depth maps created using Depth-Anything-V2. Do let us know your results too.
I still get the same error...
I have encountered the same error.
Have you checked the encoding format and depth scale of the generated depth images?
@UltraHertzz: I took my pictures with an iPhone 15, and in the config file there is a depth_scale parameter. I have not changed it; it is still 5000. Can you help me figure out how to extract the depth scale of such images? I captured the RGB images with the iPhone 15 in portrait mode and then extracted the depth maps from them.
It depends on the sensor. The simplest and most straightforward way is to take a depth image of a wall about 1 m away and print the depth values in the image; usually they should be around 1 or 1000, for metre or millimetre units respectively. The TUM RGB-D dataset uses 5000 as its depth scale because of the sensor setup, so if you print the mean depth value of such a frame it will show something like 4000 to 5000, which makes no sense in any measurement unit without a scale factor.
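The wall-at-1-m sanity check above can be sketched as a tiny helper (synthetic data stands in for the real frame; `infer_depth_scale` is a made-up name for illustration):

```python
import numpy as np

def infer_depth_scale(depth_u16, true_distance_m):
    """Estimate depth_scale = stored_value / metres from a flat target
    at a known distance, ignoring invalid (zero) pixels."""
    valid = depth_u16[depth_u16 > 0]          # 0 usually means "no measurement"
    return float(valid.mean()) / true_distance_m

# Hypothetical frame of a wall ~1 m away, stored with the TUM-style scale of 5000
frame = np.full((480, 640), 5000, dtype=np.uint16)
frame[::7, ::5] = 0                            # sprinkle some invalid pixels

print(infer_depth_scale(frame, 1.0))           # ~5000 -> set depth_scale: 5000
```

Whatever this returns (about 1, 1000, 5000, ...) is the value that belongs in the config's depth_scale field; a mismatch between the stored units and that parameter is a common reason tracking silently degenerates.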
@czeyveli1 Hey, did you solve this, or were you able to find a better solution?
Hello everyone; I am trying to do some denoising on the TUM RGB-D depth dataset.
I take the depth images, process them with a median filter, and then copy them in place of the original depth images. After doing this, when I run the code the depth data does not appear and the system gives an error. Can anyone help me resolve this issue?
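Given the encoding discussion earlier in the thread, the likely fix is to keep the whole filter pipeline in uint16. A minimal sketch using `scipy.ndimage.median_filter` (which preserves the input dtype), on a synthetic frame:

```python
import numpy as np
from scipy.ndimage import median_filter

# Hypothetical noisy 16-bit depth map (TUM-style units, depth_scale = 5000)
depth = np.full((32, 32), 5000, dtype=np.uint16)
depth[10, 10] = 0                      # a salt-and-pepper dropout pixel

denoised = median_filter(depth, size=3)

print(denoised.dtype)                  # uint16 -- precision is preserved
print(int(denoised[10, 10]))           # 5000 -- the dropout is filled in
```

When writing the result back, it must be saved as a single-channel 16-bit PNG (e.g. with `skimage.io.imsave` or `cv2.imwrite` on a uint16 array); converting to an 8-bit RGB image at any point reintroduces the accuracy loss described above.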