JeffLIrion / python-graphslam

Graph SLAM solver in Python
MIT License

PGO results #95

Open VladimirYugay opened 3 months ago

VladimirYugay commented 3 months ago

Hey there. Thanks for your work - a very nice, well-tested implementation.

I'm facing issues with the results of PGO. My setup:

  1. I have quite an accurate odometry estimation
  2. My loop closure detection algorithm seems to be working fine as well - frames participating in the loops look very similar
  3. For every loop, I estimate the relative transformation between the point clouds of the corresponding frames. I have backward loop edges, where source_vertex_id > target_vertex_id. When applied to the source point cloud, the estimated transformation aligns it to the target point cloud.
  4. I set the information matrix as identity everywhere.
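For reference, a loop closure edge with an identity information matrix looks roughly like this in the g2o format (the values are illustrative, not from my data; the 21 numbers after the quaternion are the upper triangle of the 6x6 information matrix, row by row):

```
VERTEX_SE3:QUAT 0 0.0 0.0 0.0 0.0 0.0 0.0 1.0
VERTEX_SE3:QUAT 120 2.0 0.5 0.0 0.0 0.0 0.0 1.0
EDGE_SE3:QUAT 120 0 -2.0 -0.5 0.0 0.0 0.0 0.0 1.0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 1
```

Note the backward edge (120 -> 0), matching point 3 above.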

When I optimize my pose graph, the chi-square error drops from ~4 to ~0 pretty fast. However, the resulting graph is quite bad - the poses end up even further from the ground truth.

Now I'm pretty sure the individual components work, but I'm not confident that I'm constructing the graph correctly. I've attached the g2o file (with a .txt extension so GitHub will accept it) from before optimization here.

est.txt

JeffLIrion commented 3 months ago

I think there's something wrong with your loop closure edges.

import os

from graphslam.graph import Graph

g0 = Graph.from_g2o(os.path.expanduser("~/Downloads/est.txt"))
g0.plot()  # est.txt original

g = Graph.from_g2o(os.path.expanduser("~/Downloads/est.txt"))
g.optimize()
g.plot()  # est.txt optimized

# Remove loop closure edges
vertices = g0._vertices[:]
edges = [e for e in g0._edges[:] if e.vertex_ids[0] + 1 == e.vertex_ids[1]]
edge_vertex_ids = {v_id for e in edges for v_id in e.vertex_ids}
vertex_ids = {v.id for v in vertices}
assert len(edge_vertex_ids) == len(vertex_ids)
assert all(v_id in vertex_ids for v_id in edge_vertex_ids)
g_no_loops = Graph(edges, vertices)
g_no_loops.plot()  # no loops original
g_no_loops.optimize()
g_no_loops.plot()  # no loops optimized
[Plots: est.txt original and optimized; the graph without loop edges, original and optimized]
VladimirYugay commented 3 months ago

Thanks for the response. Yes, something was wrong with the way I added loop constraints: I computed them by estimating the relative pose between point clouds that had already been transformed by their world poses, when I should have computed them from the point clouds in their local (untransformed) frames.
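To illustrate the distinction (a minimal sketch with made-up 2D poses, not the actual data): the loop constraint should be the relative transform between the local keyframe frames, whereas registering clouds already placed in the world frame yields roughly the identity instead.

```python
import numpy as np

def make_T(theta, t):
    """2D homogeneous transform (3x3), used here just for illustration."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(3)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 2] = t
    return T

# Hypothetical world poses of the two keyframes in a loop
T_source = make_T(0.1, [1.0, 0.0])
T_target = make_T(0.3, [2.0, 0.5])

# Correct loop constraint: the relative pose between the LOCAL frames
T_rel = np.linalg.inv(T_target) @ T_source

# A point seen by both keyframes, expressed in each local frame
p_world = np.array([1.5, 0.2, 1.0])
p_source = np.linalg.inv(T_source) @ p_world
p_target = np.linalg.inv(T_target) @ p_world

# T_rel maps source-frame coordinates into the target frame
assert np.allclose(T_rel @ p_source, p_target)

# Registering clouds that were ALREADY transformed into the world frame
# would instead recover ~identity, which is not the constraint we want
assert not np.allclose(T_rel, np.eye(3))
```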

However, PGO is still making the results worse, even though the registrations for the loop edges are very accurate (inlier RMSE below 0.02).

I also don't have information matrices in my system; could that be the reason for the poor optimization results?


Here's the g2o file in case it's needed: est.txt. I can also provide more data if you have time to look at it.

JeffLIrion commented 3 months ago

Using the identity matrix as the information matrix should be fine.
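For context (a sketch, not specific to graphslam's internals): the information matrix only weights each edge's residual in the objective, chi^2 = e^T * Omega * e, so the identity simply counts all residual components equally, and scaling it just rescales an edge's contribution.

```python
import numpy as np

e = np.array([0.01, -0.02, 0.005])  # hypothetical edge residual

omega_identity = np.eye(3)
omega_confident = 100.0 * np.eye(3)  # a more heavily weighted edge

chi2_identity = e @ omega_identity @ e
chi2_confident = e @ omega_confident @ e

# With the identity information matrix, chi^2 is just the squared residual norm
assert np.isclose(chi2_identity, np.dot(e, e))
# Scaling the information matrix scales the edge's weight in the objective
assert np.isclose(chi2_confident, 100.0 * chi2_identity)
```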

How accurate is the optimized graph when all loop closure edges are ignored, as I did in my code snippet above?

VladimirYugay commented 3 months ago

The optimized graph without the loop edges is almost the same as the original one.

Isn't optimizing the graph without any loop constraints meaningless, since "no constraints ~ no need to change"?

In my case, it seems the odometry is already so good that any constraints added to the pose graph would need to be extremely accurate as well. However, I don't have a reliable way of measuring how good they are. Relying on the RMSE of correspondences after aligning point clouds doesn't always work: sometimes it shows a small improvement, sometimes it doesn't.

Another question: after optimizing a pose graph consisting of keyframes, should the poses between the keyframes be interpolated in order to evaluate the whole trajectory, not just the keyframes?
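For what it's worth, one common approach (a sketch with illustrative names, not something this library prescribes) is to carry each keyframe's optimized correction over to the intermediate odometry poses, keeping each frame's odometry-relative pose to its keyframe fixed:

```python
import numpy as np

def propagate(T_kf_old, T_kf_new, T_frame_old):
    """Carry a keyframe's optimized correction over to an intermediate frame,
    keeping the frame's odometry-relative pose to its keyframe fixed."""
    T_rel = np.linalg.inv(T_kf_old) @ T_frame_old  # frame relative to keyframe
    return T_kf_new @ T_rel

# Toy example with 4x4 homogeneous poses: the keyframe shifts by 0.1 along x
# after optimization, so its intermediate frame shifts by the same amount
T_kf_old = np.eye(4)
T_kf_new = np.eye(4)
T_kf_new[:3, 3] = [0.1, 0.0, 0.0]
T_frame_old = np.eye(4)
T_frame_old[:3, 3] = [0.5, 0.0, 0.0]

T_frame_new = propagate(T_kf_old, T_kf_new, T_frame_old)
assert np.allclose(T_frame_new[:3, 3], [0.6, 0.0, 0.0])
```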

JeffLIrion commented 3 months ago

I think it's a problem with your loop closure estimates. If I

  1. Optimize without loop edges
  2. Compute new estimates for the loop edges such that their error is 0
  3. Reload the graph and set the estimates for the loop edges that I computed in step 2

then the result of optimizing this graph is the same as the result from step 1, which is what we'd expect.

I suggest investigating how these estimates differ from the estimates that you computed.

[Plot: est.txt optimized with the updated loop edge estimates]

import os

import numpy as np

from graphslam.graph import Graph

g_no_loops = Graph.from_g2o(os.path.expanduser("~/Downloads/est.txt"))

# Set information matrices for all loop closure edges to zero and optimize
for e in g_no_loops._edges:
    if e.vertex_ids[0] + 1 != e.vertex_ids[1]:
        e.information = np.zeros((6, 6))

g_no_loops.optimize()

# Reload the graph and use g_no_loops to compute new estimates for the loop closure edges
g_updated_estimates = Graph.from_g2o(os.path.expanduser("~/Downloads/est.txt"))
for e0, e in zip(g_no_loops._edges, g_updated_estimates._edges):
    if e.vertex_ids[0] + 1 != e.vertex_ids[1]:
        e0.estimate = e0.vertices[1].pose - e0.vertices[0].pose
        assert np.linalg.norm(e0.calc_error()) < 1e-5
        e.estimate = e0.estimate

# Optimize with these updated estimates
g_updated_estimates.optimize()
g_updated_estimates.plot()

# Compare g_no_loops and g_updated_estimates
for v0, v in zip(g_no_loops._vertices, g_updated_estimates._vertices):
    assert np.linalg.norm((v.pose - v0.pose).to_compact()) < 1e-5
VladimirYugay commented 3 months ago

I investigated how my constraints differ from e.g. the garage dataset's constraints - there was a coordinate system mismatch, which I fixed.
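For anyone hitting a similar mismatch, here's a minimal sketch (made-up transforms, not the actual data) of how a fixed frame offset C between the constraint's frame and the graph's frame changes a relative constraint by conjugation:

```python
import numpy as np

# Hypothetical fixed offset between the two frame conventions: a 90-degree yaw
C = np.eye(4)
C[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]

# A relative constraint measured in the sensor's frame: pure translation along x
T_sensor = np.eye(4)
T_sensor[:3, 3] = [1.0, 0.0, 0.0]

# The same relative motion expressed in the graph's frame
T_graph = C @ T_sensor @ np.linalg.inv(C)

# The translation now points along y; the rotation part is unchanged
assert np.allclose(T_graph[:3, 3], [0.0, 1.0, 0.0])
assert np.allclose(T_graph[:3, :3], np.eye(3))
```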

However, I'm still getting unstable results. For this example, my odometry is again very accurate.


I added exactly one loop edge whose transformation is very close to the transformation computed from the odometry edges, which suggests that it should cause very little change to the pose graph after optimization:

np.set_printoptions(precision=3, suppress=True)
count = 0
for edge in pose_graph._edges:
    u, v = edge.vertex_ids
    if abs(u - v) > 1:  # loop closure edge
        count += 1
        obs_rel = edge.estimate.to_array()
        odom_rel = (pose_graph._vertices[v].pose - pose_graph._vertices[u].pose).to_array()
        print(obs_rel, odom_rel, u, v, np.linalg.norm(obs_rel - odom_rel))
print(count)

However, optimizing the graph leads to the following:

[Plot: the optimized trajectory]

It is a bit hard to understand why this happens. Here's the g2o file: g2o_graph.txt

JeffLIrion commented 3 months ago

The loop edge that you added has an enormous error compared to all the other edges:

The two largest chi^2 errors are: 1.606192185723634e-05, 0.0021700534520831395

[Plot: sorted chi^2 errors for all edges]

import os

import matplotlib.pyplot as plt
import numpy as np

from graphslam.graph import Graph

g = Graph.from_g2o(os.path.expanduser("~/Downloads/g2o_graph.txt"))

np.set_printoptions(precision=3, suppress=True)

count = 0
for e in g._edges:
    u, v = e.vertex_ids
    if abs(u - v) > 1:
        count += 1
        obs_rel = e.estimate.to_array()
        odom_rel = (e.vertices[1].pose - e.vertices[0].pose).to_array()
        print("u = {}, v = {}".format(u, v))
        print()
        print(" obs_rel =", obs_rel)
        print("odom_rel =", odom_rel)
        print()
        print("obs_rel - odom_rel =", obs_rel - odom_rel)
        print("    e.calc_error() =", e.calc_error())
        print()
        print("||obs_rel - odom_rel|| =", np.linalg.norm(obs_rel - odom_rel))
        print("    ||e.calc_error()|| =", np.linalg.norm(e.calc_error()))
        print()
        print("e.calc_chi2() =", e.calc_chi2())

print("\ncount = {}".format(count))

chi2s = sorted(e.calc_chi2() for e in g._edges)

print("The two largest chi^2 errors are: {}, {}".format(chi2s[-2], chi2s[-1]))

plt.plot(chi2s)
plt.title("$\\chi^2$ error for the edges")
plt.show()