CogRob / distributed-mapper

This library is an implementation of the algorithm described in "Distributed Trajectory Estimation with Privacy and Communication Constraints: a Two-Stage Distributed Gauss-Seidel Approach".
https://cognitiverobotics.github.io/distributed-mapper/

Confusion about the algorithms #5

Closed: xuhao1 closed this issue 2 years ago

xuhao1 commented 3 years ago

Hello, the paper "Distributed Trajectory Estimation with Privacy and Communication Constraints: a Two-Stage Distributed Gauss-Seidel Approach" describes a two-stage distributed pose graph optimization based on Distributed Gauss-Seidel (or JOR/SOR); the core iteration of the DGS (SOR/JOR) for solving the linear system is given by equations (18) and (19) on page 8 of the paper. However, what I found in the repo is that, at the line

  newLinearizedPoses_ = distGFG.optimize();

, a GaussianFactorGraph is used to solve the local linear system via variable elimination. Then the local poses are updated by the over-relaxation step y^(k+1) = (1 - gamma) * y^(k) + gamma * y_hat^(k+1) with the code

          linearizedPoses_.at(key) = (1-gamma_)*linearizedPoses_.at(key) + gamma_*newLinearizedPoses_.at(key);

My question is: why is this code equivalent to equations (18) and (19)?

itzsid commented 3 years ago

Hi @xuhao1,

The function you should look at is: https://github.com/CogRob/distributed-mapper/blob/a3bbfa9ea8e73fd09460f8ec008f3e8cc4940506/distributed_mapper_core/cpp/src/DistributedMapperUtils.h#L528

This function is called from: https://github.com/CogRob/distributed-mapper/blob/master/distributed_mapper_core/cpp/scripts/runDistributedMapper.cpp#L265

As specified here: https://github.com/CogRob/distributed-mapper/blob/a3bbfa9ea8e73fd09460f8ec008f3e8cc4940506/distributed_mapper_core/cpp/src/DistributedMapperUtils.h#L254

Depending on the gamma value and the update type, the algorithm can be switched between Jacobi Over-relaxation, Successive Over-relaxation, Jacobi, and Gauss-Seidel. By default, Distributed Gauss-Seidel is used.

        /*  Distributed Jacobi:          updateType_ = postUpdate, gamma = 1
         *  Gauss-Seidel:                updateType_ = incUpdate,  gamma = 1
         *  Jacobi Overrelaxation:       updateType_ = postUpdate, gamma != 1
         *  Successive Overrelaxation:   updateType_ = incUpdate,  gamma != 1
         */

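For intuition, here is a minimal, self-contained sketch (an illustrative scalar version written for this answer, not the library's code) of how the update type and gamma distinguish the four variants when solving a linear system H y = g:

    // Illustrative sketch only, not distributed-mapper's implementation.
    // postUpdate: every variable reads neighbors from the previous iterate (Jacobi/JOR).
    // incUpdate:  variables already updated in this sweep are read immediately (Gauss-Seidel/SOR).
    #include <cstddef>
    #include <vector>

    enum class UpdateType { postUpdate, incUpdate };

    void relaxationSweep(const std::vector<std::vector<double>>& H,
                         const std::vector<double>& g,
                         std::vector<double>& y,
                         double gamma, UpdateType type) {
      const std::size_t n = y.size();
      const std::vector<double> yOld = y;  // snapshot of iterate k
      for (std::size_t i = 0; i < n; ++i) {
        double sum = 0.0;
        for (std::size_t j = 0; j < n; ++j) {
          if (j == i) continue;
          // Choose which iterate the neighbor value comes from.
          sum += H[i][j] * (type == UpdateType::postUpdate ? yOld[j] : y[j]);
        }
        const double yHat = (g[i] - sum) / H[i][i];     // exact solve for variable i
        y[i] = (1.0 - gamma) * yOld[i] + gamma * yHat;  // gamma = 1 recovers plain Jacobi/Gauss-Seidel
      }
    }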
Hope this helps.

xuhao1 commented 3 years ago

Hi @itzsid, thanks for your reply! I noticed these functions and understand that the gamma value and update type switch between Jacobi Over-relaxation, Successive Over-relaxation, Jacobi, and Gauss-Seidel. My question is: where exactly is the bracketed part performed? I mean, where can I find the implementation of this summation:

[image: the bracketed summation term from equation (19)]
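Written out (my transcription of the standard SOR form, which may differ cosmetically from the paper's notation), the update I am asking about is

    y_\alpha^{(k+1)} = (1 - \gamma)\, y_\alpha^{(k)}
      + \gamma\, H_{\alpha\alpha}^{-1} \Big( g_\alpha
      - \sum_{\beta < \alpha} H_{\alpha\beta}\, y_\beta^{(k+1)}
      - \sum_{\beta > \alpha} H_{\alpha\beta}\, y_\beta^{(k)} \Big)

where the bracketed summations are the part I cannot locate in the code.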
itzsid commented 3 years ago

Equation (19) amounts to estimating either the rotations or the translations, depending on which stage of the two-stage approach is being optimized. Rotation optimization is called here: https://github.com/CogRob/distributed-mapper/blob/a3bbfa9ea8e73fd09460f8ec008f3e8cc4940506/distributed_mapper_core/cpp/src/DistributedMapperUtils.h#L244

and implemented here: https://github.com/CogRob/distributed-mapper/blob/a3bbfa9ea8e73fd09460f8ec008f3e8cc4940506/distributed_mapper_core/cpp/src/DistributedMapper.cpp#L98.

The equation inside the bracket corresponds to all the communication from the neighboring robots. Some of those robots have already been updated in the current iteration (y^(k+1)) and the others have not (y^(k)). We take the latest estimate of all those robots, along with the measurement constraints given by H, to optimize each robot. So each robot's optimization, given the updated constraints from its neighbors, solves the full equation (19) rather than computing the summation separately.
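As a concrete illustration (a toy example with made-up matrices and keys, not the repo's code): fixing a neighbor's latest estimate turns an inter-robot factor into a unary factor on the local variables, so the bracketed summation of equation (19) ends up folded into the right-hand side that distGFG.optimize() solves against.

    // Toy GTSAM sketch (assumed example, not distributed-mapper's code): one local
    // variable y_alpha and one neighbor y_beta held fixed at its latest estimate.
    #include <gtsam/linear/GaussianFactorGraph.h>
    #include <gtsam/linear/NoiseModel.h>
    #include <gtsam/linear/VectorValues.h>

    using namespace gtsam;

    int main() {
      GaussianFactorGraph graph;
      const Key yAlpha = 0;

      const Matrix Aa = Matrix::Identity(3, 3);        // block of H acting on y_alpha
      const Matrix Ab = 0.5 * Matrix::Identity(3, 3);  // block of H acting on neighbor y_beta
      const Vector3 b(1.0, 2.0, 3.0);                  // measurement right-hand side
      const Vector3 yBetaLatest(0.1, 0.2, 0.3);        // neighbor's newest estimate (y^(k) or y^(k+1))

      // The binary constraint Aa*y_alpha + Ab*y_beta = b becomes a unary factor
      // once y_beta is fixed: Aa*y_alpha = b - Ab*y_beta. This subtraction is
      // exactly how the bracketed summation of equation (19) enters the rhs.
      graph.add(yAlpha, Aa, Vector(b - Ab * yBetaLatest), noiseModel::Unit::Create(3));

      const VectorValues yHat = graph.optimize();  // plays the role of distGFG.optimize()
      yHat.print("yHat ");
      return 0;
    }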

The slides here might help: https://itzsid.github.io/publications/web/icra16/presentation.pdf

xuhao1 commented 3 years ago

@itzsid Thanks again for your reply! I watched your slides and understand the equivalence now. Is the reason for using optimization, instead of directly computing the summation and dividing by the H_{\alpha\alpha} matrix, to avoid the case where H_{\alpha\alpha} is not invertible?

itzsid commented 3 years ago

@xuhao1, I did it mostly for the convenience of the factor graph framework (it's easier to use GaussianFactorGraph than to solve the linear system myself). If you can invert H_{\alpha\alpha}, it can be cached and stored, since it won't change throughout the optimization; only the estimates from the other robots (y_\beta) change as the optimization progresses. Although I haven't tested it, I think it should work.
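A rough sketch of what that cached variant could look like (purely illustrative, with dummy values; not code from this repo):

    // Illustrative sketch of caching the factorization of H_aa (dummy data).
    #include <Eigen/Dense>

    int main() {
      Eigen::Matrix3d Haa;  // this robot's diagonal block: fixed across iterations
      Haa << 4, 1, 0,
             1, 3, 1,
             0, 1, 2;
      const Eigen::Vector3d g(1.0, 2.0, 3.0);  // local right-hand side: also fixed

      // Factor H_aa once and reuse it; only the neighbor term changes per iteration.
      const Eigen::LLT<Eigen::Matrix3d> cachedFactor(Haa);

      Eigen::Vector3d yAlpha = Eigen::Vector3d::Zero();
      for (int k = 0; k < 10; ++k) {
        // Stand-in for sum_beta H_ab * y_beta built from the latest neighbor estimates.
        const Eigen::Vector3d neighborSum = 0.1 * yAlpha;
        yAlpha = cachedFactor.solve(g - neighborSum);  // reuses the cached factorization
      }
      return 0;
    }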

xuhao1 commented 3 years ago

@itzsid Actually, I have tested directly using equation (19) on a 4-DoF pose graph estimation in my own implementation. It works when the initial error is not too big, but it still has some convergence issues on large-scale problems (that is why I am reading your code and trying to figure out the difference).