MRPT / GSoC2016-discussions


Design and development of a full graph-based SLAM strategy in mrpt (By: bergercookie) #2

Closed by jlblancoc 8 years ago

jlblancoc commented 8 years ago

Initial description: The proposal at hand describes the working plan for developing a graph-based SLAM algorithm using the mrpt toolkit over the following summer. The suggested strategy builds upon the ideas developed by (Olson, 2009) for the data association and robust loop closing parts, while using already implemented schemes (the mrpt version of Levenberg-Marquardt, the iSAM solver) for the optimization part. The result by the end of the summer should be a fully functional, robust and usable graph-based SLAM algorithm ready to be used by researchers and roboticists. We provide an overall timeline, with intermediate milestones to make sure that we meet the goals of the project. The proposal begins with an introduction to the graph-based formulation and lists already implemented and successful graphSLAM strategies. The following two sections (implementation details, timeline) describe the graphSLAM algorithm that we suggest. The final section provides general information about the author and his background in simultaneous localization and mapping and in open-source programming projects.

Progress: All comments below reflect the interactions between the GSoC student and the mentor(s).

Final results:

EduFdez commented 8 years ago

Hi @bergercookie

If you look at the trajectory estimated by the application, for instance in the dataset range_030_bearing_015.rawlog, you can see that it looks like a zig-zag. This indicates that some of the edges being registered are not correct; otherwise the estimated trajectory would be smoother, like the odometry one. Your CICPGoodness criterion should be analyzed to see where the error is and correct it.

About the app's architecture:

EduFdez commented 8 years ago

Also I get this compilation error from your last commit:

In file included from /home/edu/libraries/mrpt_nikos/apps/graphslam-engine/graphslam-engine_app.cpp:43:0: /home/edu/libraries/mrpt_nikos/apps/graphslam-engine/CLevMarqGSO.h:30:33: fatal error: CGraphSlamOptimizer.h: No such file or directory

maybe you forgot to commit the file?

bergercookie commented 8 years ago

On 17/07/16, Eduardo Fernandez-Moral wrote:

Hi @bergercookie

If you look at the trajectory estimated by the application, for instance in the dataset range_030_bearing_015.rawlog, you can see that it looks like a zig-zag. This indicates that some of the edges being registered are not correct; otherwise the estimated trajectory would be smoother, like the odometry one. Your CICPGoodness criterion should be analyzed to see where the error is and correct it.

Indeed, this problem exists, both for the node registration scheme and for the edge registration scheme (e.g. running the application on the same dataset but using the CFixedIntervalsNRD, CICPGoodnessERD deciders works better, but there are still zig-zag edges due to the ICP operation). As discussed over the Skype call, possible solutions would be to reject an ICP edge if its covariance matrix is ill-conditioned, or to use the local odometry information to provide a better estimate. Furthermore, I think it would help if I could provide the ICP alignment operation between successive laser scans with an initial estimate of the ICP edge, based on an extrapolation of the robot trajectory so far (e.g. using the last X accepted laser scans and the corresponding timestamps to predict a reasonable estimate of the next ICP edge).

About the app's architecture:

• Key names like NRD and ERD (NodeRegistrationDecider and EdgeRegistrationDecider) should be clearer in the code, so it's not a good idea to use acronyms for such important keywords.

I used this notation just to make it clear which class/file is a node decider (NRD), which is an edge decider (ERD), and which is a graphSLAM optimizer (GSO). Indeed, at first sight it doesn't feel really descriptive, but I think it's better than having really long suffixes in the class names like CFixedIntervalsNodeRegistrationDecider. As an alternative, we can have a notation section in the class documentation, so that the user can learn these acronyms the first time they take a look at the library documentation. If that doesn't work, we can move on to modifying the names of the classes/files.

• The main file of the app should be as summarized as possible. It's preferable to have only one single call to has_exited_normally = graph_engine.parseRawlogFile(); instead of several conditional ones using "if (condition)". Could you consider putting these conditions as parameters of "graph_engine", received by the constructor (like others such as rawlog_fname)?

I totally agree that having many if conditionals is not a sound implementation strategy. The problem with passing the user-selected deciders/optimizers to CGraphSlamEngine is that CGraphSlamEngine is templated on NODE_REGISTRAR, EDGE_REGISTRAR, OPTIMIZER. I initially thought of turning the decider/optimizer strings into template arguments (e.g. if the selected NRD is "CFixedIntervalsNRD" then CGraphSlamEngine would be instantiated with CFixedIntervalsNRD as the node registrar), but it appears that this is not supported in C++ (http://stackoverflow.com/questions/359237/why-does-c-not-have-reflection/359462#359462).

I will try to find a more elegant way of doing this nonetheless and keep you updated on the search (a sketch of one possible workaround is given after these points). @jlblancoc do you have an alternative approach in mind for this? The relevant code can be found here: https://github.com/bergercookie/mrpt/blob/master/apps/graphslam-engine/graphslam-engine_app.cpp#L290-L401

• Also I think that the visualization should be created by the graph_engine object, instead of being created outside and passed as arguments to graph_engine. I think it's preferable to have a parameter display=true/false and, depending on its value, the visualization may be created by CGraphSlamEngine_t's constructor. What do you think?

Ok, sounds logical.
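Regarding the template-argument issue above, here is a minimal, self-contained sketch (all names are illustrative placeholders, not the actual graphslam-engine types) of the usual workaround: an explicit map from the runtime string to a factory function that instantiates the right template specialization, so that the string-to-type decision lives in exactly one place.

#include <functional>
#include <map>
#include <stdexcept>
#include <string>

struct CFixedIntervalsNRD {};   // placeholder decider types for the sketch
struct CICPGoodnessNRD   {};

// Stand-in for instantiating CGraphSlamEngine<NODE_REGISTRAR, ...> and running it.
template <class NODE_REGISTRAR>
void run_graphslam(const std::string& rawlog_fname)
{
    // ... construct the engine with NODE_REGISTRAR and parse the rawlog ...
}

int main(int argc, char** argv)
{
    const std::string nrd_name = (argc > 1) ? argv[1] : "CFixedIntervalsNRD";
    const std::string rawlog   = (argc > 2) ? argv[2] : "dataset.rawlog";

    // One entry per supported decider: the only place where the string->type mapping lives.
    const std::map<std::string, std::function<void(const std::string&)> > nrd_factory = {
        {"CFixedIntervalsNRD", &run_graphslam<CFixedIntervalsNRD>},
        {"CICPGoodnessNRD",    &run_graphslam<CICPGoodnessNRD>},
    };

    const auto it = nrd_factory.find(nrd_name);
    if (it == nrd_factory.end())
        throw std::runtime_error("Unknown node registration decider: " + nrd_name);
    it->second(rawlog);   // dispatch to the statically-typed instantiation
    return 0;
}

With several template slots (NRD, ERD, optimizer) this still means one such table per slot (or a nested dispatch), so it mainly hides the if-chain rather than removing it; run-time polymorphic base classes for the deciders would be the other obvious route.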

@jlblancoc just to keep you updated, during the next couple of days I am going to read your paper on Consistent Observation Grouping as well as Olson's paper to come up with the loop closure strategy to be implemented.

I have also added Doxygen docstrings in every major file that is going to appear in the libs/graphslam directory (c8a52ad) and I will modify them accordingly so that they are suitable for merging in the public API (Remove using directives from header files, modify the graphslam CMakeLists.txt file etc.)

With regards, Nikos


jlblancoc commented 8 years ago

Great @bergercookie ! Let us know about any doubt you have on those papers.

On a much lower level: I've been playing around with your new COutputLogger class, and I have a couple of doubts about the design: 1) Everything around warnForLogConstMethod() and the const vs. non-const methods... is it intentional for some reason? I mean: if you have written all that just to avoid problems when calling log() inside a user's const method, I think we could redesign it to allow calling log() from const. After all... one may perfectly well want to log things from const methods!! If this is the case, I recommend you:

2) I find it a bit artificial having to set m_curr_level continuously. I guess that is the result of offering a simple API call: log(msg). What do you think about simplifying it by forcing users to always explicitly specify in the call the desired verbosity level of the msg, e.g. logXXX(LEVEL, msg);? In this way, the user code will be much clearer and less error-prone (i.e. it doesn't depend on the internal state of the logger object).

Let me know your thoughts on this... take a look at the latest commit in MRPT/master, where I mentioned you. You are free to do new forks + pull-requests to address any of these points.

Best.

bergercookie commented 8 years ago

On 17/07/16, Jose Luis Blanco-Claraco wrote:

Great @bergercookie ! Let us know about any doubt you have on those papers.

On a much lower level: I've been playing around with your new COutputLogger class, and I have a couple of doubts about the design: 1) Everything around warnForLogConstMethod() and the const vs. non-const methods... is it intentional for some reason? I mean: if you have written all that just to avoid problems when calling log() inside a user's const method, I think we could redesign it to allow calling log() from const. After all... one may perfectly well want to log things from const methods!! If this is the case, I recommend you:

• Deleting all the duplicated const / non-const methods, leaving only the const declarations, but with the implementation body of the non-const methods. • You'll have build errors, ok, I know! Just declare the internal members you are trying to access as "mutable". That C++ keyword was invented exactly for this purpose ;-)

You are right on this. I hadn't actually thought of the idea of mutable class members! After all, logging from const functions shouldn't be considered ill-advised behavior. (A combined sketch of this and the next point is given below.)

2) I find it a bit artificial having to set m_curr_level continuously. I guess that is the result of offering a simple API call: log(msg). What do you think about simplifying it by forcing users to always explicitly specify in the call the desired verbosity level of the msg, e.g. logXXX(LEVEL, msg);? In this way, the user code will be much clearer and less error-prone (i.e. it doesn't depend on the internal state of the logger object).

Indeed, having the user explicitly specify the VerbosityLevel should be a much cleaner design.

The initial idea was to set a default logging level and then, when the user wanted to log using a different VerbosityLevel, they could specify it as a second argument using the log(std::string msg, VerbosityLevel lvl) method. However, the newer and more versatile logFmt doesn't support the VerbosityLevel argument yet. To do that, the VerbosityLevel argument should be placed first in all logXXX calls due to the logFmt(std::string, ...) signature.

Let me know your thoughts on this... take a look at the latest commit in MRPT/master, where I mentioned you. You are free to do new forks + pull-requests to address any of these points.
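For reference, a combined sketch of the two points above (simplified and self-contained; this is not the actual COutputLogger interface): mutable internal state so that logging also works from const user methods, and the verbosity level passed explicitly as the first argument of the variadic call.

#include <cstdarg>
#include <cstdio>
#include <string>
#include <vector>

enum VerbosityLevel { LVL_DEBUG = 0, LVL_INFO, LVL_WARN, LVL_ERROR };

class COutputLoggerSketch
{
public:
    // The level goes first: with a C-style variadic, the extra arguments must
    // follow the format string, so a trailing level parameter is not possible.
    void logFmt(VerbosityLevel level, const char* fmt, ...) const
    {
        char buf[1024];
        va_list args;
        va_start(args, fmt);
        std::vsnprintf(buf, sizeof(buf), fmt, args);
        va_end(args);

        m_history.push_back(buf);                      // fine from const: m_history is mutable
        std::printf("[%d] %s\n", static_cast<int>(level), buf);
    }

private:
    mutable std::vector<std::string> m_history;        // internal state touched by const methods
};

struct UserClass : public COutputLoggerSketch
{
    void compute() const { logFmt(LVL_INFO, "registered %d nodes", 42); }  // const method can log
};

int main() { UserClass().compute(); return 0; }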

With regards to the latest commits on the COutputLogger* files:

I will continue working on the points mentioned above and contributing to the COutputLogger class, right after GSoC.

Best.

With regards, Nikos

jlblancoc commented 8 years ago

Alright, no problem!

I'll probably then make more intensive testing of the COutputLogger class and modify it as needed so it can be exploited in more parts and apps of MRPT ;-)

So, before doing your final PR, take care of manually importing / cherry-picking-rebase / whatever my changes to yours to avoid conflicts and make things easier.

Cheers!

bergercookie commented 8 years ago

Hey @EduFdez, @jlblancoc,

I reread the two papers relevant to the Loop-Closure strategy and I think I have a clear picture of the subject at hand in each one of them. Just to make sure we are on the same page, below I briefly describe the loop closing technique as implemented in Olson's paper:

On the other hand jlblanco's paper deals with partitioning the graph into consistent sets of nodes based on a minimum normalized cut criterion.

Based on the above, I would suggest implementing a combination of the above strategies as follows:

This case seems advantageous since the map partitioning module is already implemented in mrpt::slam::CIncrementalMapPartitioner and so I would just have to use it according to the application's needs.

Finally here are some questions with regards to Olson's paper:

How does this plan sound?

Looking forward to your feedback, Nikos


EduFdez commented 8 years ago

Hi @bergercookie

The plan sounds good! I agree on the strategy of doing the topological grouping with minNcut before searching for loop closures. I think that will help to improve the efficiency of checking potential hypotheses later on.

Regarding your questions:

- In the image attached (from Olson's paper), the two lines of points make up a group in which hypotheses are tested. However, these hypotheses connect robot poses which have totally opposite orientations. If this is the case, how could a hypothesis pass the range-sensor criterion in section 3.2.2 of Olson's paper, since the robot at these poses looks at completely different parts of the map (therefore there should be no real correspondence between the 2DRangeScans at these robot poses)? Wouldn't it make more sense if the robot traversed the same part of the environment but with the same orientation as the previous time it had been there? (image: hypothesis_groups)
- In section 4.4 of Olson's paper, he sets a global sufficiency criterion for accepting the hypotheses group. Can this be implemented without any prior knowledge of the dataset/environment that SLAM runs in?

That image is just a schematic: the robot passes by the same place and there might be a loop closure. Note also that many laser range scanners have very wide fields of view: 180, 270 and even 360 degrees. So it makes sense to look for loop closures on the way back.

- In section 4.4 of Olson's paper, he sets a global sufficiency criterion for accepting the hypotheses group. Can this be implemented without any prior knowledge of the dataset/environment that SLAM runs in?

The paper doesn't give clear information on how to implement this, just the general idea. I think it does not depend on the dataset/environment, but on the sensor used for pairing observations. So a laser sensor, a standard camera and an RGB-D camera should all have different implementations of such a global sufficiency test. For the case of a 2D laser, the size of the covariance ellipse can be computed as the trace of the covariance matrix, which can be directly compared with the size of the local region being checked (containing the set of observations with the matched hypotheses). Such a region can be represented as a set of 2-dimensional points; then we could just compute an estimate of its size as the sum of the sides of a rectangle which contains all these observations. The global sufficiency test should compare both amounts. What do you think?
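A rough, self-contained sketch of that 2D-laser test, just to fix the idea (the square root of the trace is my own assumption to get both quantities in metres; the comparison logic is illustrative, not a tested implementation):

#include <algorithm>
#include <cmath>
#include <vector>

struct Point2D { double x, y; };

// "Size" of the position uncertainty: trace of the 2x2 x-y block of the covariance.
double covarianceSize(double cov_xx, double cov_yy)
{
    return cov_xx + cov_yy;
}

// "Size" of the local region: sum of the sides of its axis-aligned bounding rectangle.
// Assumes a non-empty set of points.
double regionSize(const std::vector<Point2D>& pts)
{
    double xmin = pts.front().x, xmax = xmin, ymin = pts.front().y, ymax = ymin;
    for (const Point2D& p : pts) {
        xmin = std::min(xmin, p.x);  xmax = std::max(xmax, p.x);
        ymin = std::min(ymin, p.y);  ymax = std::max(ymax, p.y);
    }
    return (xmax - xmin) + (ymax - ymin);
}

// Globally sufficient if the uncertainty does not exceed the region it has to disambiguate.
bool isGloballySufficient(double cov_xx, double cov_yy, const std::vector<Point2D>& region)
{
    return std::sqrt(covarianceSize(cov_xx, cov_yy)) <= regionSize(region);
}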

Once we make this strategy work with the laser, we could describe possible implementations of the global sufficiency test for other sensors to have a guideline on how to implement them in the future if they are not programmed during this GSoC.

Eduardo

bergercookie commented 8 years ago

On 21/07/16, Eduardo Fernandez-Moral wrote:

Hi @bergercookie

The plan sounds good! I agree on the strategy of doing the topological grouping with minNcut before searching for loop closures. I think that will help to improve the efficiency of checking potential hypotheses later on.

Regarding your questions:

  • In the image attached (from Olson's paper), the two lines of points make up a group in which hypotheses are tested. However, these hypotheses connect robot poses which have totally opposite orientations. If this is the case, how could a hypothesis pass the range-sensor criterion in section 3.2.2 of Olson's paper, since the robot at these poses looks at completely different parts of the map (therefore there should be no real correspondence between the 2DRangeScans at these robot poses)? Wouldn't it make more sense if the robot traversed the same part of the environment but with the same orientation as the previous time it had been there? (image: hypothesis_groups)
  • In section 4.4 of Olson's paper, he sets a global sufficiency criterion for accepting the hypotheses group. Can this be implemented without any prior knowledge of the dataset/environment that SLAM runs in?

That image is just a schematic: the robot passes by the same place and there might be a loop closure. Note also that many laser range scanners have very wide fields of view: 180, 270 and even 360 degrees. So it makes sense to look for loop closures on the way back.

  • In section 4.4 of Olson's paper, he sets a global sufficiency criterion for accepting the hypotheses group. Can this be implemented without any prior knowledge of the dataset/environment that SLAM runs in?

The paper doesn't give clear information on how to implement this, just the general idea. I think it does not depend on the dataset/environment, but on the sensor used for pairing observations.

Having read the relevant part, I guess you are right on this one; prior knowledge of the environment shouldn't be necessary.

So a laser sensor, a standard camera and an RGB-D camera should all have different implementations of such a global sufficiency test. For the case of a 2D laser, the size of the covariance ellipse can be computed as the trace of the covariance matrix, which can be directly compared with the size of the local region being checked (containing the set of observations with the matched hypotheses).

With regards to the size of the covariance ellipse, don't you think it would be better if we computed its properties using the covariance matrix singular values (e.g. like in this case)?

However, I am not entirely sure how to generate the relevant covariance matrix in this case. Apparently it's not just the covariance matrix of the last 2DRangeScan. Since I think this models the positional uncertainty of the robot, we could compose all the relative edges from the root of the graph up to the last graph node, right? This could be implemented using the Dijkstra projection method from p. 6 of Olson's paper, which computes the least-uncertainty path from a given node (i.e. the root) to any other node in the graph.
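A very simplified sketch of such a Dijkstra-style projection, using the trace of each edge covariance as an additive scalar proxy for the accumulated uncertainty (this is a simplification on my part; the full method composes the covariances along each path rather than summing scalars):

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; double cov_trace; };   // trace of the edge covariance as a scalar weight

// Returns, for every node, the smallest accumulated "uncertainty" from the root.
std::vector<double> leastUncertaintyFrom(int root, const std::vector<std::vector<Edge> >& graph)
{
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> best(graph.size(), INF);
    typedef std::pair<double, int> QItem;    // (accumulated uncertainty, node)
    std::priority_queue<QItem, std::vector<QItem>, std::greater<QItem> > pq;

    best[root] = 0.0;
    pq.push(QItem(0.0, root));
    while (!pq.empty()) {
        const QItem top = pq.top(); pq.pop();
        if (top.first > best[top.second]) continue;      // stale queue entry
        for (const Edge& e : graph[top.second]) {
            const double cand = top.first + e.cov_trace;
            if (cand < best[e.to]) {
                best[e.to] = cand;
                pq.push(QItem(cand, e.to));
            }
        }
    }
    return best;
}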

Such a region can be represented as a set of 2-dimensional points; then we could just compute an estimate of its size as the sum of the sides of a rectangle which contains all these observations. The global sufficiency test should compare both amounts. What do you think?

Ok, sounds logical.

Once we make this strategy work with the laser, we could describe possible implementations of the global sufficiency test for other sensors to have a guideline on how to implement them in the future if they are not programmed during this GSoC.

Eduardo

Ok sounds like a plan. :) I will start by experimenting with the CIncrementalMapPartitioner class and then move on to Olson's strategy.

P.S. I will be developing the rest of the graphslam code in the graphslam-devel branch of my GitHub fork. This way I can keep my commits separate from the upstream commits, so it will be easier to merge them in the final pull-request.

With regards, Nikos


EduFdez commented 8 years ago

Hi @bergercookie

About how to generate the covariance matrix: you have to propagate the covariance of the poses through the "shortest" path, as suggested by Olson. Let's suppose that you want to check for a possible loop closure between your current pose C and a previous pose A, and that the shortest path has only 2 edges: AB and BC. You have the covariances of those edges given by ICP (or even a combination of ICP, odometry, etc.) and you just need the relative covariance AC. The standard approach for this problem is to compute a linear approximation of such a covariance. This was very nicely documented by @jlblancoc in his technical report https://pixhawk.org/_media/dev/know-how/jlblanco2010geometry3d_techrep.pdf , see section 5. So the resulting covariance AC is what you call the relevant covariance matrix you are looking for. And the same applies to paths longer than 2 edges. Let me know if it's not clear yet.
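To make the AB-then-BC step concrete, here is a minimal sketch of the first-order covariance propagation for one 2D pose composition, following section 5 of the technical report linked above (plain 3x3 arrays are used to keep it self-contained; inside MRPT this is handled by the pose-PDF classes):

#include <cmath>

typedef double Mat3[3][3];

struct Pose2D { double x, y, phi; };

static void matMul(const Mat3 A, const Mat3 B, Mat3 C)      // C = A * B
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            C[i][j] = 0;
            for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[k][j];
        }
}

static void matMulT(const Mat3 A, const Mat3 B, Mat3 C)     // C = A * B^T
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            C[i][j] = 0;
            for (int k = 0; k < 3; ++k) C[i][j] += A[i][k] * B[j][k];
        }
}

// Sigma_AC = J1 * Sigma_AB * J1^T + J2 * Sigma_BC * J2^T, with J1, J2 the
// Jacobians of the composition p_AC = p_AB (+) p_BC w.r.t. each operand.
void composeCovariance(const Pose2D& pAB, const Pose2D& pBC,
                       const Mat3 covAB, const Mat3 covBC, Mat3 covAC)
{
    const double c = std::cos(pAB.phi), s = std::sin(pAB.phi);
    const Mat3 J1 = {{1, 0, -pBC.x * s - pBC.y * c},
                     {0, 1,  pBC.x * c - pBC.y * s},
                     {0, 0, 1}};
    const Mat3 J2 = {{c, -s, 0}, {s, c, 0}, {0, 0, 1}};

    Mat3 tmp, t1, t2;
    matMul(J1, covAB, tmp);  matMulT(tmp, J1, t1);
    matMul(J2, covBC, tmp);  matMulT(tmp, J2, t2);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) covAC[i][j] = t1[i][j] + t2[i][j];
}

Chaining this node by node along the shortest path gives the relative covariance AC for paths of any length.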

With regards to the size of the covariance ellipse, don't you think it would be better if we computed its properties using the covariance matrix singular values (e.g. like in this case)?

Yes, I think that's even better. The point is to get 2 numbers that represent region sizes and have a similar magnitude so that they can be directly compared.

P.S. I will be developing the rest of the graphslam code in the graphslam-devel branch of my GitHub fork. This way I can keep my commits separate from the upstream commits, so it will be easier to merge them in the final pull-request.

Yes, that's the way to do it! ;-)

Cheers, Eduardo

bergercookie commented 8 years ago

Hi @EduFdez, @jlblancoc,

Over the last week I worked mostly on boilerplate code towards implementing the loop closing scheme (see CLoopCloserERD class). More specifically I implemented the following features:

P.S. With regards to the problem of inaccurate edges registered in high-noise datasets, I made use of the odometry information (when it is available) to deal with this. More specifically, since the odometry information is accurate on a local basis, if the current ICP edge diverges significantly from the corresponding odometry edge, the odometry edge is picked instead (with a sample fixed covariance matrix). This helps in smoothing the estimated robot trajectory when the robot takes a turn. To decide whether to use the odometry information, the Mahalanobis distance between the odometry edge and the ICP edge is calculated, and odometry is used if the Mahalanobis distance surpasses a certain threshold. I used both fixed and adaptive thresholds for this, with the former giving better results if set correctly. However, this threshold depends very much on the noise of the dataset. Given enough time after the loop closure work is done, I will try to implement an adaptive threshold based on a more suitable metric (e.g. Tukey's fence - http://stats.stackexchange.com/a/90766/76756 ).
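A short sketch of that test (using Eigen, which MRPT already depends on; the 3-DOF edge layout, the use of the combined covariance in the metric, and the threshold value are my assumptions for illustration only). Which covariance enters the metric (ICP, odometry, or their sum) is exactly the design point discussed a couple of comments below.

#include <cmath>
#include <Eigen/Dense>

// Returns true if the ICP edge should be discarded in favour of the odometry edge.
bool preferOdometryEdge(const Eigen::Vector3d& icp_edge,   // [dx, dy, dphi] from ICP
                        const Eigen::Matrix3d& icp_cov,
                        const Eigen::Vector3d& odo_edge,   // [dx, dy, dphi] from odometry
                        const Eigen::Matrix3d& odo_cov,
                        double threshold = 3.0)            // illustrative fixed threshold
{
    const Eigen::Vector3d e = icp_edge - odo_edge;
    // Mahalanobis distance of the discrepancy under the combined uncertainty.
    const double mahal2 = e.dot((icp_cov + odo_cov).inverse() * e);
    return std::sqrt(mahal2) > threshold;
}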

Looking forward to your feedback, Nikos


EduFdez commented 8 years ago

Hi @bergercookie

Good job with the visualization of the map partitioning! :) But I cannot see much from the images you refer to:

Pictures of the partitioning are given below. As you can see, the partitioner detects both correct and (most probably) wrong potential edges: (images: full_partitioning_strategy, possible_loop_closures_to_investigate, found_correct_loop_closure)

I cannot differentiate between correct and (most probably) wrong potential edges, or between possible and correct loop closures. It will be easier for me to run the application directly and pause it to see the visualization in 3D. I cannot compile, though: you forgot to add the file CEdgecounter.h in your commits.

OK on your work for Dijkstra projection. Tell me when your new incremental scheme is working, please.

One problem that arises is when dealing with rawlog files in observation-only form. If I am correct, the CObservationOdometry format doesn't provide the covariance matrix of the transformation, so in order to make the projection method work, a sample covariance has to be assigned to this edge manually. I chose to provide a poor covariance matrix (small trace of the information-form matrix) since I don't have any knowledge of it. This way, the Dijkstra projection algorithm uses the ICP edges when possible instead of the odometry edge.

P.S. With regards to the problem of inaccurate edges registered in high-noise datasets, I made use of the odometry information (when it is available) to deal with this. More specifically, since the odometry information is accurate on a local basis, if the current ICP edge diverges significantly from the corresponding odometry edge, the odometry edge is picked instead (with a sample fixed covariance matrix). This helps in smoothing the estimated robot trajectory when the robot takes a turn. To decide whether to use the odometry information, the Mahalanobis distance between the odometry edge and the ICP edge is calculated, and odometry is used if the Mahalanobis distance surpasses a certain threshold. I used both fixed and adaptive thresholds for this, with the former giving better results if set correctly. However, this threshold depends very much on the noise of the dataset. Given enough time after the loop closure work is done, I will try to implement an adaptive threshold based on a more suitable metric (e.g. Tukey's fence - http://stats.stackexchange.com/a/90766/76756 ).

Yes, you can just try to set a reasonable heuristic value for the odometry edge covariances. Note that, for higher accuracy, such values could also be obtained from a statistical analysis of the dataset, since we also have the ground truth. But this is not a priority task at all; we can work with the fixed covariances you assumed.

One point about detecting wrong local ICP edges: if they are wrong, their covariance matrix is also wrong. So, when using the odometry to verify this, you need to use the odometry's covariance to compute the Mahalanobis distance. Thus, the covariance you are fixing manually should be reasonable with respect to the odometry errors.

Nice work ;) Eduardo

EduFdez commented 8 years ago

Hi @bergercookie

After your fix in the CMakeLists.txt I can now compile and run your code, but I cannot see the visualization yet in the way I intended. My idea was to pause the execution and zoom in to get an idea of how many edges there are, their distribution and the loop closures. But once I pause and then zoom in, I cannot resume the execution. Does this also happen to you? Do you think this can be fixed easily?

I was also thinking about the map partitioning. Some years ago I worked on a more efficient way of doing this for large maps, so that only a vicinity region (containing the current submap and its 1-connected neighbors) is taken into account as the local map for adding edges and for applying new partitions. That permitted a bounded cost for the partitioning, regardless of the size of the full global map. Unfortunately I didn't add this to MRPT, and it was complex to implement. I will not have access to that piece of code until September; then I would like to integrate it into MRPT and also into the graphslam-engine. I think this is the best option, so that you don't have to concentrate on that part now.

A little remark: don't forget to update the readme.txt as you add new behaviors to your app.

Cheers,

bergercookie commented 8 years ago

On 31/07/16, Eduardo Fernandez-Moral wrote:

Hi @bergercookie

After your fix in the CMakeLists.txt I can now compile and run your code, but I cannot see the visualization yet in the way I intended. My idea was to pause the execution and zoom in to get an idea of how many edges there are, their distribution and the loop closures. But once I pause and then zoom in, I cannot resume the execution. Does this also happen to you? Do you think this can be fixed easily?

I think this is a bug of OpenGL's integration in MRPT. The problem arises after resizing, rotating or zooming the view in the CDisplayWindow. Before that, the keystrokes work as expected, but after interacting with the window they stop working. As a quick fix, what I normally do is press Alt+TAB to change to the previous application and then Alt+TAB again to change back to the CDisplayWindow. Then the keystrokes work normally again, until the next time I rotate/zoom the view. Using this and pressing p (or P) in the window, you can pause the execution, investigate the procedure / graph construction and then continue with the application execution by pressing p/P again.

You can also reproduce this in other MRPT applications like graph-slam. Try running it with a generated .graph file and use the keystrokes mentioned in the help textBox. You'll see that after interacting with the window, the keystrokes stop working.

I was also thinking about the map partitioning. Some years ago I worked on a more efficient way of doing this for large maps, so that only a vicinity region (containing the current submap and its 1-connected neighbors) is taken into account as the local map for adding edges and for applying new partitions. That permitted a bounded cost for the partitioning, regardless of the size of the full global map. Unfortunately I didn't add this to MRPT, and it was complex to implement. I will not have access to that piece of code until September; then I would like to integrate it into MRPT and also into the graphslam-engine. I think this is the best option, so that you don't have to concentrate on that part now.

Sounds interesting. In the current implementation I just add a new MapFrame to the CIncrementalMapPartitioner instance every time a new node is registered, and I issue a full map partitioning (clearing all the map frames and poses and recomputing the partitions from scratch) at fixed intervals (e.g. every 50 registered nodes). However, even using this incremental partitioning scheme, the cost of updating the map increases with the size of the map.

A little remark: don't forget to update the readme.txt as you add new behaviors to your app.

I am planning to update it right after I implement the loop closing scheme.

Cheers,

With regards, Nikos


jlblancoc commented 8 years ago

Yes, it's an old and hard-to-identify bug in the OpenGL windows of MRPT, sorry about that... the only workaround I know is focusing another window and then focusing back on it.

I spent many hours debugging this in the past without success... it has to do with the chain of event handlers in wxWidgets.

jlblancoc commented 8 years ago

@bergercookie Just letting you know I did some simplifications and changes to your output logger class within a separate branch of MRPT, with the intention of making it the replacement of CDebugOutputCapable. See: https://github.com/MRPT/mrpt/commit/fca2dd72affdba8381ad12073f78d17202692ae8

Totally low priority, but just to let you know.

bergercookie commented 8 years ago

Hi @EduFdez @jlblancoc,

During the past couple of days I came up with a working implementation of Olson's Loop closing strategy. More specifically the following steps are taken to identify potential loop closures and register the most consistent of the loop closure hypotheses:

With regards to the mathematics involved in the final step, I think that the equation for the pairwise consistency element that Olson mentions in the paper is wrong. Olson states that for hypotheses i,j the pairwise consistency element is given by the equation: A_{i,j} = e ^ {T * \Sigma_T ^ {-1} * T^T },

where T is the rigid body transformation derived from the composition of the two hypotheses and the two Dijkstra links, and \Sigma_T is the covariance matrix of the aforementioned transformation. Ideally, if both hypotheses are true, T should be [0, 0, 0]. As Olson states in the paper:

"This quantity is proportional to the probability density that the rigid-body transformation is the identity matrix (i.e., T = [0 0 0]). In other words, Eqn. 8 is a measure of how likely it is that the four links actually form a loop. If either hypothesis i or j is incorrect, the composition of the four links is will not be a loop. Alternatively, if both i and j are incorrect, it is possible that their errors will “cancel out”, but this is highly improbable. In short, the pairwise consistency Ai,j is generally small if one or both of i and j are incorrect."

However given the above explanation the equation doesn't hold.

Based on the previous I modified the equation at hand to: A_{i,j} = e ^ {-T * \Sigma_T * T^T }

Having made the aforementioned modification to the consistency element computation, the procedure works as expected: for edges that form bad loop closures (low goodness values, T != [0, 0, 0], etc.) the consistency element is ~0, while for correct hypothesis pairs it is ~1. Loop closing seems to work with all 3 of the datasets that I have uploaded to my MRPT fork. A sample command that you can run to monitor the loop closing procedure is given below:

make graphslam-engine && graphslam-engine -i $mrpt/share/mrpt/config_files/graphslam-engine/odometry_2DRangeScans_LC_version.ini -r $mrpt/share/mrpt/datasets/graphslam-engine-demos/action_observations_map/range_030_bearing_015.rawlog -g $mrpt/datasets/graphslam-engine-demos/action_observations_map/range_030_bearing_015.rawlog.GT.txt --node-reg CFixedIntervalsNRD --edge-reg CLoopCloserERD

Furthermore I worked on the following parts:

With regard to the previous points:

error: variable ‘mrpt::graphslam::GRAPHSLAM_IMPEXP
mrpt::graphslam::CEdgeCounter’ has initializer but incomplete type
 class GRAPHSLAM_IMPEXP CEdgeCounter {

During the next days I think that the following actions should be taken:

PS. I want to continue using and contributing to MRPT after GSoC. Not only do I enjoy working on the project, but I will also be using it to complete my master's thesis on multi-robot SLAM. Therefore, any remaining bugs in the code (e.g. handling of RGB-D datasets) can be dealt with after the pull-request submission.

PPS. @jlblancoc Sorry for the late reply regarding the logging module. The latest pull request seems to improve the COutputLogger functionality rather significantly. Since you have already merged the pull-request into the main repo, I will just have to modify my calls to the COutputLogger class methods prior to my pull-request, so that they conform to the new method names.

Looking forward to hearing from you, Nikos

jlblancoc commented 8 years ago

Hi Nikos!

Good progress! :-) I'll answer mainly to the second half of your mail, leaving the rest for later or to be answered by @EduFdez .

On GRAPHSLAM_IMPEXP: Each {LIB}_IMPEXP macro requires including this header:

#include <mrpt/{LIB}/link_pragmas.h>

Probably that's the only missing part there. Remember that only non-template classes require the {LIB}_IMPEXP macro, though...

On the app page for the website... I wish we had already moved to an .rst-like system like OpenCV, etc., but we haven't yet, and the entire website documentation is based on WordPress (sigh). Write me a private email and I'll give you the access details, ok? ;-)

PS. I want to continue using and contributing to MRPT after GSoC. Not only do I enjoy working on the project, but I will also be using it to complete my master's thesis on multi-robot SLAM. Therefore, any remaining bugs in the code (e.g. handling of RGB-D datasets) can be dealt with after the pull-request submission.

Sure! No problem from our part :+1:

PPS. @jlblancoc Sorry for the late reply regarding the logging module. The latest pull request seems to improve the COutputLogger functionality rather significantly. Since you have already merged the pull-request into the main repo, I will just have to modify my calls to the COutputLogger class methods prior to my pull-request, so that they conform to the new method names.

Yes, I'm afraid the easiest way for you to avoid conflicts is to manually pick the current COutputLogger.{h,cpp} from git master, overwrite the files in your fork, and fix your code to adapt to it. I wrote short notes explaining the reason for all the changes in the commit logs, if you are interested... I was really excited about having such a good base code as yours for such a logger module, soooo needed in many parts of MRPT, so I jumped into it and quickly modified it to better suit the needs of the rest of the MRPT apps. You might want to see the wrapper class for the new *_STREAM interface, and the macros (see the few example code lines at the top of the dox page) for a much more compact notation in user code. Feel free to add any further modifications, of course!

EduFdez commented 8 years ago

Hi @bergercookie

Good job on the implementation of Olson's loop closing strategy, and on finding the error in the formulation; I agree with you about the missing minus sign.

I have run the app on the datasets .../action_observations_map/basic.rawlog and .../action_observations_map/range_030_bearing_015.rawlog and both show that the loop closure is working well as you said.

I've seen that there is still a little zig-zag shape in the estimated trajectory, though it looks much better than before. Such zig-zag produces some acute corners (indicating a back-and-forth movement of the sensor, which is wrong) after the loop closure optimization in the dataset "range_030_bearing_015". This indicates that there are still wrong edges in the graph, but you don't have to work any more on this during this GSoC; I just mention it so you take note of it in case you have problems with this issue in the future. I'm glad to hear that you are planning to work on SLAM and contribute to MRPT after GSoC, that's great :-)

I totally agree on your next steps:

Write the application page for graphslam-engine
Record youtube video showcasing the application usage
Format and make the pull-request

Please, give me the url to the application page when you make it.

Cheers,

bergercookie commented 8 years ago

On 12/08/16, Eduardo Fernandez-Moral wrote:

Hi @bergercookie

Good job on the implementation of Olson's loop closing strategy, and on finding the error in the formulation; I agree with you about the missing minus sign.

I have run the app on the datasets .../action_observations_map/basic.rawlog and .../action_observations_map/range_030_bearing_015.rawlog and both show that the loop closure is working well as you said.

I've seen that there is still a little zig-zag shape in the estimated trajectory, though it looks much better than before. Such zig-zag produces some acute corners (indicating a back-and-forth movement of the sensor, which is wrong) after the loop closure optimization in the dataset "range_030_bearing_015". This indicates that there are still wrong edges in the graph, but you don't have to work any more on this during this GSoC; I just mention it so you take note of it in case you have problems with this issue in the future.

You are right on this. I tried filtering out the wrong ICP edges in the CLoopCloserERD class, but it seems that it isn't as effective as it needs to be.

I'm glad to hear that you are planning to work on SLAM and contribute to MRPT after GSoC, that's great :-)

I wouldn't miss it for the world. I love working on MRPT and collaborating with you guys. :)

I totally agree on your next steps:

Write the application page for graphslam-engine
Record youtube video showcasing the application usage
Format and make the pull-request

Please, give me the url to the application page when you make it.

Cheers,

Great, I'll let you know when I have the application page and the first draft of the video ready.

With regards, Nikos


bergercookie commented 8 years ago

Dear @EduFdez, @jlblancoc,

I finished editing the YouTube video. As I wanted to make the tutorial as simple and comprehensible as possible, I opted for using the simpler CICPCriteriaERD instead of the more effective CLoopCloserERD decider, which is based on Olson's scheme.

I also added the graphslam-engine application page in the MRPT website. The application page is heavily based on the corresponding man page that I had previously written.

Finally I formatted and posted the pull-request for the graphslam-engine lib/ and application files.

Looking forward to your feedback, Nikos

jlblancoc commented 8 years ago

Hi Nikos,

I probably will have many comments for you in the next days as I play a bit more with your code, but I wanted to CONGRATULATE you for the amazing work, as far as I have seen! :-)

On par with the difficulty of the task and the SLAM problem was the difficulty of documenting what you've done, and I think you have devoted many hours to this, so a huge congrats for having such a good habit :+1:

jlblancoc commented 8 years ago

@bergercookie As @EduFdez said, the "zig-zag pattern" seems pretty serious and suspicious... perhaps it's not related at all with your code and it's a bug in other parts of MRPT, but one thing that could be done is the following (let us know if you could make a quick test to implement it and give it a try):

Cheers,

bergercookie commented 8 years ago

I probably will have many comments for you in the next days as I play a bit more with your code, but I wanted to CONGRATULATE you for the amazing work, as far as I have seen! :-)

Thanks for recognizing my efforts, Jose. It should be noted that the existing, very well-written Doxygen documentation in the rest of the MRPT classes/files helped significantly during the documentation stage.

@bergercookie As @EduFdez said, the "zig-zag pattern" seems pretty serious and suspicious... perhaps it's not related at all with your code and it's a bug in other parts of MRPT, but one thing that could be done is the following (let us know if you could make a quick test to implement it and give it a try):

I will test the zig-zag pattern as soon as possible and see the behavior with the G2O optimizer. However, I think the zig-zag effect is proportional to the added noise in the laser scans:

@jlblancoc With regards to the final evaluation form, I agree that the current issue should be provided, since it is the most appropriate link to describe the work product. I suggest that we add the following links to your initial comment as the final results of the project:

If you agree on the above, please let me know when you have updated the final results, so that I can fill in the GSoC evaluation form

With regards, Nikos

EduFdez commented 8 years ago

Hi @bergercookie

I agree with @jlblancoc to congratulate you for your excellent work! I think "graphslam-engine" is a great contribution for MRPT :-D

I find that the project is very well documented through the video / app's page / Doxygen. Excellent! I will ask you to do some modifications on the app's page to promote the use of, and interest in, graphslam-engine by other MRPT users. For that, I would reduce the content of the app's page to what is strictly necessary for an app user, plus basic information about the graph-SLAM approach (or the app's structure). The rest of the content of this page I would move to a .pdf file with more extensive information about the project, which may be useful for future contributors to your app, or to evaluate the work done in this GSoC project. In this way, a potential user just sees a well-summarized page for the app they intend to launch and use, while people more interested in the project/code structure can find more detailed information in the .pdf, which should also be referenced in the app's page.

Personally, I prefer the style of other MRPT apps like: http://www.mrpt.org/list-of-mrpt-apps/application-icp-slam/ http://www.mrpt.org/list-of-mrpt-apps/application-difodometry-datasets/ to that of: http://www.mrpt.org/list-of-mrpt-apps/application-graphslamengine/ http://www.mrpt.org/list-of-mrpt-apps/application-graph-slam/ because they give more summarized information about the app and its usage. So, I would structure it as:

  1. Description (brief explanation, including the video as you did)
  2. Command arguments (including some of the content you placed in the DESCRIPTION and OPTIONS sections; the arguments should be arranged by order of importance, like in your current description, so that you mention and explain arguments like the rawlog argument only once, instead of twice across the two sections DESCRIPTION and OPTIONS).
  3. Demo (with the 3 command-line launches that you put in EXAMPLES, and with a line describing each of them, i.e.: Run graphslam on a small 2D trajectory with laser and odometry: ./graphslam-engine ...[arguments] Run graphslam on a larger 2D trajectory with laser and odometry: ./graphslam-engine ...[arguments] etc.)
  4. App's structure (like in "application-difodometry-datasets -> Underlying classes", to point to the main classes of your project like the node/edge registration deciders, loop closure, ...). You may include a last sentence like: The following .pdf (make link) gives more information about this app and a guideline for future developers.

@jlblancoc @bergercookie we also need to think about the usefulness of both current graph-slam apps: graph-slam and graphslam-engine. I think only one app called graph-slam should remain: by renaming graphslam-engine to graph-slam and deleting graph-slam after a trial period to test the former and to verify that all the features of graph-slam are in graphslam-engine. Maybe we could move the current, more basic graph-slam to the samples section; what do you think? Should we make a new MRPT issue to address this?

I was revising the proposal of this GSoC project and the timeline update you published in Evernote and I am quite happy with what was done. @bergercookie I would like to ask you to create a list of future improvements for this app, which may be included in the pdf I mentioned above and will serve as a guide for future developers or yourself. I would include in this list the optional parts of the project that were mentioned in the original project proposal, plus the new improvements that you can see now; you may list them by order of importance. My list would include:

  1. ROS integration
  2. RGB-D support
  3. Visual sensors support (working with point features as SIFT).
  4. Integration with other optimization tools: g2o ... anything else? maybe a list of 2D laser+odometry datasets to further test this application?

Regarding the zig-zag issue @jlblancoc mentioned, I agree with @bergercookie. This issue of wrong edges occurs prior to loop closure, so it is not necessary to check other optimizers like g2o to debug this problem. But maybe it would be interesting to visualize the cause of these wrong edges, since that may help to avoid them. I remember that I once suggested to @bergercookie to code a debug experiment in which, every time a new node is added, it is compared to the odometry; if they are too different, the application should pause and the ICP registration should be visualized against the registration obtained using the ground truth, to see why the ICP is failing: maybe too much noise? Or insufficient data?... If we can visualize this, maybe we can correct the problem.

Cheers

jlblancoc commented 8 years ago

Hi Edu, and congrats to you too, by the way for the big work in this project! :-)

On merging graph-slam & graphslam-engine: graph-slam is fundamentally an offline optimizer, more like g2o / toro / ..., so it's rather different from the new app. It may be possible to merge it with the new one, though, but it would imply a different set of arguments for the new program, and code to decide whether it's invoked for online vs. offline use. What are your thoughts about doing the merge, after reading this? I'm open to both ideas: merging them or keeping both (probably renaming one or both of the apps).

On the zig-zag: Ok, after your explanation I agree that the problem may be in the edges, not in the optimizer(s)... 0.30 m for LiDAR noise is far too high! More typical values are more like 0.01 to 0.03. Anyway, this opens an easy add-on that you could add to the code, Nikos, to be helpful in these situations: a new parameter (in the registrar where it makes more sense) to "multiply the uncertainty" of the ICP alignment results. The problem now is that the ICP covariance is overconfident, so we need to multiply the covariance by a factor > 1 (or the information matrix by a factor < 1) to make it less overconfident and rely a bit more on the odometry. That new parameter could be added to the config file, and increasing it should decrease the "amplitude" of the zig-zag, that's my bet ;-)
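A sketch of what that parameter would amount to (the function and parameter names are illustrative; in practice the factor would be read from the decider's section of the .ini file):

#include <Eigen/Dense>

// Deflate the confidence of an ICP edge: scale its covariance by a factor > 1
// (equivalently, scale the information matrix by 1/factor), so the optimizer
// relies a bit more on odometry and the zig-zag amplitude should shrink.
Eigen::Matrix3d deflateICPConfidence(const Eigen::Matrix3d& icp_cov,
                                     double uncertainty_factor /* > 1, from config */)
{
    return uncertainty_factor * icp_cov;
}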

Cheers,

bergercookie commented 8 years ago

I will ask you to do some modifications on the app's page to promote the use of, and interest in, graphslam-engine by other MRPT users. For that, I would reduce the content of the app's page to what is strictly necessary for an app user, plus basic information about the graph-SLAM approach (or the app's structure). The rest of the content of this page I would move to a .pdf file with more extensive information about the project, which may be useful for future contributors to your app, or to evaluate the work done in this GSoC project. In this way, a potential user just sees a well-summarized page for the app they intend to launch and use, while people more interested in the project/code structure can find more detailed information in the .pdf, which should also be referenced in the app's page.

Yes, this sounds like a better setup than the current one.

@jlblancoc @bergercookie we also need to think about the usefulness of both current graph-slam apps: graph-slam and graphslam-engine. I think only one app called graph-slam should remain: by renaming graphslam-engine to graph-slam and deleting graph-slam after a trial period to test the former and to verify that all the features of graph-slam are in graphslam-engine. Maybe we could move the current, more basic graph-slam to the samples section; what do you think? Should we make a new MRPT issue to address this?

I'll have to agree with @jlblancoc on this. graph-slam is an offline graph optimizer/viewer which takes .graph files as input, while graphslam-engine has a different functionality. It would be possible to merge the two applications, but that implies that the current graphslam-engine would have to support the graph-slam application's command-line arguments as well, which would make it more complex and harder to use. I think the given names (graph-slam, graphslam-engine) are pretty explanatory of their usage, and as for graphslam-engine, it's also consistent with the naming of the library files.

I was revising the proposal of this GSoC project and the timeline update you published in Evernote and I am quite happy with what was done. @bergercookie I would like to ask you to create a list of future improvements for this app, which may be included in the pdf I mentioned above and will serve as a guide for future developers or yourself. I would include in this list the optional parts of the project that were mentioned in the original project proposal, plus the new improvements that you can see now; you may list them by order of importance. My list would include:

  1. ROS integration
  2. RGB-D support
  3. Visual sensors support (working with point features as SIFT).
  4. Integration with other optimization tools: g2o ... anything else? maybe a list of 2D laser+odometry datasets to further test this application?

Yes, that's an important subject to discuss as well. Over the following months I will be using the current graphslam-engine setup to work on my master's thesis. Just so we are on the same page, the master's thesis is about multi-robot SLAM. Furthermore, the majority of the people in my laboratory use ROS for the inter/intra-robot communication. So I would suggest the following course of action for further development of graphslam-engine:

On the zig-zag: Ok, after your explanation I agree that the problem may be in the edges, not in the optimizer(s)... 0.30 m for LiDAR noise is far too high! More typical values are more like 0.01 to 0.03. Anyway, this opens an easy add-on that you could add to the code, Nikos, to be helpful in these situations: a new parameter (in the registrar where it makes more sense) to "multiply the uncertainty" of the ICP alignment results. The problem now is that the ICP covariance is overconfident, so we need to multiply the covariance by a factor > 1 (or the information matrix by a factor < 1) to make it less overconfident and rely a bit more on the odometry. That new parameter could be added to the config file, and increasing it should decrease the "amplitude" of the zig-zag, that's my bet ;-)

Ok, that should work, but how would the application know whether the LiDAR noise is high or low? Until now, the following solutions have been analyzed:

jlblancoc commented 8 years ago

Just a minor issue that I noticed today: the contents of share/mrpt/datasets/graphslam-engine-demos are really "heavy" (>90 MB!!), whereas the rest of the datasets in that directory are ~3 MB or so, since they are intended to be "example" datasets for small tests and demos. Larger datasets could be uploaded somewhere else and linked in a .txt.

@bergercookie do you think you could easily cut them? I could do it, but since you are the original author I guessed it would be more appropriate ;-) Two possibilities are "decimating" them or cutting them (both options are easy from RawLogViewer). Let me know if you find any problem in the process...

bergercookie commented 8 years ago

Ok @jlblancoc, I'll take care of it as soon as possible. :) Just as a reminder, would it be possible to update the "final results" section of the initial comment so that I can refer to this GitHub issue in the GSoC final evaluation?

jlblancoc commented 8 years ago

Done! Sorry for the delay ;-)

EduFdez commented 8 years ago

Hi @bergercookie and @jlblancoc

@jlblancoc I think that merging graph-slam & graphslam-engine is an option (that's what I meant about keeping only one). But since the current graph-slam is just an offline optimizer, I think it will be more interesting to rename it and leave it as it is (either in apps or samples). I think it will be clearer like this, with only one graph-slam application.

@bergercookie I am afraid that I cannot guide you properly on the ROS integration; I've used ROS but I have hardly coded with it, and it was a long time ago. Maybe @jlblancoc can answer your question above.

@bergercookie Regarding the list of future improvements for this app, my point is that there should be a record after this GSoC of how to continue this project. That's why I asked you to write it in the pdf, where you can put information about your app which does not fit in the app's page. It's great that you will address some of these in your master's thesis, while the others may be addressed by other MRPT contributors interested in graph-SLAM. All in all, I agree with the issue of Node Reduction; maybe it can be applied together with the sub-mapping strategy of the graph cut which is already working. I will have a look at the papers you mentioned and I'll tell you my thoughts.

Ok, that should work, but how would the application know when the LiDAR noise is high or low?

It's hard to say... if you know the exact LiDAR device, then you should have access to the specification sheet, where the error given normally corresponds (more or less) to the error's standard deviation. If it's the case of a synthetic dataset, then you know exactly what your error model is. Since most of the datasets available are quite accurate (error sigma ~0.01 m), you may try to add a synthetic extra error on top when you are interested in analyzing the behaviour of bigger errors. There is an important difference here between synthetic and real datasets, because the latter may have an error model very different from the one provided, as it may depend on the experimental conditions, etc. Relatively speaking, problems will only arise when the model is overconfident (not the other way around). You may notice that your model is overconfident if the results of your experiment don't fit what you expect, like in this zig-zag problem. I think that @jlblancoc's idea about multiplying the covariance by a factor is a great one, since it can be programmed easily and you can test whether the zig-zag comes from an overconfident LiDAR model by progressively reducing this confidence through the factor.

PS. Since you are planning to continue contributing to this project after this GSoC, I will be glad to continue helping you with it as well, and I think the same applies to @jlblancoc. Should we still communicate through this thread, or should we open a new one after the GSoC?

Cheers

bergercookie commented 8 years ago

On 19/08/16, Eduardo Fernandez-Moral wrote:

It's hard to say... if you know the exact LiDAR device, then you should have access to the specification sheet, where the error given normally corresponds (more or less) to the error's standard deviation. If it's the case of a synthetic dataset, then you know exactly what your error model is. Since most of the datasets available are quite accurate (error sigma ~0.01 m), you may try to add a synthetic extra error on top when you are interested in analyzing the behaviour of bigger errors. There is an important difference here between synthetic and real datasets, because the latter may have an error model very different from the one provided, as it may depend on the experimental conditions, etc. Problems will only arise when the model is overconfident (not the other way around). You may notice that your model is overconfident if the results of your experiment don't fit what you expect, like in this zig-zag problem. I think that @jlblancoc's idea about multiplying the covariance by a factor is a great one, since it can be programmed easily and you can test whether the zig-zag comes from an overconfident LiDAR model by progressively reducing this confidence through the factor.

Ok, this makes sense. I can program it and see whether the zig-zag is smoothed out.

PS. Since you are planning to continue contributing to this project after this GSoC, I will love to continue helping you on it as well, and I think the same applies to @jlblancoc , should we still communicate through this thread? or should we open a new one after the GSoC?

Of course! I'd love it if you and @jlblancoc could continue helping me out through the rest of this project. I could open up 3-4 "enhancement" issues, each devoted to one of the features previously mentioned. This way the communication can be more focused and the threads shorter.

Cheers


jlblancoc commented 8 years ago

Sure! Please, open new issue tickets for each task and I'll be glad of keeping alive our collaboration in this project ;-)

EduFdez commented 8 years ago

Hi @bergercookie

I have to complete the final evaluation of this GSoC and I want to do a final review of the app and its documentation. I think that the changes I asked you to apply on the app's page (and the pdf) should be part of this GSoC, and I wanted to ask you to do them before I submit the evaluation. Is that OK?

bergercookie commented 8 years ago

Hi @EduFdez,

I think that the changes I asked you to apply on the app's page (and the pdf) should be part of this GSoC, and I wanted to ask you to do them before I submit the evaluation. Is that OK?

Absolutely! I am going to deal with this today. Do you want me to make a separate pull-request for this change, or submit it with the rest of the graphslam-engine improvements mentioned in https://github.com/MRPT/mrpt/issues/318?

EduFdez commented 8 years ago

Hi @bergercookie

You can do it in a separate pull request, which may serve as a landmark for the end of this GSoC. This pull request should contain the pdf file. The app's page that you must modify as well is out of the scope of the pull request, because you will edit it outside the MRPT sources (through WordPress).

jlblancoc commented 8 years ago

@bergercookie: I wanted to express my sincere congratulations for your work on this project... very well done, you have shown yourself to be a very prolific programmer and, very importantly, shown that you also work on the theoretical aspects of SLAM, not only the coding stuff ;-)

@EduFdez: I'm unsure whether there are still some missing points in this TO-DO list (there is a completely new one here) that should prevent us from officially marking this issue as "closed"?

EduFdez commented 8 years ago

@bergercookie and @jlblancoc : Yes, we should mark this issue as closed.

@bergercookie Congratulations once again! Your work was excellent. I hope you enjoyed it and you learnt lots of valuable stuff :-)

@jlblancoc : thank you for making MRPT one of the organizations in this GSoC, I know it was a lot of work.

Happy end of summer to both of you ;-)

jlblancoc commented 8 years ago

Thanks! :+1: I also wish to both of you a happy and fun end of summer!

bergercookie commented 8 years ago

@jlblancoc @EduFdez thanks again for the opportunity to work on the MRPT project. I enjoyed every minute of the previous 3 months of coding. Git, C++, MRPT, SLAM: I think I learned quite a lot :) I will continue working on graphslam-engine and MRPT in general, and I look forward to collaborating with you on the projects to come. :)

Cheers, Nikos