SteveMacenski / slam_toolbox

Slam Toolbox for lifelong mapping and localization in potentially massive maps with ROS
GNU Lesser General Public License v2.1

Performance degradation with low field of view lidar #239

Open Enri2077 opened 4 years ago

Enri2077 commented 4 years ago

Steps to reproduce issue

Run slam_toolbox in simulation using Gazebo 9 and a model similar to the Turtlebot 3 Waffle, but with a lower lidar field of view (FOV). I can provide the rosbags for every run I used to establish the performance degradation; more on this later.

Expected behavior

Mapping performance degrades with a lower lidar field of view, but still results in a decent map.

Actual behavior

With a low field of view lidar (90°, pointing forward), localization and mapping performance is so poor that slam_toolbox is not able to reconstruct the map, even with ideal odometry.

Example of a map with 90° FOV:

slam_toolbox map

And the ground truth: ground truth map

Additional information

I'm working on benchmarking the performance of SLAM and localization algorithms, specifically gmapping, amcl, and slam_toolbox. I do this by simulating the robot navigating through the environment in Gazebo and executing multiple runs with different components and different combinations of parameters, such as the field of view of the lidar.

The performance is surprisingly degraded with low FOV, to the point that I thought of opening this issue and asking if maybe I should try a different configuration than the default.

Here is some data for the slam benchmark:

| slam_node | laser_scan_fov_deg | min | avg | max |
| --- | --- | --- | --- | --- |
| slam_toolbox, CauchyLoss | 90 | 0.448 | 2.087 | 3.582 |
| slam_toolbox, CauchyLoss | 180 | 0.074 | 0.102 | 0.136 |
| slam_toolbox, CauchyLoss | 270 | 0.041 | 0.054 | 0.074 |
| slam_toolbox, CauchyLoss | 359 | 0.051 | 0.052 | 0.053 |
| slam_toolbox, HuberLoss | 90 | 0.571 | 1.439 | 2.597 |
| slam_toolbox, HuberLoss | 180 | 0.088 | 0.181 | 0.395 |
| slam_toolbox, HuberLoss | 270 | 0.043 | 0.541 | 2.517 |
| slam_toolbox, HuberLoss | 359 | 0.043 | 0.053 | 0.066 |
| slam_toolbox, Squared loss | 90 | 1.164 | 3.556 | 5.503 |
| slam_toolbox, Squared loss | 180 | 0.391 | 1.290 | 3.358 |
| slam_toolbox, Squared loss | 270 | 0.314 | 0.418 | 0.536 |
| slam_toolbox, Squared loss | 359 | 0.048 | 0.104 | 0.187 |
| gmapping | 90 | 0.098 | 0.159 | 0.220 |
| gmapping | 180 | 0.014 | 0.048 | 0.081 |
| gmapping | 270 | 0.008 | 0.035 | 0.049 |
| gmapping | 359 | 0.007 | 0.025 | 0.048 |

The values min, avg, and max refer to the minimum, mean, and maximum of the localization error (in meters) measured over 5 different runs. The localization error is simply the mean of the Euclidean distance between the estimated position and the ground truth position, sampled at 1 Hz throughout the run. I have computed other metrics, but I think this one gives a good idea of the distortion of the map.
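
For reference, a minimal sketch of how such a metric can be computed (my own illustration, not the actual benchmark code), assuming both trajectories have already been sampled at 1 Hz and associated by timestamp:

import numpy as np

def mean_localization_error(estimated_xy, ground_truth_xy):
    """Mean Euclidean distance between estimated and ground-truth positions,
    both given as (N, 2) arrays sampled at the same timestamps."""
    estimated_xy = np.asarray(estimated_xy, dtype=float)
    ground_truth_xy = np.asarray(ground_truth_xy, dtype=float)
    return float(np.mean(np.linalg.norm(estimated_xy - ground_truth_xy, axis=1)))

# The min / avg / max columns in the table above would then be the minimum,
# mean, and maximum of this per-run value over the 5 runs of a configuration.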

The lidar max range is 30 m and the odometry is ideal (no odometry error). I have data collected with different max range and odometry error values, but my problem is currently with the field of view. All this data was collected in one environment; I'm going to collect data in other environments shortly.

As you can see, gmapping performs decently for all FOV values, while the performance of slam_toolbox is much more affected by the field of view. Interestingly, it looks like CauchyLoss is the best choice for this setup (at least in this specific environment), but even using CauchyLoss I'm not able to obtain a good map with the 90° FOV lidar.
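
For intuition on why the choice of loss matters here, below is a small sketch of the robust losses involved, as I understand Ceres defines them (rho acts on the squared residual s of each constraint; the scale a = 1 is only illustrative):

import math

def squared_loss(s):
    return s                               # no downweighting: bad matches dominate

def huber_loss(s, a=1.0):
    # quadratic near zero, linear in the residual for large s
    return s if s <= a * a else 2.0 * a * math.sqrt(s) - a * a

def cauchy_loss(s, a=1.0):
    # grows only logarithmically, so outlier scan matches are suppressed hardest
    return a * a * math.log1p(s / (a * a))

With only 90° of view, scan matches are more often wrong, so the loss that suppresses those bad constraints most aggressively (Cauchy) degrades the least, which is consistent with the ordering in the table above.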

Here is the configuration I have used for slam_toolbox (in the case of 90° fov, specifically):

# Plugin params
solver_plugin: solver_plugins::CeresSolver
ceres_linear_solver: SPARSE_NORMAL_CHOLESKY
ceres_preconditioner: SCHUR_JACOBI
ceres_trust_strategy: LEVENBERG_MARQUARDT
ceres_dogleg_type: TRADITIONAL_DOGLEG
ceres_loss_function: HuberLoss  # default None

# ROS Parameters
odom_frame: odom_realistic
map_frame: map
base_frame: base_footprint_realistic
scan_topic: /scan
mode: mapping

debug_logging: false
throttle_scans: 1
transform_publish_period: 0.02  # if 0 never publishes odometry
map_update_interval: 5.0
resolution: 0.05
max_laser_range: 20.0  # for rastering images
minimum_time_interval: 0.5
transform_timeout: 0.2
tf_buffer_duration: 30.
stack_size_to_use: 40000000  # program needs a larger stack size to serialize large maps
enable_interactive_mode: false

# General Parameters
use_scan_matching: true
use_scan_barycenter: true
minimum_travel_distance: 0.2  # default 0.5
minimum_travel_heading: 0.1  # default 0.5
scan_buffer_size: 10
scan_buffer_maximum_scan_distance: 10
link_match_minimum_response_fine: 0.1  
link_scan_maximum_distance: 1.5
loop_search_maximum_distance: 3.0
do_loop_closing: true 
loop_match_minimum_chain_size: 10           
loop_match_maximum_variance_coarse: 3.0  
loop_match_minimum_response_coarse: 0.35    
loop_match_minimum_response_fine: 0.45

# Correlation Parameters - Correlation Parameters
correlation_search_space_dimension: 0.5
correlation_search_space_resolution: 0.01
correlation_search_space_smear_deviation: 0.1 

# Correlation Parameters - Loop Closure Parameters
loop_search_space_dimension: 8.0
loop_search_space_resolution: 0.05
loop_search_space_smear_deviation: 0.03

# Scan Matcher Parameters
distance_variance_penalty: 0.5      
angle_variance_penalty: 1.0    

fine_search_angle_offset: 0.00349     
coarse_search_angle_offset: 0.349   
coarse_angle_resolution: 0.0349        
minimum_angle_penalty: 0.9
minimum_distance_penalty: 0.5
use_response_expansion: true

ceres_loss_function and max_laser_range are set to different values in each run. I decreased minimum_travel_distance and minimum_travel_heading. I'm running the async_slam_toolbox_node node.

I can provide all the bags, the Docker image in which I run the experiments, and all the data from the runs if it helps. In any case, the code for building the Docker image and running the benchmark is here: https://github.com/Enri2077/slam_performance_modelling.git and the environment data is here: https://github.com/Enri2077/performance_modelling_test_datasets.git (the airlab environment is the one used to collect this data).

Do you think this performance degradation is unavoidable, or might it be possible to find a configuration that performs better with all these sensors (low/high FOV, low/high max range, etc.)? I would like to have the best configuration before proceeding to benchmark your component.

SteveMacenski commented 4 years ago

maybe I should try a different configuration than the default.

Probably yes

Your simulation environment is also really poor for most SLAM algorithms, with large areas without any features (perfect right angles, no objects in any given 90° FOV, etc.), so you might want to test on a real representative dataset or make a more realistically modeled space.

You can even see this in your own data: the top of the map is better aligned than the bottom because there are some somewhat (but still poor) unique features to grab onto for relative scan matching. This benchmark doesn’t seem fair. This is an overwhelmingly well-documented characteristic of matching-based SLAM algorithms, and within the ROS community there are bunches of ROS Answers questions about the shifting you see in Karto, Slam Toolbox, and even gmapping in featureless simulation environments. For reference, even a real-world “box world” will naturally have enough features from door frames, dents in the walls, and maybe a couple of chairs that it won’t show that drifting issue. That’s solely a simulation artifact if the world designer didn’t add enough “texture” to seem real.

That’s why you see that massive jump from 270 to 360, even though that 360 number is what you also see with 170° lidars in good simulation environments and real-world analysis. About 5 cm is correct (well, it depends on the resolution of the map and lidar quality). The drifting you see would never happen in a featured environment or real-world space. It wouldn’t surprise me if 90° wasn’t as good as 170°+, but what you’ve shown / described is primarily a result of your experimental setup.

SteveMacenski commented 4 years ago

Try even just using the turtlebot3 simulation playground for your benchmarks. It's still small and about the same size as yours, but it has some features in it (e.g. this world https://navigation.ros.org/getting_started/index.html but for ROS1).

SteveMacenski commented 4 years ago

Let me know if I'm over-explaining and you know all this.

Typically, gmapping is not used by industry or even really academia anymore (it's 15 years old) for a few reasons. It's still a good benchmark for toy cases, because the use case is a small environment for a limited duration, and it works well in that scope.

Typically industry uses some graph-based SLAM; this is one such option, and all the 3D lidar SLAM systems also use graphs as the underlying structure because they scale well and allow you to complete complex loop closures. Gmapping is based on a particle filter, and the more updates that pass before a potential loop closure, the less likely it is to actually loop-close correctly. This causes no loop closures to be found and then lots of deconstruction of the map. You see this in warehouses with long aisles, retail store aisles, autonomous cars going around a city block, etc. This is the largest downfall of particle-filter-based approaches. The strength of software like this isn't only in the accuracy of the map and positioning you can get, but also in the scale of maps it can support in real time.
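
As a rough illustration of the structure being described (not slam_toolbox's actual internals), a pose graph keeps every past pose as a node, so a loop closure is just one extra edge in the optimization problem no matter how long ago the revisited area was mapped:

from dataclasses import dataclass, field

@dataclass
class PoseGraph:
    # node id -> (x, y, theta) pose estimate
    nodes: dict = field(default_factory=dict)
    # (from_id, to_id, relative_pose, information) constraints
    edges: list = field(default_factory=list)

    def add_odometry(self, new_id, pose, rel_pose, info):
        # consecutive-node constraint from scan matching / odometry
        self.nodes[new_id] = pose
        self.edges.append((new_id - 1, new_id, rel_pose, info))

    def add_loop_closure(self, from_id, to_id, rel_pose, info):
        # a single extra constraint; the optimizer then redistributes the
        # accumulated drift over the whole chain of nodes in between
        self.edges.append((from_id, to_id, rel_pose, info))

A particle filter, in contrast, has to keep the hypothesis corresponding to the correct closure alive in its particle set the entire time, which is exactly what tends to fail over long aisles or city blocks.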

In the early days of a startup I worked at, we used gmapping to try to map 120,000 sqft spaces and it would easily take 10 hours, because we'd have to very carefully watch the particle cloud and back out of divergences, or else the mapping would fail and we'd usually have to restart multiple times. This work was specifically made to end those issues, and since then we've scaled well above 250,000 sqft with this package in real time, with very high quality maps, without restarting, and with untrained staff. That process takes about 45 minutes.

tl;dr from my comments:

Is this for a paper or something? Or just some internal R&D for a company? (If it's a paper, I should give you a citation to use.)

Enri2077 commented 4 years ago

Thanks for the thorough reply, and no worries about over-explaining; all info is welcome.

Your simulation environment is also really poor for most SLAM algorithms, with large areas without any features (perfect right angles, no objects in any given 90° FOV, etc.), so you might want to test on a real representative dataset or make a more realistically modeled space.

You can even see this in your own data: the top of the map is better aligned than the bottom because there are some somewhat (but still poor) unique features to grab onto for relative scan matching. This benchmark doesn’t seem fair. This is an overwhelmingly well-documented characteristic of matching-based SLAM algorithms, and within the ROS community there are bunches of ROS Answers questions about the shifting you see in Karto, Slam Toolbox, and even gmapping in featureless simulation environments. For reference, even a real-world “box world” will naturally have enough features from door frames, dents in the walls, and maybe a couple of chairs that it won’t show that drifting issue. That’s solely a simulation artifact if the world designer didn’t add enough “texture” to seem real.

I agree these environments are very simple, we plan to add more with different degrees of cluttering eventually.

That’s why you see that massive jump from 270 to 360, even though that 360 number is what you also see with 170° lidars in good simulation environments and real-world analysis. About 5 cm is correct (well, it depends on the resolution of the map and lidar quality). The drifting you see would never happen in a featured environment or real-world space. It wouldn’t surprise me if 90° wasn’t as good as 170°+, but what you’ve shown / described is primarily a result of your experimental setup.

Try even just using the turtlebot3 simulation playground for your benchmarks. It's still small and about the same size as yours, but it has some features in it (e.g. this world https://navigation.ros.org/getting_started/index.html but for ROS1).

I included turtlebot3_world in the environments and ran the benchmark, this time with the more interesting range of fov = [90, 100, 140, 180]. The results are still based on 5 runs for each parameter combination.

| ceres_loss_function | laser_scan_fov_deg | min | avg | max |
| --- | --- | --- | --- | --- |
| CauchyLoss | 90 | 0.241 | 0.621 | 1.332 |
| CauchyLoss | 100 | 0.090 | 0.553 | 1.734 |
| CauchyLoss | 140 | 0.092 | 0.263 | 0.833 |
| CauchyLoss | 180 | 0.048 | 0.083 | 0.116 |
| HuberLoss | 90 | 0.589 | 1.276 | 1.903 |
| HuberLoss | 100 | 0.105 | 0.847 | 2.404 |
| HuberLoss | 140 | 0.085 | 0.136 | 0.264 |
| HuberLoss | 180 | 0.067 | 0.099 | 0.125 |
| None | 90 | 0.083 | 0.696 | 1.866 |
| None | 100 | 0.064 | 0.528 | 1.338 |
| None | 140 | 0.053 | 0.072 | 0.119 |
| None | 180 | 0.042 | 0.057 | 0.075 |

From an environment with this many features I would have expected better performance. Even the map from the run with the lowest error (0.241 m) among those with 90° FOV and CauchyLoss seems to have alignment problems: map gif from run 42

Since turtlebot3_world is quite self-similar, I also ran the benchmark again in the environment from before, which I modified to add features and make it less self-similar. In this run I also removed the lidar noise (previously it was set to a 1 cm standard deviation, the default in the Gazebo model). Unfortunately the performance is still bad.

New ground truth map: airlab_2 gt map

Result: airlab_2 map gif

Do you think this new version of the environment should have had enough features to guarantee a good localization/map? In that case, could you suggest which parameters may improve the performance with low fov?

Typically industry uses some graph-based SLAM; this is one such option, and all the 3D lidar SLAM systems also use graphs as the underlying structure because they scale well and allow you to complete complex loop closures. Gmapping is based on a particle filter, and the more updates that pass before a potential loop closure, the less likely it is to actually loop-close correctly. This causes no loop closures to be found and then lots of deconstruction of the map. You see this in warehouses with long aisles, retail store aisles, autonomous cars going around a city block, etc. This is the largest downfall of particle-filter-based approaches. The strength of software like this isn't only in the accuracy of the map and positioning you can get, but also in the scale of maps it can support in real time.

We plan to include metrics that consider the size of the environment and the length of the loops. In fact, one result we are trying to get from these benchmarks is to identify the limits of different algorithms/approaches given different sensor and environment characteristics.

Is this for a paper or something? Or just some internal R&D for a company? (If it's a paper, I should give you a citation to use.)

Yes, this will eventually become a paper, but more generally I'm doing research on benchmarking of robot systems.

Enri2077 commented 4 years ago

Bonus graph: average of mean_absolute_error for the combinations of loss function and field of view plot

SteveMacenski commented 4 years ago

we plan to add more with different degrees of cluttering eventually.

In my opinion, the experiments in an empty simulation room aren't really valid, since they don't represent a realistic environment with the texture required for SLAM.

It doesn't surprise me that you see bad performance with this (and Karto, maybe Cartographer too, given how it works) with a 90 degree scanner. Matcher approaches are going to require enough frame-to-frame correlations to make a map of a space. Usually the number you're looking for is 30% overlap and 70% new space (where that 30% overlap needs some type of features to utilize). If you don't see that, then consider decreasing the update distance / orientation change parameters:

minimum_travel_distance: 0.5 # I don't know, try 0.15?
minimum_travel_heading: 0.5 # I don't know, try 0.2?

That might help with the 90 degree part (more CPU though). Overall, for that low visibility in a small space, I would expect gmapping to probably beat this work. Matcher-based approaches aren't really suited for low visibility. They're used most frequently in industry, among other reasons, with sensors that have larger visibility (>= 180°). Ignoring the HuberLoss results, since they are really different from the other two, I think you could still do better with the other two with some parameter tuning (though the best performance will be at ~270-360°). The ones I mentioned above should help, but the block below would also help (a rough way to check scan-to-scan overlap on your own data is sketched after it). These parameters dictate how the matching and loop search are done. They're tuned for industrial applications with good scan-to-scan matches, so if you have lower visibility you may need to increase the deviations and decrease the penalties to encourage it to try a bit harder to match effectively. The resolutions and dimensions might also help, but I'm not sure off the top of my head.

# Correlation Parameters - Correlation Parameters
correlation_search_space_dimension: 0.5
correlation_search_space_resolution: 0.01
correlation_search_space_smear_deviation: 0.1 

# Correlation Parameters - Loop Closure Parameters
loop_search_space_dimension: 8.0
loop_search_space_resolution: 0.05
loop_search_space_smear_deviation: 0.03
loop_search_maximum_distance: 5.0

# Scan Matcher Parameters
distance_variance_penalty: 0.5      
angle_variance_penalty: 1.0    

fine_search_angle_offset: 0.00349     
coarse_search_angle_offset: 0.349   
coarse_angle_resolution: 0.0349        
minimum_angle_penalty: 0.9
minimum_distance_penalty: 0.5
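
If you want to sanity-check the 30% overlap rule of thumb on your bags, here is a rough sketch of an overlap estimate (my own approximation, not anything slam_toolbox computes), given two consecutive scans converted to 2D points and the odometry-reported relative pose between them:

import numpy as np

def scan_overlap(prev_xy, curr_xy, rel_pose, tol=0.1):
    """Fraction of current-scan points landing within `tol` meters of some
    point of the previous scan, after moving the current scan by the
    odometry-reported relative pose (x, y, yaw). Both scans are (N, 2)
    arrays of points in their own sensor frames."""
    x, y, yaw = rel_pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    curr_in_prev = curr_xy @ R.T + np.array([x, y])
    # brute-force nearest-neighbor distances; fine for a few hundred beams
    d = np.linalg.norm(curr_in_prev[:, None, :] - prev_xy[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) < tol))

If this routinely drops well below ~0.3 between consecutive accepted scans with the 90° sensor, the update thresholds are probably still too coarse for that field of view.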

Do you think this new version of the environment should have had enough features to guarantee a good localization/map?

Frankly, I'm not totally sure; I don't make simulation environments, I have access to tons of large-environment live data. I think you need to have some objects in the actual room, and not all just circles. Edges are good. Things that aren't axially aligned are good. Things that stay visible between 2 successive frames with low visibility would be good.

AWS has some worlds they've made with a bunch of stuff, probably better for your benchmark / paper anyhow.

Yes, this will eventually become a paper, but more generally I'm doing research on benchmarking of robot systems.

OK, I'll work on making this work citable then. I was planning on writing some papers on this but haven't gotten to it yet (too busy with papers on navigation2). I can make it citable pretty quickly through the Journal of Open Source Software, though I'll likely publish a better paper down the road with more actual technical content.

SteveMacenski commented 4 years ago

FYI - see the bunch of PRs I just added; if you weren't building with release builds, this will give you about a 50% speed up.

imstevenpmwyca commented 3 years ago

Hello @SteveMacenski,

OK, I'll work on making this work citable then. I was planning on writing some papers on this but haven't gotten to it yet (too busy with papers on navigation2).

Do you have any updates regarding this? It would be useful to have some technical content on slam_toolbox (especially because I will be working with it - and contributing - for the coming months).

SteveMacenski commented 3 years ago

The paper is in review - which, as you may have experienced, can take a long time. In the meantime, you can cite my ROSCon 2019 talk (https://scholar.google.com/scholar?oi=bibs&hl=en&cites=15746430258596423963):

S. Macenski, “On Use of the SLAM Toolbox: A Fresh(er) Look at Mapping and Localization for the Dynamic World,” in ROSCon, 2019.

Let me know what your planned contributions are! I have a small backlog of tickets that I need to get to (I dropped the ball on this project at the end of 2020 due to other issues; I plan on picking it back up in the coming weeks), but it would be nice to know what to expect coming.

imstevenpmwyca commented 3 years ago

Dear @SteveMacenski, thanks for the quick reply; I'm looking forward to reading the paper when it comes out. I'm planning to use slam_toolbox mainly to improve the localization and mapping capabilities of an industrial mobile robot, specifically its performance when working in changing/dynamic environments and in long-term operation. I would like to know the roadmap of improvements you have in mind, so I have a better intuition of what is missing and what I can contribute. I'm not a complete expert, but I'm familiar with pose-graph SLAM and ROS, and I will be working on the project for 6 months. I'm also not sure if this is the place to discuss this; we can create a new thread if you prefer. Thanks for your consideration!

SteveMacenski commented 3 years ago

A new thread would probably be good. I might suggest filing a new issue with some details about what you're looking to accomplish/add/fix, what you'd like from me, and maybe a rough roadmap of your plans.

imstevenpmwyca commented 3 years ago

Sure, I will update as soon as we finish the comparison between our current solution and slam_toolbox.

arad2456 commented 3 months ago

I have a similar problem. I use ROS 2 Humble on Ubuntu 22.04.4 LTS.

I ran a simulation in Gazebo to try SLAM, and it worked great with a lidar with a FOV of 360°. However, after I reduced the angle to fit the lidar available to me, which has a FOV of 70°, the performance degraded significantly. Here (#239), you recommended reducing the values of the parameters minimum_travel_distance and minimum_travel_heading in the configuration file and using an environment with many features to improve performance. I reduced the parameters to the values you recommended and much lower, but the performance wasn't as good as before. As for the world, I used the Turtlebot house, which, I suppose, fulfills the world requirements. I drove a loop around the house, and the results are really bad. The world was mapped perfectly with the 360° beam, but the results with the 70° beam are disappointing: Results of SLAM with a 70° FoV lidar

This is the world that should be mapped. I walked around the table in the main room, then went to the other room to the right, circled the table there, and returned to the initial position: The world to be mapped

I would be grateful for your help and any recommendations on how to improve performance.

This is my urdf file:

<?xml version="1.0"?>
<robot name="Lafette" xmlns:xacro="http://ros.org/wiki/xacro">

  <!-- Define robot constants -->
  <xacro:property name="base_width" value="0.4"/>
  <xacro:property name="base_length" value="0.6"/>
  <xacro:property name="base_height" value="0.2"/>

  <xacro:property name="lafette_mass" value="5"/>
  <xacro:property name="wheel_mass" value="0.5"/>

  <xacro:property name="wheel_radius" value="0.1025"/>
  <xacro:property name="wheel_width" value="0.075"/>
  <xacro:property name="wheel_zoff" value="${base_height / 2}"/>
  <xacro:property name="wheel_yoff" value="${base_width / 2}"/>
  <xacro:property name="wheel_xoff" value="${base_length / 2}"/>

<!-- Define inertial property macros -->
  <xacro:macro name="box_inertia" params="m w h d">
    <inertial>
      <origin xyz="0 0 0" rpy="${pi/2} 0 ${pi/2}"/>
      <mass value="${m}"/>
      <inertia ixx="${(m/12) * (h*h + d*d)}" ixy="0.0" ixz="0.0" iyy="${(m/12) * (w*w + d*d)}" iyz="0.0" izz="${(m/12) * (w*w + h*h)}"/>
    </inertial>
  </xacro:macro>

  <xacro:macro name="cylinder_inertia" params="m r h">
    <inertial>
      <origin xyz="0 0 0" rpy="${pi/2} 0 0" />
      <mass value="${m}"/>
      <inertia ixx="${(m/12) * (3*r*r + h*h)}" ixy = "0" ixz = "0" iyy="${(m/12) * (3*r*r + h*h)}" iyz = "0" izz="${(m/2) * (r*r)}"/>
    </inertial>
  </xacro:macro>

  <xacro:macro name="sphere_inertia" params="m r">
    <inertial>
      <mass value="${m}"/>
      <inertia ixx="${(2/5) * m * (r*r)}" ixy="0.0" ixz="0.0" iyy="${(2/5) * m * (r*r)}" iyz="0.0" izz="${(2/5) * m * (r*r)}"/>
    </inertial>
  </xacro:macro>

  <!-- Robot Base -->
  <link name="base_link">
    <visual>
      <origin xyz="0.0 0.0 ${(wheel_radius+wheel_zoff)}" rpy="0 0 0"/>
      <geometry>
        <box size="${base_length} ${base_width} ${base_height}"/>
      </geometry>
      <material name="Cyan">
        <color rgba="0 1.0 1.0 1.0"/>
      </material>
    </visual>

    <collision>
      <origin xyz="0.0 0.0 ${(wheel_radius+wheel_zoff)}" rpy="0 0 0"/>
      <geometry>
        <box size="${base_length} ${base_width} ${base_height}"/>
      </geometry>
    </collision>

    <xacro:box_inertia m="${lafette_mass}" w="${base_width}" d="${base_length}" h="${base_height}"/>

  </link>

    <!-- Robot Footprint (non-physical link for ROS; we need to bring the CoG of the robot to the central plane on the ground) -->
  <link name="base_footprint"/>

  <joint name="base_joint" type="fixed">
    <parent link="base_link"/>
    <child link="base_footprint"/>
    <origin xyz="0.0 0.0 ${(wheel_radius+wheel_zoff)}" rpy="0 0 0"/>
  </joint>

<link name="imu_link">
  <visual>
    <geometry>
      <box size="0.1 0.1 0.1"/>
    </geometry>
  </visual>

  <collision>
    <geometry>
      <box size="0.1 0.1 0.1"/>
    </geometry>
  </collision>

  <xacro:box_inertia m="0.1" w="0.1" d="0.1" h="0.1"/>
</link>

<joint name="imu_joint" type="fixed">
  <parent link="base_link"/>
  <child link="imu_link"/>
  <origin xyz="0 -0.1 ${wheel_radius + base_height + 0.1/2}"/>
</joint>

 <gazebo reference="imu_link">
  <sensor name="imu_sensor" type="imu">
   <plugin filename="libgazebo_ros_imu_sensor.so" name="imu_plugin">
      <ros>
        <namespace>/demo</namespace>
        <remapping>~/out:=imu</remapping>
      </ros>
      <initial_orientation_as_reference>false</initial_orientation_as_reference>
    </plugin>
    <always_on>true</always_on>
    <update_rate>100</update_rate>
    <visualize>true</visualize>
    <imu>
      <angular_velocity>
        <x>
          <noise type="gaussian">
            <mean>0.0</mean>
            <stddev>2e-4</stddev>
            <bias_mean>0.0000075</bias_mean>
            <bias_stddev>0.0000008</bias_stddev>
          </noise>
        </x>
        <y>
          <noise type="gaussian">
            <mean>0.0</mean>
            <stddev>2e-4</stddev>
            <bias_mean>0.0000075</bias_mean>
            <bias_stddev>0.0000008</bias_stddev>
          </noise>
        </y>
        <z>
          <noise type="gaussian">
            <mean>0.0</mean>
            <stddev>2e-4</stddev>
            <bias_mean>0.0000075</bias_mean>
            <bias_stddev>0.0000008</bias_stddev>
          </noise>
        </z>
      </angular_velocity>
      <linear_acceleration>
        <x>
          <noise type="gaussian">
            <mean>0.0</mean>
            <stddev>1.7e-2</stddev>
            <bias_mean>0.1</bias_mean>
            <bias_stddev>0.001</bias_stddev>
          </noise>
        </x>
        <y>
          <noise type="gaussian">
            <mean>0.0</mean>
            <stddev>1.7e-2</stddev>
            <bias_mean>0.1</bias_mean>
            <bias_stddev>0.001</bias_stddev>
          </noise>
        </y>
        <z>
          <noise type="gaussian">
            <mean>0.0</mean>
            <stddev>1.7e-2</stddev>
            <bias_mean>0.1</bias_mean>
            <bias_stddev>0.001</bias_stddev>
          </noise>
        </z>
      </linear_acceleration>
    </imu>
  </sensor>
</gazebo>

<!-- Wheels -->
  <xacro:macro name="wheel" params="prefix x_reflect y_reflect">
    <link name="${prefix}_link">
      <visual>
        <origin xyz="0 0 0" rpy="${pi/2} 0 0"/>
        <geometry>
            <cylinder radius="${wheel_radius}" length="${wheel_width}"/>
        </geometry>
        <material name="Gray">
          <color rgba="0.5 0.5 0.5 1.0"/>
        </material>
      </visual>

      <collision>
        <origin xyz="0 0 0" rpy="${pi/2} 0 0"/>
        <geometry>
          <cylinder radius="${wheel_radius}" length="${wheel_width}"/>
        </geometry>
      </collision>

      <xacro:cylinder_inertia m="${wheel_mass}" r="${wheel_radius}" h="${wheel_width}"/>

    </link>

    <joint name="${prefix}_joint" type="continuous">
      <parent link="base_link"/>
      <child link="${prefix}_link"/>
      <origin xyz="${x_reflect*(wheel_xoff - 2*wheel_radius)} ${y_reflect*(wheel_yoff + wheel_width/2)} ${wheel_radius}" rpy="0 0 0"/>
      <axis xyz="0 1 0"/>
    </joint>
  </xacro:macro>

<!-- indices: f-front, b-back, l-left, r-right -->
  <xacro:wheel prefix="drivewhl_fl" x_reflect="2" y_reflect="1" />
  <xacro:wheel prefix="drivewhl_fr" x_reflect="2" y_reflect="-1" />
  <xacro:wheel prefix="drivewhl_bl" x_reflect="-2" y_reflect="1" />
  <xacro:wheel prefix="drivewhl_br" x_reflect="-2" y_reflect="-1" />

<gazebo>

 <!-- <plugin name='skid_steer_drive' filename='libgazebo_ros_diff_drive.so'> -->
<plugin name='diff_drive' filename='libgazebo_ros_diff_drive.so'>

        <!-- Number of wheel pairs -->
        <num_wheel_pairs>2</num_wheel_pairs>

        <!-- wheels0 -->
        <left_joint>drivewhl_fl_joint</left_joint>
        <right_joint>drivewhl_fr_joint</right_joint>

        <!-- wheels1-->
        <left_joint>drivewhl_bl_joint</left_joint>
        <right_joint>drivewhl_br_joint</right_joint>

        <!-- kinematics -->
        <wheel_separation>0.6</wheel_separation>
        <wheel_diameter>0.2</wheel_diameter>

        <!-- limits -->
        <max_wheel_torque>20</max_wheel_torque>
        <max_wheel_acceleration>1.0</max_wheel_acceleration>

        <!-- output -->
        <publish_odom>true</publish_odom>
        <publish_odom_tf>true</publish_odom_tf>
        <publish_wheel_tf>true</publish_wheel_tf>

        <odometry_frame>odom</odometry_frame>
        <robot_base_frame>base_link</robot_base_frame>

      </plugin>

</gazebo>

<link name="camera_link">
  <visual>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <geometry>
      <box size="0.015 0.130 0.022"/>
    </geometry>
  </visual>

  <collision>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <geometry>
      <box size="0.015 0.130 0.022"/>
    </geometry>
  </collision>

  <inertial>
    <origin xyz="0 0 0" rpy="0 0 0"/>
    <mass value="0.035"/>
    <inertia ixx="0.001"  ixy="0"  ixz="0" iyy="0.001" iyz="0" izz="0.001" />
  </inertial>
</link>

<joint name="camera_joint" type="fixed">
  <parent link="base_link"/>
  <child link="camera_link"/>
  <origin xyz="${base_length/2} 0 ${wheel_radius + base_height/8}" rpy="0 0.78 0"/>
</joint>

<link name="camera_depth_frame"/>

<joint name="camera_depth_joint" type="fixed">
  <origin xyz="0 0 0" rpy="${-pi/2} 0 ${-pi/2}"/>
  <parent link="camera_link"/>
  <child link="camera_depth_frame"/>
</joint>

<gazebo reference="camera_link">
  <sensor name="depth_camera" type="depth">
    <visualize>true</visualize>
    <update_rate>30.0</update_rate>
    <camera name="camera">
      <horizontal_fov>1.047198</horizontal_fov>
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.05</near>
        <far>3</far>
      </clip>
    </camera>
    <plugin name="depth_camera_controller" filename="libgazebo_ros_camera.so">
      <baseline>0.2</baseline>
      <alwaysOn>true</alwaysOn>
      <updateRate>0.0</updateRate>
      <frame_name>camera_depth_frame</frame_name>
      <pointCloudCutoff>0.5</pointCloudCutoff>
      <pointCloudCutoffMax>3.0</pointCloudCutoffMax>
      <distortionK1>0</distortionK1>
      <distortionK2>0</distortionK2>
      <distortionK3>0</distortionK3>
      <distortionT1>0</distortionT1>
      <distortionT2>0</distortionT2>
      <CxPrime>0</CxPrime>
      <Cx>0</Cx>
      <Cy>0</Cy>
      <focalLength>0</focalLength>
      <hackBaseline>0</hackBaseline>
    </plugin>
  </sensor>
</gazebo>

<link name="ultrasonic_link">
  <visual>
    <origin xyz="0 0 ${0.055/2}" rpy="0 0 0"/>
    <geometry>
      <box size="0.015 0.130 0.022"/>
    </geometry>
  </visual>

  <collision>
    <origin xyz="0 0 ${0.055/2}" rpy="0 0 0"/>
    <geometry>
      <box size="0.015 0.130 0.022"/>
    </geometry>
  </collision>

  <inertial>
    <origin xyz="0 0 ${0.055/2}" rpy="0 0 0"/>
    <mass value="0.035"/>
    <inertia ixx="0.001"  ixy="0"  ixz="0" iyy="0.001" iyz="0" izz="0.001" />
  </inertial>
</link>

<joint name="ultrasonic_joint" type="fixed">
  <parent link="base_link"/>
  <child link="ultrasonic_link"/>
  <origin xyz="${base_length/2} 0 ${wheel_radius + base_height/2}" rpy="0 0 0"/>
</joint>

<gazebo reference="ultrasonic_link">
  <sensor name="ultrasonic_sensor" type="ray">
    <always_on>true</always_on>
    <visualize>false</visualize>
    <update_rate>5</update_rate>
    <ray>
      <scan>
        <horizontal>
          <samples>5</samples>
          <resolution>1.000000</resolution>
          <min_angle>-0.12</min_angle>
          <max_angle>0.12</max_angle>
        </horizontal>
        <vertical>
          <samples>5</samples>
          <resolution>1.000000</resolution>
          <min_angle>-0.01</min_angle>
          <max_angle>0.01</max_angle>
        </vertical>
      </scan>
      <range>
        <min>0.02</min>
        <max>4</max>
        <resolution>0.01</resolution>
      </range>
      <noise>
        <type>gaussian</type>
        <mean>0.0</mean>
        <stddev>0.01</stddev>
      </noise>
    </ray>
    <plugin name="ultrasonic_sensor_1" filename="libgazebo_ros_ray_sensor.so">
      <ros>
        <remapping>~/out:=ultrasonic_sensor</remapping>
      </ros>
      <output_type>sensor_msgs/Range</output_type>
      <radiation_type>ultrasound</radiation_type>
      <frame_name>ultrasonic_link</frame_name>
    </plugin>
  </sensor>  
</gazebo>

<link name="GNSS_link">
  <visual>
    <geometry>
      <box size="0.1 0.1 0.1"/>
    </geometry>
  </visual>

  <collision>
    <geometry>
      <box size="0.1 0.1 0.1"/>
    </geometry>
  </collision>

  <xacro:box_inertia m="0.1" w="0.1" d="0.1" h="0.1"/>
</link>

<joint name="GNSS_joint" type="fixed">
  <parent link="base_link"/>
  <child link="GNSS_link"/>
  <origin xyz="0 0.05 ${wheel_radius + base_height + 0.1/2}"/>
</joint>

<gazebo reference="GNSS_link">
<sensor name="tb3_gps" type="gps">
  <always_on>true</always_on>
  <update_rate>1</update_rate>
  <pose>0 0 0 0 0 0</pose>
  <gps>
    <position_sensing>
      <horizontal>
        <noise type="gaussian">
          <mean>0.0</mean>
          <stddev>0.01</stddev>
        </noise>
      </horizontal>
      <vertical>
        <noise type="gaussian">
          <mean>0.0</mean>
          <stddev>0.01</stddev>
        </noise>
      </vertical>
    </position_sensing>
  </gps>
  <plugin name="my_gps_plugin" filename="libgazebo_ros_gps_sensor.so">
    <ros>
      <remapping>~/out:=/gps/fix</remapping>
    </ros>
  </plugin>
</sensor>
</gazebo>

 <link name="lidar_link">
    <collision>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
         <cylinder radius="0.0508" length="0.055"/>
      </geometry>
    </collision>

    <visual>
      <origin xyz="0 0 0" rpy="0 0 0"/>
      <geometry>
         <cylinder radius="0.0508" length="0.055"/>
      </geometry>
    </visual>

    <xacro:cylinder_inertia m="0.01" r="0.0508" h="0.055"/>
  </link>

  <joint name="lidar_joint" type="fixed">
    <parent link="base_link"/>
    <child link="lidar_link"/>
      <origin xyz="0.2 0 ${wheel_radius + base_height + 0.1/2}"/>
  </joint>

  <gazebo reference="lidar_link">
    <sensor name="lidar" type="ray">
      <always_on>true</always_on>
      <visualize>true</visualize>
      <update_rate>5</update_rate>
      <ray>
        <scan>
          <horizontal>
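            <!-- min/max_angle of ±0.61 rad ≈ ±35°, i.e. roughly a 70° total horizontal FOV -->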
            <samples>70</samples>
            <resolution>1.000000</resolution>
            <min_angle>-0.61</min_angle>
            <max_angle>0.61</max_angle>
          </horizontal>
        </scan>
        <range>
          <min>0.120000</min>
          <max>3.5</max>
          <resolution>0.015000</resolution>
        </range>
        <noise>
          <type>gaussian</type>
          <mean>0.0</mean>
          <stddev>0.01</stddev>
        </noise>
      </ray>
      <plugin name="scan" filename="libgazebo_ros_ray_sensor.so">
        <ros>
          <remapping>~/out:=scan</remapping>
        </ros>
        <output_type>sensor_msgs/LaserScan</output_type>
        <frame_name>lidar_link</frame_name>
      </plugin>
    </sensor>
  </gazebo>

</robot>

This is my launch file slam_algorithm.launch.py:

#!/usr/bin/env python3

# Copyright 2019 ROBOTIS CO., LTD.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Authors: Joep Tool

import os
from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch import actions
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration, Command
import launch_ros

def generate_launch_description():

    pkg_share = launch_ros.substitutions.FindPackageShare(package='slam_train').find('slam_train')
    pkg_gazebo_ros = get_package_share_directory('gazebo_ros')
    path_lafette = os.path.join(pkg_share, 'urdf', 'Lafette_small_urdf.urdf')
    bringup_dir = get_package_share_directory('nav2_bringup')
    launch_dir = os.path.join(bringup_dir, 'launch')

    world = os.path.join(get_package_share_directory('turtlebot3_gazebo'), 'worlds', 'turtlebot3_house.world')
    #world = os.path.join(os.environ['HOME'], '.gazebo', 'models', 'factory', 'factory.model')
    #world = os.path.join(pkg_share, 'worlds', 'baylands.world')

    declare_use_sim_time = actions.DeclareLaunchArgument(name='use_sim_time', default_value='true', description='Use simulation (Gazebo) clock if true')

    declare_model_urdf = actions.DeclareLaunchArgument(name='model', default_value=path_lafette, description='Absolute path to robot urdf file')

    robot_state_publisher_node = launch_ros.actions.Node(
        package='robot_state_publisher',
        executable='robot_state_publisher',
        parameters=[{'robot_description': Command(['xacro ', LaunchConfiguration('model')])}]
    )

    joint_state_publisher_node = launch_ros.actions.Node(
        package='joint_state_publisher',
        executable='joint_state_publisher',
        name='joint_state_publisher',
        parameters=[{'use_sim_time' : LaunchConfiguration('use_sim_time')}]
    )

    gzserver_cmd = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(pkg_gazebo_ros, 'launch', 'gzserver.launch.py')
        ),
        launch_arguments={'world': world}.items()
    )

    gzclient_cmd = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(pkg_gazebo_ros, 'launch', 'gzclient.launch.py')
        )
    )

    spawn_entity = launch_ros.actions.Node(
        package='gazebo_ros',
        executable='spawn_entity.py',
        arguments=[
            '-entity', 'Lafette',
            '-topic', 'robot_description',
            '-x', '-1.8',  # X position
            '-y', '0.0',  # Y position
            '-z', '0.0',  # Z position (usually 0 for ground-based robots)
            '-Y', '0.0'  # Yaw orientation in radians (e.g. 1.57 rad equals 90 degrees)
            ],
        output='screen'
        )

    default_rviz_config_path = os.path.join(pkg_share, 'rviz/urdf_config.rviz')

    # Declare the launch argument
    declare_rviz_config = actions.DeclareLaunchArgument(name='rvizconfig', default_value=default_rviz_config_path, description='path to rviz config file')

    rviz_node = launch_ros.actions.Node(
        package='rviz2',
        executable='rviz2',
        name='rviz2',
        output='screen',
        arguments=['-d', LaunchConfiguration('rvizconfig')],
    )

    slam_toolbox_dir = os.path.join(get_package_share_directory('slam_toolbox'), 'launch')

    slam_toolbox_launch = IncludeLaunchDescription(PythonLaunchDescriptionSource(os.path.join(slam_toolbox_dir, 'online_async_launch.py')),
        launch_arguments={
            'slam_params_file': os.path.join(pkg_share, 'config_slam', 'mapper_params_online_async.yaml'),
            'use_sim_time': LaunchConfiguration('use_sim_time')
            }.items()
    )

    ld = LaunchDescription()

    # Add the commands to the launch description

    # variables
    ld.add_action(declare_use_sim_time)
    ld.add_action(declare_model_urdf)

    #interfaces
    ld.add_action(gzserver_cmd)
    ld.add_action(gzclient_cmd)

    ##packages
    ld.add_action(robot_state_publisher_node)
    ld.add_action(joint_state_publisher_node)
    ld.add_action(spawn_entity)
    ld.add_action(declare_rviz_config)
    ld.add_action(rviz_node)
    ld.add_action(slam_toolbox_launch)

    return ld

This is the configuration file mapper_params_online_async.yaml:

slam_toolbox:
  ros__parameters:

    # Plugin params
    solver_plugin: solver_plugins::CeresSolver
    ceres_linear_solver: SPARSE_NORMAL_CHOLESKY
    ceres_preconditioner: SCHUR_JACOBI
    ceres_trust_strategy: LEVENBERG_MARQUARDT
    ceres_dogleg_type: TRADITIONAL_DOGLEG
    ceres_loss_function: None

    # ROS Parameters
    odom_frame: odom
    map_frame: map
    base_frame: base_footprint
    scan_topic: /scan
    odom_topic: /odom
    #use_map_saver: true
    mode: mapping # localization

    # if you'd like to immediately start continuing a map at a given pose
    # or at the dock, but they are mutually exclusive, if pose is given
    # will use pose
    #map_file_name: /home/arad2456/ros2_ws/map_serial
    #map_start_pose: [-1.8, 0.0, 0.0]
    map_start_at_dock: true

    debug_logging: false
    throttle_scans: 1
    transform_publish_period: 0.02 #if 0 never publishes odometry
    map_update_interval: 5.0
    resolution: 0.05
    max_laser_range: 20.0 #for rastering images
    minimum_time_interval: 0.5
    transform_timeout: 0.2
    tf_buffer_duration: 30.
    stack_size_to_use: 40000000 #// program needs a larger stack size to serialize large maps
    enable_interactive_mode: true

    # General Parameters
    use_scan_matching: true
    use_scan_barycenter: true
    minimum_travel_distance: 0.1
    minimum_travel_heading: 0.15
    scan_buffer_size: 10
    scan_buffer_maximum_scan_distance: 10.0
    link_match_minimum_response_fine: 0.1  
    link_scan_maximum_distance: 1.5
    loop_search_maximum_distance: 3.0
    do_loop_closing: true 
    loop_match_minimum_chain_size: 10           
    loop_match_maximum_variance_coarse: 3.0  
    loop_match_minimum_response_coarse: 0.35    
    loop_match_minimum_response_fine: 0.45

    # Correlation Parameters - Correlation Parameters
    correlation_search_space_dimension: 0.5
    correlation_search_space_resolution: 0.01
    correlation_search_space_smear_deviation: 0.1 

    # Correlation Parameters - Loop Closure Parameters
    loop_search_space_dimension: 8.0
    loop_search_space_resolution: 0.05
    loop_search_space_smear_deviation: 0.03

    # Scan Matcher Parameters
    distance_variance_penalty: 0.5      
    angle_variance_penalty: 1.0    

    fine_search_angle_offset: 0.00349     
    coarse_search_angle_offset: 0.349   
    coarse_angle_resolution: 0.0349        
    minimum_angle_penalty: 0.9
    minimum_distance_penalty: 0.5
    use_response_expansion: true

SteveMacenski commented 3 months ago

Frankly, the type of SLAM this does is not well suited for low-visibility lidars. Usually, folks with ~70° field of view sensors actually have a depth camera. Much of the documentation from ROS 1 about depth cameras talks about turning the depth image into a virtual laser scan and then throwing it into lidar SLAM. You actually have good depth / vision SLAM solutions out there now, so you shouldn't need to use lidar SLAM (much of that advice was written back when ORB-SLAM 1-2 were brand new). So, I wouldn't recommend this direction at all - use the type of SLAM that your sensor modality actually supports natively. The lidar graph SLAM here is suited for ~180° lidar views or higher. ~120° I think would probably be fine too, but that's starting to push it and I haven't experimented. It is meant for the more typical lidar applications that use 180-270° (and obviously 360° is optimal). The metrics in the original description of this ticket agree with that statement as well.
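
To illustrate the depth-to-virtual-scan idea mentioned above (this is only a conceptual sketch of what packages like depthimage_to_laserscan do, with illustrative intrinsics, not their actual code):

import numpy as np

def depth_row_to_scan(depth_row_m, fx, cx):
    """Convert one horizontal row of a depth image (depth in meters along the
    optical axis) into laser-scan-style bearings and ranges. fx and cx are
    the camera's horizontal focal length and principal point in pixels; a
    real node would read them from the CameraInfo topic."""
    u = np.arange(depth_row_m.shape[0])
    bearings = np.arctan2(u - cx, fx)        # horizontal angle of each pixel column
    ranges = depth_row_m / np.cos(bearings)  # axial depth -> range along the ray
    return bearings, ranges

The resulting scan still only spans the camera's ~60-90° horizontal field of view, which is exactly the regime where, as noted above, a native depth/visual SLAM pipeline is usually the better fit.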

arad2456 commented 3 months ago

Frankly, the type of SLAM this does is not well suited for low-visibility lidars. Usually, folks with ~70° field of view sensors actually have a depth camera. Much of the documentation from ROS 1 about depth cameras talks about turning the depth image into a virtual laser scan and then throwing it into lidar SLAM. You actually have good depth / vision SLAM solutions out there now, so you shouldn't need to use lidar SLAM (much of that advice was written back when ORB-SLAM 1-2 were brand new). So, I wouldn't recommend this direction at all - use the type of SLAM that your sensor modality actually supports natively. The lidar graph SLAM here is suited for ~180° lidar views or higher. ~120° I think would probably be fine too, but that's starting to push it and I haven't experimented. It is meant for the more typical lidar applications that use 180-270° (and obviously 360° is optimal). The metrics in the original description of this ticket agree with that statement as well.

Ok, I'll check up on that, thank you!