Before you begin, complete the following setup:

1. Create a ROS 2 workspace:

   ```bash
   mkdir -p ~/utexas_ws/src
   ```

2. Clone the repository into the `src` folder of your workspace:

   ```bash
   cd ~/utexas_ws/src
   git clone git@github.com:UTNuclearRobotics/utexas_sterling.git
   ```

3. Use `rosdep` to install dependencies:

   ```bash
   cd ~/utexas_ws
   rosdep update
   rosdep install --from-paths src --ignore-src -r -y
   ```

4. Use `colcon` to build the workspace, then source it:

   ```bash
   colcon build
   source install/setup.bash
   ```
The workflow consists of three main phases:
1. **Record rosbag**

   Record sensor data from a robot into a rosbag. Update the topic names in the configuration file (`rosbag.yaml`), then start recording with the specified parameters:

   ```bash
   ros2 launch visual_representation_learning record_rosbag.launch.py
   ```

   - `bag_name`: Name of the ROS bag to save in the `bags` directory.
   - Output: recordings are saved to the `bags` directory in the top-level workspace directory.
   - Related files: `rosbag.yaml`, `record_rosbag.launch.py`
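Recording depends on the topic names in the configuration file matching your robot. A minimal sketch of what `rosbag.yaml` might contain — the keys and topic names below are illustrative assumptions, not the file's actual schema:

```yaml
# Illustrative sketch only — consult rosbag.yaml in the repository
# for the actual keys and topic names.
topics:
  camera: /camera/image_raw   # RGB camera stream (assumed topic name)
  imu: /imu/data              # inertial measurements (assumed topic name)
  odom: /odom                 # wheel odometry (assumed topic name)
```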
2. **Process rosbag**

   Convert the recorded rosbag data into a Python dictionary format suitable for PyTorch training. Launch `process_rosbag` with the necessary parameters:

   ```bash
   ros2 launch visual_representation_learning process_rosbag.launch.py
   ```

   - `bag_name`: Name of the ROS bag to process in the `bags` directory.
   - `visual`: Set to `true` to enable graphical feedback of the data.
   - Output: processed datasets are saved to the `datasets` directory in the top-level workspace directory.
   - Related files: `rosbag.yaml`, `process_rosbag.launch.py`, `process_rosbag.py`
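Because the processed bag is a pickled Python dictionary, it can be inspected with the standard `pickle` module before training. A minimal sketch using hypothetical keys (`images`, `imu`) and a synthetic dictionary — the package's real schema may differ:

```python
import os
import pickle
import tempfile

# Hypothetical example: the keys below are illustrative, not the
# package's actual schema. process_rosbag saves one dictionary per
# bag, which the training step later reloads with pickle.
sample = {
    "images": [[0.0] * 4 for _ in range(3)],  # e.g. flattened image patches
    "imu": [[0.1, 0.2, 0.3] for _ in range(3)],  # e.g. inertial readings
}

# Write the dictionary the same way a processed bag would be saved.
path = os.path.join(tempfile.mkdtemp(), "example_bag.pkl")
with open(path, "wb") as f:
    pickle.dump(sample, f)

# The training pipeline reads the dataset back the same way.
with open(path, "rb") as f:
    data = pickle.load(f)

print(sorted(data.keys()))  # → ['images', 'imu']
print(len(data["images"]))  # → 3
```

Inspecting a dataset like this is a quick sanity check that a bag was processed before handing it to the trainer.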
3. **Train terrain representations**

   Convert the pickle files into tensors and train the terrain representations using a PyTorch script:

   ```bash
   ros2 run visual_representation_learning train_autoencoder_representations
   ```

   - `--config`: Name of the `yaml` file that defines the training and validation pickle file datasets.
   - Output: the checkpoint (`.ckpt`) file is saved in `torch/terrain_representations/checkpoints`; the model (`.pt`) file is saved in `torch/terrain_representations/models`.
   - Related files: `istat.yaml`, `dataset.yaml`, `data_loader.py`, `train_auto_encoder.py`
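The `--config` file enumerates the pickle datasets produced in the previous phase. A hedged sketch of what such a `dataset.yaml` might look like — the keys and file names are assumptions for illustration, not the repository's actual schema:

```yaml
# Illustrative sketch only — see dataset.yaml in the repository
# for the actual schema.
train:
  - datasets/example_run_1.pkl
  - datasets/example_run_2.pkl
val:
  - datasets/example_run_3.pkl
```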