Update: October 2023
We are happy to announce that an extended version of our previous work has been published in IEEE Transactions on Aerospace and Electronic Systems.
We have updated the repository to include:
If you find our work or code useful, please cite:
@article{perez2023spacecraft,
title={Spacecraft Pose Estimation: Robust 2D and 3D-Structural Losses and Unsupervised Domain Adaptation by Inter-Model Consensus},
author={P{\'e}rez-Villar, Juan Ignacio Bravo and Garc{\'\i}a-Mart{\'\i}n, {\'A}lvaro and Besc{\'o}s, Jes{\'u}s and Escudero-Vi{\~n}olo, Marcos},
journal={IEEE Transactions on Aerospace and Electronic Systems},
year={2023},
publisher={IEEE}
}
This paper presents the second-ranking solution to the Kelvins Pose Estimation 2021 Challenge. The proposed solution ranked second in both the Sunlamp and Lightbox categories, with the best total average error over the two datasets.
The main contributions of the paper are:
The proposed architecture, together with the losses incorporating the 3D information, is depicted in the following figure:
This section contains the instructions to execute the code. The repository has been tested on a system with:
You can download the original SPEED+ dataset from Zenodo. The dataset has the following structure:
SPEED+ provides the ground-truth information as pairs of images and poses (relative position and orientation of the spacecraft w.r.t. the camera). Our method assumes the ground truth is provided as key-point maps, which we generate prior to training to speed up data loading. You can choose to download our precomputed key-point maps or create them manually.
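For intuition, the sketch below shows one common way such a key-point map can be built: each 3D key point is projected into the image using the pose and camera intrinsics, and a Gaussian is rendered around the projected pixel. The function name, the sigma value, and the exact Gaussian formulation are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def render_keypoint_maps(kpts_3d, R, t, K, height, width, sigma=7.0):
    """Sketch: project N 3D key points with pose (R, t) and intrinsics K,
    then render one Gaussian heat map per key point."""
    cam = R @ kpts_3d.T + t.reshape(3, 1)   # (3, N) points in the camera frame
    uv = K @ cam                            # (3, N) homogeneous pixel coordinates
    uv = uv[:2] / uv[2:3]                   # (2, N) pixel coordinates

    ys, xs = np.mgrid[0:height, 0:width]
    maps = np.zeros((kpts_3d.shape[0], height, width), dtype=np.float32)
    for i, (u, v) in enumerate(uv.T):
        maps[i] = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))
    return maps
```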
Download and decompress the kptsmap.zip file. Place the kptsmap folder under the synthetic folder of the speedplus dataset.
Note from the update: these heatmaps only work with the data loader "loaders/speedplus_segmentation_precomputed.py".
We provide two methods to generate the heatmaps:
python create_maps.py --cfg configs/experiment.json
Note: heatmaps based on .npz files must be used in conjunction with the data loader "loaders/speedplus_segmentation_precomputed.py".
python create_maps_image.py --cfg configs/experiment.json
Note: heatmaps based on .png files must be used in conjunction with the data loader "loaders/speedplus_segmentation_precomputed_image.py".
Make sure the "split_submission" field in the config file is set correctly before generating the maps.
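As a quick sanity check, a minimal snippet like the one below can verify the field before running either generation script; it assumes the same "configs/experiment.json" passed to the commands above and uses only the standard library.

```python
import json

# Confirm the config defines "split_submission" before generating the maps.
with open("configs/experiment.json") as f:
    cfg = json.load(f)

assert "split_submission" in cfg, "set 'split_submission' in the config first"
print("split_submission =", cfg["split_submission"])
```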
Place the keypoints file "kpts.mat" into the speed_root folder.
To clone the repository, type in your terminal:
git clone https://github.com/JotaBravo/spacecraft-uda.git
After installing conda, go to the spacecraft-uda folder and type in your terminal:
conda env create -f env.yml
conda activate spacecraft-uda
The training process is controlled with configuration files in JSON format. You can find example configuration files under the "configs/" folder.
To train a model, simply modify the configuration file with your required values. NOTE: the current implementation only supports square images.
Then, from the repository folder, type:
python main.py --cfg "configs/experiment.json"
Note from the update: if you wish to use a simpler ResNet model, execute the following command:
python main_resnet.py --cfg "configs/experiment_resnet34.json"
Make sure the "resnet_size" field is set in the config file.
The script will take the initial configuration file and the training weights associated with that configuration file to generate pseudo-labels and train a new model. Every iteration, a new configuration file is generated automatically so that previous results are not overwritten.
To run the pseudo-labelling loop, you first need to configure the "main_loop.py" script by specifying the path to the folder where the configuration files will be stored, the initial configuration file, and the number of iterations. In each iteration, a new configuration file is created in the BASE_CONFIG folder with an increased niter counter. For example, you first create the folder "configs_loop_sunlamp_10_epoch" and place the config file "loop_sunlamp_niter_0000.json" under it. In the next iteration of the pseudo-labelling, a new configuration file, "loop_sunlamp_niter_0001.json", will be created.
NITERS = 100 # number of pseudo-labelling iterations
BASE_CONFIG = "configs_loop_sunlamp_10_epoch" # folder holding the loop configs
BASE_FILE = "loop_sunlamp_niter_0000.json" # initial configuration file
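For illustration, the naming scheme described above can be reproduced with a small helper like this one; the helper name and the four-digit zero padding (matching "loop_sunlamp_niter_0000.json") are assumptions inferred from the example file names, not code from "main_loop.py".

```python
import os
import re

def next_config_name(base_file):
    """Sketch: bump the niter counter, e.g.
    'loop_sunlamp_niter_0000.json' -> 'loop_sunlamp_niter_0001.json'."""
    niter = int(re.search(r"niter_(\d{4})", base_file).group(1)) + 1
    return re.sub(r"niter_\d{4}", f"niter_{niter:04d}", base_file)

BASE_CONFIG = "configs_loop_sunlamp_10_epoch"
BASE_FILE = "loop_sunlamp_niter_0000.json"
print(os.path.join(BASE_CONFIG, next_config_name(BASE_FILE)))
# configs_loop_sunlamp_10_epoch/loop_sunlamp_niter_0001.json
```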
After you have created the configuration files, you need to manually place the weights used for the first iteration of the pseudo-labelling process. Under the "results" folder, create a folder with the BASE_CONFIG name and, inside it, a subfolder with the BASE_FILE name, e.g., "results/configs_loop_sunlamp_10_epoch/loop_sunlamp_niter_0000.json". Under that folder, place a subfolder called "ckpt" containing a weights file named "init.pth". The final path should look like "results/configs_loop_sunlamp_10_epoch/loop_sunlamp_niter_0000.json/ckpt/init.pth".
The init.pth file should contain the weights of the model trained on the synthetic domain. If you want to skip that training phase, you can use our available weights in Section 5 of this page.
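The expected layout can be scaffolded with a few lines of Python; this is only a convenience sketch, and "synthetic_weights.pth" is a placeholder for your own synthetic-domain weights (or the downloaded ones).

```python
import os
import shutil

# Scaffold results/configs_loop_sunlamp_10_epoch/loop_sunlamp_niter_0000.json/ckpt/
ckpt_dir = os.path.join("results", "configs_loop_sunlamp_10_epoch",
                        "loop_sunlamp_niter_0000.json", "ckpt")
os.makedirs(ckpt_dir, exist_ok=True)

# "synthetic_weights.pth" is a placeholder for the model trained on the
# synthetic domain (or the weights downloaded from Section 5).
shutil.copy("synthetic_weights.pth", os.path.join(ckpt_dir, "init.pth"))
```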
Go to the folder where the dataset is saved and duplicate the Sunlamp and Lightbox folders, renaming the copies "sunlamp_train" and "lightbox_train". The new pseudo-labels will be generated and stored in these folders.
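A sketch of this duplication step, assuming "/path/to/speedplus" stands in for your dataset root:

```python
import os
import shutil

speedplus_root = "/path/to/speedplus"  # placeholder for your dataset root
for src, dst in [("sunlamp", "sunlamp_train"), ("lightbox", "lightbox_train")]:
    shutil.copytree(os.path.join(speedplus_root, src),
                    os.path.join(speedplus_root, dst))
```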
python main_loop.py
You can monitor the training process via TensorBoard by typing in the command line:
tensorboard --logdir="path to your logs folder"
This work is supported by Comunidad Autónoma de Madrid (Spain) under Grant IND2020/TIC-17515.