This repository provides the code for the paper "MultiTest: Physical-Aware Object Insertion for Testing Multi-sensor Fusion Perception Systems".
MultiTest employs a physical-aware approach to render modality-consistent object instances using virtual sensors for testing Multi-sensor Fusion (MSF) perception systems.
The figure above presents the high-level workflow of MultiTest. Given background multi-modal data recorded from the real world and an object instance selected from the object database, MultiTest first executes the pose estimation module to calculate valid locations and orientations for the object to be inserted. The multi-sensor simulation module then renders the object instance as both an image and a point cloud at the calculated poses in a physical-aware virtual simulator. It further merges the synthesized image and point cloud of the inserted object with the background data, carefully handling occlusion. These two modules form MultiTest's multi-modal test data generation pipeline. Finally, realistic multi-modal test data can be efficiently generated through fitness-guided metamorphic testing. We detail each module of MultiTest in the following.
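Conceptually, the generation loop can be summarized by the sketch below. All helper names (estimate_valid_poses, render_object, merge_with_background, fitness) are illustrative placeholders for the modules described above, not the repository's actual API:

def generate_test_case(background, obj, system_under_test):
    # 1. Pose estimation: compute valid locations/orientations for the object.
    poses = estimate_valid_poses(background, obj)
    best_case, best_score = None, float("-inf")
    for pose in poses:
        # 2. Multi-sensor simulation: render the object as image + point cloud
        #    in the virtual simulator, then merge with the background data
        #    while handling occlusion.
        image, point_cloud = render_object(obj, pose)
        case = merge_with_background(background, image, point_cloud, pose)
        # 3. Fitness-guided metamorphic testing: keep the candidate that
        #    maximizes the fitness metric for the system under test.
        score = fitness(case, system_under_test)
        if score > best_score:
            best_case, best_score = case, score
    return best_case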
Main Folder Structure:
The main folder structure is as follows:
MultiTest
├── _assets
│   └── shapenet              object database
├── _datasets
│   ├── kitti                 KITTI dataset
│   └── kitti_construct       generated test cases
├── _queue_guided             seed queue
├── system                    systems under test
├── blender                   Blender scripts
├── config                    sensor and algorithm configuration
├── build                     package building for MultiTest
├── third                     third-party repositories
├── eval_tools                tools for evaluating AP
├── build_script.py           package building script
├── evaluate_script.py        system evaluation script
├── fitness_score.py          fitness metric calculation
├── init.py                   environment setup script
├── logger.py                 logging
├── visual.py                 data visualisation script
├── demo.py                   quick-start demo
└── main.py                   MultiTest entry point
We implement all the MSF systems with PyTorch 1.8.0 and Python 3.7.11. All experiments are conducted on a server with an Intel i7-10700K CPU (3.80 GHz), 48 GB RAM, and an NVIDIA GeForce RTX 3070 GPU (8 GB VRAM).
Run the following commands to install the dependencies:
pip install -r requirements.txt
python build_script.py
Set your project path: config.common_config.project_dir="YOUR/PROJECT/PATH"
Install Blender.
MultiTest leverages Blender, an open-source 3D computer graphics suite, to build its virtual camera sensor.
Set the Blender path: config.camera_config.blender_path="YOUR/BLENDER/PATH"
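Both paths can also be set programmatically; a minimal sketch, assuming the config package is importable as the snippets above suggest (the path values below are placeholders):

import config

# Placeholder paths; point these at your actual checkout and Blender binary.
config.common_config.project_dir = "/home/user/MultiTest"
config.camera_config.blender_path = "/usr/local/bin/blender"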
Install S2CRNet [optional].
MultiTest leverages S2CRNet to improve the realism of the synthesized test cases.
Clone the repository from the link into MultiTest/third/S2CRNet:
git clone git@github.com:stefanLeong/S2CRNet.git
Then enable image refinement in the config: config.camera_config.is_image_refine=True
Install CENet [optional].
MultiTest leverages CENet to segment the road from the point cloud and obtain accurate object positions.
Clone the repository into MultiTest/third/CENet:
git clone git@github.com:huixiancheng/CENet.git
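As a rough illustration of why road segmentation matters for pose estimation, consider the sketch below; segment_road stands in for the CENet-based segmentation used by MultiTest and is a placeholder, not CENet's actual interface:

import numpy as np

def valid_ground_positions(point_cloud: np.ndarray) -> np.ndarray:
    # Label each LiDAR point as road / non-road (placeholder for CENet).
    road_mask = segment_road(point_cloud)
    road_points = point_cloud[road_mask]
    # Only (x, y) ground locations on the road are valid insertion positions.
    return road_points[:, :2]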
After completing the installation and configuration steps above, you can run the demo.py file we provide to generate multi-modal data:
python init.py
python demo.py
The results can be found at MultiTest/_datasets/kitti_construct/demo. You can then run visual.py to visualize the synthetic data.
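If you want to inspect cross-modal alignment yourself, the minimal sketch below (independent of visual.py) projects LiDAR points into the camera image following the standard KITTI convention; it assumes you have already loaded the calibration matrices P2 (3x4), R0_rect, and Tr_velo_to_cam (both padded to 4x4):

import numpy as np

def project_lidar_to_image(points, P2, R0_rect, Tr_velo_to_cam):
    # Homogeneous LiDAR coordinates: Nx3 -> Nx4.
    pts = np.hstack([points[:, :3], np.ones((points.shape[0], 1))])
    cam = R0_rect @ Tr_velo_to_cam @ pts.T   # 4xN, rectified camera frame
    in_front = cam[2] > 0                    # keep points ahead of the camera
    img = P2 @ cam                           # 3xN
    img = img[:2] / img[2]                   # perspective divide -> 2xN
    return img.T[in_front]                   # Mx2 pixel coordinates (u, v)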
To reproduce our experiments, the complete set of dependencies is required; start by installing everything from the "Quick Start" section above. Each system under test then needs its environment configured carefully. These systems are derived from the MSF benchmark, and the detailed configuration process is provided here. Each system should be placed in the directory MultiTest/system/SYSTEM_NAME.
Place the KITTI dataset in MultiTest/_datasets/kitti, then run:
python main.py --system_name "SYSTEM" --select_size "SIZE" --modality "multi"
The results can be found at MultiTest/_datasets/kitti_construct/SYSTEM.
Generate multi-modal data from 200 randomly selected seeds:
python main.py --system_name random --select_size 200
Validate the realism of the synthetic images.
Install pytorch-fid from here
pip install pytorch-fid
Usage
python -m pytorch_fid "MultiTest/_datasets/kitti/training/image_2" "MultiTest/_datasets/kitti_construct/SYSTEM/training/image_2"
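pytorch-fid can also be scripted; a minimal sketch (the arguments below match pytorch-fid 0.2.x and may differ in other releases):

from pytorch_fid.fid_score import calculate_fid_given_paths

fid_value = calculate_fid_given_paths(
    ["MultiTest/_datasets/kitti/training/image_2",
     "MultiTest/_datasets/kitti_construct/SYSTEM/training/image_2"],
    batch_size=50,
    device="cuda",
    dims=2048,
)
# Lower FID means the synthetic images are closer to the real distribution.
print(f"FID: {fid_value:.2f}")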
Validate the realism of the synthetic LiDAR point clouds.
Install frd from here
Usage
python lidargen.py --fid --exp kitti_pretrained --config kitti.yml
Validate the modality consistency of the synthetic multi-modal data.
The results can be found at Multimodality/RQ/RQ1/consistent.
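One intuitive way to think about modality consistency (a hypothetical sketch, not the repository's actual metric): project the inserted object's LiDAR points into the image and measure how many land inside the object's rendered 2D mask, reusing project_lidar_to_image from the sketch above:

import numpy as np

def consistency_ratio(obj_points, obj_mask, P2, R0_rect, Tr_velo_to_cam):
    pix = np.round(project_lidar_to_image(obj_points, P2, R0_rect,
                                          Tr_velo_to_cam)).astype(int)
    h, w = obj_mask.shape
    inside = (pix[:, 0] >= 0) & (pix[:, 0] < w) & \
             (pix[:, 1] >= 0) & (pix[:, 1] < h)
    pix = pix[inside]
    # Fraction of projected object points that hit the object's boolean mask.
    return obj_mask[pix[:, 1], pix[:, 0]].mean() if len(pix) else 0.0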
Generate multi-modal data with fitness guidance from 200 randomly selected seeds:
python main.py --system_name "SYSTEM" --select_size 200
Evaluate the AP and the number of errors in each error category on the generated test cases of a perception system:
python RQ2_tools.py --system_name "SYSTEM" --seed_num 200 --iter=1
Format the generated data of a perception system into KITTI format for retraining:
python copy_data.py --system_name "SYSTEM"
The retraining dataset can be found at _workplace_re/SYSTEM/kitti.
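For reference, each line of a KITTI label file encodes one object as: type, truncation, occlusion, alpha, 2D bbox (x1 y1 x2 y2), 3D dimensions (h w l), 3D location (x y z, camera frame), and rotation_y. A minimal sketch of writing one line (the example values are placeholders):

def kitti_label_line(obj_type, bbox, dims, loc, rot_y,
                     truncated=0.0, occluded=0, alpha=0.0):
    x1, y1, x2, y2 = bbox
    h, w, l = dims
    x, y, z = loc
    return (f"{obj_type} {truncated:.2f} {occluded} {alpha:.2f} "
            f"{x1:.2f} {y1:.2f} {x2:.2f} {y2:.2f} "
            f"{h:.2f} {w:.2f} {l:.2f} {x:.2f} {y:.2f} {z:.2f} {rot_y:.2f}")

# e.g. kitti_label_line("Car", (712.4, 143.0, 810.7, 307.9),
#                       (1.52, 1.63, 3.88), (1.84, 1.47, 8.41), 0.01)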
Run MultiTest on a custom dataset:
config.common_config.kitti_dataset_root="YOUR/DATASET/PATH"
Run MultiTest with custom 3D models:
config.common_config.assets_dir ="YOUR/ASSETS/PATH"
Run MultiTest with custom MSF systems:
Place your system in MultiTest/system/YOUR_SYSTEM_NAME and register it in the main.py file.
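The exact interface is defined by the MSF benchmark code, but conceptually each system needs a wrapper that main.py can call; a hypothetical sketch (class and method names are illustrative, and load_model is a placeholder):

class YourSystem:
    def __init__(self, config_path, checkpoint_path):
        # Placeholder: load your fusion model from its config and weights.
        self.model = load_model(config_path, checkpoint_path)

    def detect(self, image, point_cloud, calib):
        # Fuse camera + LiDAR inputs and return KITTI-style detections.
        return self.model.inference(image, point_cloud, calib)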