News:
Accepted by ICRA 2023 (EVPP).
Accepted by RA-L 2023 (NeurAR).
This code is the official implementation of the paper Efficient View Path Planning for Autonomous Implicit Reconstruction (EVPP), which performs efficient autonomous implicit 3D reconstruction.
This project is built on ashawkey/torch-ngp's NGP and TensoRF implementations.
Please refer to Install Unity and Visual Studio on Windows. Our environment uses Unity 2019.4.40 and Visual Studio 2019; please make sure your installed versions are not lower than these.
git clone https://github.com/small-zeng/EVPP.git
cd EVPP
conda env create -f environment.yml
conda activate EVPP
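Before starting the servers, you can optionally verify that the environment provides PyTorch with CUDA support, which the torch-ngp backbone requires; a minimal check:

# Optional sanity check: the torch-ngp backbone needs PyTorch with a CUDA-capable GPU.
import torch
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())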
The main entry points are nerfServer and plannerServer_Object / plannerServer_Room. nerfServer implements the online implicit reconstruction, while plannerServer_Object and plannerServer_Room implement view path planning for the single-object scene and the room scene, respectively.
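For orientation, the layout below summarizes only the paths referenced in this README (all other files are omitted):

EVPP/
  nerfServer/                 # online implicit reconstruction (Django service, port 6000)
    manage.py
    renderall.py              # circular rendering of a trained scene
    logs/                     # trained models / downloaded cabin scene logs
  plannerServer_Object/       # view path planning, single-object scene (port 6100)
    manage.py
    core/results/             # planned view paths
  plannerServer_Room/         # view path planning, room scene (port 6100)
    manage.py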
Follow the steps below to start autonomous implicit reconstruction:
After installing the Unity Editor and Visual Studio, you can start the simulation by clicking the RUN button in the Unity Editor.
cd nerfServer
python manage.py runserver 0.0.0.0:6000
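This starts a standard Django development server on port 6000. To confirm it is reachable from another machine, a minimal sketch (replace <ubuntu-ip> with the reconstruction machine's address; any HTTP response, even an error page, means the server is up):

# Reachability check for the nerfServer Django dev server (port 6000).
import urllib.error
import urllib.request

url = "http://<ubuntu-ip>:6000/"  # replace <ubuntu-ip> with the Ubuntu machine's address
try:
    urllib.request.urlopen(url, timeout=5)
    print("nerfServer is reachable")
except urllib.error.HTTPError:
    print("nerfServer is reachable (HTTP error page returned)")
except OSError as exc:
    print("nerfServer is not reachable:", exc)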
Make sure that the Windows and Ubuntu machines are on the same local network. Set the IP address used for sending views in the planner to your Windows machine's IP by modifying the IP in plannerServer_Object and in plannerServer_Room.
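The exact file and variable that hold this address depend on the planner code; purely as a hypothetical illustration (the names below are not taken from the repository):

# Hypothetical illustration only: the real setting name and location inside
# plannerServer_Object / plannerServer_Room may differ.
WINDOWS_UNITY_IP = "192.168.1.50"  # IP of the Windows machine running the Unity simulator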
cd plannerServer_Object / plannerServer_Room
python manage.py runserver 0.0.0.0:6100
http://10.15.198.53:6100/isfinish/?finish=yes
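This URL appears to act as the planner's finish signal, sent as a plain HTTP GET; adjust the IP and port to your planner machine. It can also be sent programmatically:

# Send the finish signal to the planner (adjust IP/port to your planner machine).
import urllib.request
urllib.request.urlopen("http://10.15.198.53:6100/isfinish/?finish=yes")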
Baidu Netdisk: cabin scene
Download the data above, unzip it, and place it in the directory:
./nerfServer/logs
Download test data for rendering a circular view of the scene:
Baidu Netdisk: cabin_traj
mkdir data
unzip cabin_traj
After 30 minutes of training, perform a complete rendering pass around the cabin scene:
Baidu Netdisk: cabin_traj_render
cd nerfServer
python renderall.py
For the cabin scene (5 m × 5 m), the PSNR achieved after 30 minutes of reconstruction is 26.47.
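PSNR here is the standard peak signal-to-noise ratio between rendered and ground-truth views; for reference, a minimal computation assuming pixel values in [0, 1]:

import numpy as np

def psnr(rendered: np.ndarray, ground_truth: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for images with pixel values in [0, 1]."""
    mse = np.mean((rendered - ground_truth) ** 2)
    return float(-10.0 * np.log10(mse))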
Planned results for cabin scene are in the path:
./plannerServer_Object/core/results
For the cabin scene (5 m × 5 m), the planning time is 388 seconds.
@inproceedings{zeng2023efficient,
title={Efficient view path planning for autonomous implicit reconstruction},
author={Zeng, Jing and Li, Yanxu and Ran, Yunlong and Li, Shuo and Gao, Fei and Li, Lincheng and He, Shibo and Chen, Jiming and Ye, Qi},
booktitle={2023 IEEE International Conference on Robotics and Automation (ICRA)},
pages={4063--4069},
year={2023},
organization={IEEE}
}
@article{ran2023neurar,
title={NeurAR: Neural Uncertainty for Autonomous 3D Reconstruction With Implicit Neural Representations},
author={Ran, Yunlong and Zeng, Jing and He, Shibo and Chen, Jiming and Li, Lincheng and Chen, Yingfeng and Lee, Gimhee and Ye, Qi},
journal={IEEE Robotics and Automation Letters},
volume={8},
number={2},
pages={1125--1132},
year={2023},
publisher={IEEE}
}
Use this code under the MIT License. No warranties are provided. Keep the laws of your locality in mind!
Please refer to torch-ngp#acknowledgement for the acknowledgment of the original repo.