Official code release for the ISMAR 2022 paper "Temporal View Synthesis of Dynamic Scenes through 3D Object Motion Estimation with Multi-Plane Images"
Place the database at `/Data/Databases/VeedDynamic/all_short`. Use the code in `src/utils/mpi_sintel` to extract the required data and to organize it. The following steps describe training and testing on the IISc VEED-Dynamic dataset; the steps for the MPI-Sintel dataset are similar, and the code for each step is also provided.

Environment details are available in `EnvironmentData/DeCOMPnet.yml`. The environment can be created using conda:
```shell
cd EnvironmentData
conda env create -f DeCOMPnet.yml
cd ..
```
DeCOMPnet uses a two-stage training procedure: one stage trains the MPI flow estimation model and the other trains the infilling model. Follow the steps below to train both models.

Generate the data needed for training the flow estimation model: warped frames that nullify camera motion, and masks indicating regions containing non-zero flow.
```shell
cd data_generators
python ObjectMotionIsolation01_VeedDynamic.py
python LOF_POI_01_VeedDynamic.py
cd ..
```
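As a rough illustration of what the second script produces, a non-zero-flow mask simply flags pixels whose residual flow magnitude (after camera motion has been nullified) exceeds a small threshold. The function name, array layout, and threshold below are illustrative assumptions, not the repository's API:

```python
import numpy as np

def non_zero_flow_mask(flow, threshold=1e-3):
    # `flow` is assumed to be an (H, W, 2) array of per-pixel (dx, dy)
    # displacements remaining after camera motion has been nullified.
    magnitude = np.linalg.norm(flow, axis=-1)  # (H, W) flow magnitude
    return magnitude > threshold

flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[1, 2] = (3.0, -1.0)  # a single moving pixel
mask = non_zero_flow_mask(flow)
print(mask.sum())  # → 1
```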
Download the pre-trained ARFlow weights and place them in `PretrainedModels/ARFlow`.
Convert the ARFlow weights to the convention used in this repository:
```shell
cd flow_estimation/utils
python ARFlowWeightsConverter01.py
cd ../..
```
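Converters like this one typically amount to renaming checkpoint keys from the source network's naming convention to the target repository's. A minimal sketch of the idea with plain dictionaries (the key names and mapping are made up for illustration; the actual mapping lives in `ARFlowWeightsConverter01.py`):

```python
def convert_state_dict(src_state, key_map):
    # Rename each key via key_map; keys without a mapping are kept as-is.
    return {key_map.get(key, key): value for key, value in src_state.items()}

# Hypothetical ARFlow-style checkpoint and key mapping, for illustration only.
arflow_state = {"encoder.conv1.weight": [0.1], "decoder.flow.bias": [0.2]}
key_map = {"encoder.conv1.weight": "feature_net.conv1.weight"}
converted = convert_state_dict(arflow_state, key_map)
print(sorted(converted))  # → ['decoder.flow.bias', 'feature_net.conv1.weight']
```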
Train the flow estimation model:
```shell
cd flow_estimation
python VeedDynamicTrainer01.py
cd ..
```
Estimate the local flow between past frames and generate motion-warped frames:
```shell
cd data_generators
python LocalOpticalFlow01_VeedDynamic.py
python MotionWarping01_VeedDynamic.py
cd ..
```
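Conceptually, motion warping moves a past frame along the estimated flow. A minimal nearest-neighbour backward-warping sketch in numpy (illustrative only; the repository has its own warping implementation):

```python
import numpy as np

def backward_warp_nearest(frame, flow):
    # For each target pixel (x, y), sample the source pixel at
    # (x + dx, y + dy), rounded to the nearest neighbour; pixels that
    # map outside the frame are left at zero (disoccluded regions).
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.round(xs + flow[..., 0]).astype(int)
    src_y = np.round(ys + flow[..., 1]).astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    warped = np.zeros_like(frame)
    warped[valid] = frame[src_y[valid], src_x[valid]]
    return warped

frame = np.arange(9.0).reshape(3, 3)
warped = backward_warp_nearest(frame, np.zeros((3, 3, 2)))  # zero flow: identity
```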
Train the disocclusion infilling model:
```shell
cd video_inpainting
python VeedDynamicTrainer01.py
cd ..
```
To run ST-RRED, download the code from here and place it in `src/qa/05_CroppedSTRRED/src/matlab`. If you want to skip computing ST-RRED, comment out the corresponding line in `src/qa/00_Common/src/AllMetrics01_VeedDynamic.py`.
Test the model on the IISc VEED-Dynamic dataset and run QA:
```shell
python VeedDynamicTester01.py
```
Pretrained weights of the flow estimation and disocclusion infilling models are available here. Download the weights and provide the corresponding paths in `VeedDynamicTester01.py`.
If you use the DeCOMPnet model in your publication, please specify the version as well. The current version is 1.0.
MIT License
Copyright (c) 2022 Nagabhushan Somraj, Rajiv Soundararajan
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
If you use this code for your research, please cite our paper:
```bibtex
@inproceedings{somraj2022DeCOMPnet,
    title = {Temporal View Synthesis of Dynamic Scenes through 3D Object Motion Estimation with Multi-Plane Images},
    author = {Somraj, Nagabhushan and Sancheti, Pranali and Soundararajan, Rajiv},
    booktitle = {Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
    pages = {817--826},
    year = {2022},
    doi = {10.1109/ISMAR55827.2022.00100}
}
```
The code and initialization weights for optical flow estimation are borrowed from ARFlow. However, significant changes have been made on top of that code, so it may look different. The code for camera motion warping is borrowed from here.
For any queries or bugs related to either the DeCOMPnet code or the IISc VEED-Dynamic database, please raise an issue.