| Single-View | Ignatius | Family | Palace | Church | Barn |
|---|---|---|---|---|---|
| Text2Nerf | | | | | |
| MotionCtrl | | | | | |
| NVS-Solver (DGS) | | | | | |
| NVS-Solver (Post) | | | | | |
| Multi-View | Truck | Playground | Caterpillar | Scan55 | Scan3 |
|---|---|---|---|---|---|
| 3D-Aware | | | | | |
| MotionCtrl | | | | | |
| NVS-Solver (DGS) | | | | | |
| NVS-Solver (Post) | | | | | |
| Mono-Vid | Train | Bus | Kangaroo | Train | Deer | Street |
|---|---|---|---|---|---|---|
| 4DGS | | | | | | |
| MotionCtrl | | | | | | |
| NVS-Solver (DGS) | | | | | | |
| NVS-Solver (Post) | | | | | | |
To get started, create and activate a virtual environment:

```bash
python -m venv .env
source .env/bin/activate
```
Install PyTorch:

```bash
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu121
```

We use PyTorch 2.2.1 built for CUDA 12.1 (`cu121`); please install the build that matches your CUDA version.
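Optionally, you can sanity-check the install with a minimal snippet (the expected output in the comments assumes the `cu121` build above):

```python
import torch

print(torch.__version__)          # expect 2.2.1+cu121
print(torch.cuda.is_available())  # True if the CUDA build matches your driver
```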
Install diffusers and the related packages:

```bash
pip install diffusers["torch"] transformers
pip install accelerate
pip install -e ".[torch]"
```
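A quick optional check that the packages imported correctly (all three libraries expose `__version__`):

```python
import diffusers, transformers, accelerate

print(diffusers.__version__, transformers.__version__, accelerate.__version__)
```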
Install the remaining required packages:

```bash
pip install -r requirements.txt
```
Run the demo:

```bash
bash demo.sh
```
We provide the prompt images used in our experiments for both single images and dynamic videos.
First, prepare Depth Anything V2. Our pipeline reads depth as `.npy` files, so edit its `run.py` to save the predicted depth maps as `.npy`:

```python
# Add where run.py has the predicted depth in hand
# (make sure numpy as np and os are imported in run.py).
depth_np = depth
np.save(os.path.join(args.outdir, os.path.splitext(os.path.basename(filename))[0] + '.npy'), depth_np)
```
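As an optional sanity check, you can load one of the saved files; the path here is only an example matching the directory layout used below:

```python
import numpy as np

depth = np.load('/path_to_img/depth/001.npy')  # example path; adjust to your --outdir
print(depth.shape, depth.dtype)                # an H x W float depth map
```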
Put your image in a directory, e.g. `/path_to_img`:

```bash
DIR_PATH=/path_to_img
IMG_PATH=/path_to_img/001.jpg
DEPTH_PATH="${DIR_PATH}/depth"
mkdir -p "$DEPTH_PATH"
```
Predict depth for your image:

```bash
cd /your_path_to_Depth-Anything-V2
python run.py --encoder vitl --img-path "$IMG_PATH" --outdir "$DEPTH_PATH"
```

Then run NVS-Solver:

```bash
cd /your_path_to_NVS_Solver
python svd_interpolate_single_img_traj.py --image_path "$IMG_PATH" --folder_path "$DIR_PATH" --iteration any_trajectory --radius 40 --end_position 30 2 -10 --lr 0.02 --weight_clamp 0.2
```
`--radius` is the distance between the original camera and the center of the image, i.e., the depth of the center pixel; you may need to change it to accommodate different images. The original camera position is set to `[radius, 0, 0]`.
`--end_position` is where the camera trajectory ends; choose it as you like. It takes three values for the camera position in X, Y, and Z. The trajectory is generated from the original camera position to the end position, and the camera always faces the center object of the given image.
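For intuition, here is a minimal sketch of how such a trajectory can be constructed: positions are linearly interpolated between the start and end, and each pose is oriented toward the scene center with a look-at rotation. This is an illustration under our own conventions, not the repository's implementation; `look_at` and all variable names are hypothetical.

```python
import numpy as np

def look_at(position, target, up=np.array([0.0, 1.0, 0.0])):
    # Rotation whose rows are the camera's right, up, and backward axes,
    # so the camera's forward axis points from `position` to `target`.
    forward = target - position
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return np.stack([right, true_up, -forward])  # 3x3 world-to-camera rotation

radius = 40.0
start = np.array([radius, 0.0, 0.0])   # original camera, per --radius
end = np.array([30.0, 2.0, -10.0])     # per --end_position 30 2 -10
target = np.zeros(3)                   # center object of the image

poses = []
for t in np.linspace(0.0, 1.0, num=25):
    position = (1.0 - t) * start + t * end   # straight-line path between endpoints
    poses.append((look_at(position, target), position))
```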
Thanks to the following wonderful works: Diffusers, Depth Anything, and Depth Anything V2.
If you find this project interesting, please cite:

```bibtex
@article{you2024nvs,
  title={NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer},
  author={You, Meng and Zhu, Zhiyu and Liu, Hui and Hou, Junhui},
  journal={arXiv preprint arXiv:2405.15364},
  year={2024}
}
```