Official PyTorch implementation of Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency (ECCV 2022). Check out our webpage for video results!
This repository contains the code and pretrained models. To set up the environment, run:
```bash
conda env create -f environment.yml
conda activate unicorn
```

To download a dataset, run:

```bash
bash scripts/download_data.sh
```

This command will download one of the following datasets:
- **ShapeNet NMR**: paper / NMR paper / dataset (33GB, thanks to the DVR team for hosting the data)
- **CUB-200-2011**: paper / webpage / dataset (1GB)
- **Pascal3D+ Cars**: paper / webpage (with FTP download link, 7.5GB) / UCMR annotations (bbox + train/test split, thanks to the UCMR team for hosting them) / UNICORN annotations (3D shape ground truth)
- **CompCars**: paper / webpage / dataset (12GB, thanks to the GIRAFFE team for hosting the data)
- **LSUN**: paper / webpage / horse dataset (69GB) / moto dataset (42GB)

To download a pretrained model, run:

```bash
bash scripts/download_model.sh
```
We provide a small (200MB) and a big (600MB) version of each pretrained model (see the training section for details). The command will download one of the following models:
- `car`, trained on CompCars: `car.pkl` / `car_big.pkl`
- `car_p3d`, trained on Pascal3D+: `car_p3d.pkl` / `car_p3d_big.pkl`
- `bird`, trained on CUB: `bird.pkl` / `bird_big.pkl`
- `moto`, trained on LSUN Motorbike: `moto.pkl` / `moto_big.pkl`
- `horse`, trained on LSUN Horse: `horse.pkl` / `horse_big.pkl`
- `sn_*`, trained on each ShapeNet category: airplane, bench, cabinet, car, chair, display, lamp, phone, rifle, sofa, speaker, table, vessel
- `sn_big_*`, trained on each ShapeNet category: airplane, bench, cabinet, car, chair, display, lamp, phone, rifle, sofa, speaker, table, vessel
You first need to download the car model (see above), then launch:

```bash
cuda=gpu_id model=car_big.pkl input=demo ./scripts/reconstruct.sh
```

where `gpu_id` is a target CUDA device id, `car_big.pkl` is a pretrained model, and `demo` is a folder containing the target images. Reconstruction results (.obj + gif) will be saved in a folder named `demo_rec`.
We also provide an interactive demo to reconstruct cars from single images.
To launch a training from scratch, run:

```bash
cuda=gpu_id config=filename.yml tag=run_tag ./scripts/pipeline.sh
```

where `gpu_id` is a device id, `filename.yml` is a config in the `configs` folder, and `run_tag` is a tag for the experiment.
Results are saved at `runs/${DATASET}/${DATE}_${run_tag}`, where `DATASET` is the dataset name specified in `filename.yml` and `DATE` is the current date in `mmdd` format.
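As a sketch of where to look for outputs, the run path can be assembled like this (`run_dir` is a hypothetical helper for illustration, not part of the repo):

```python
from datetime import date

def run_dir(dataset, run_tag, day=None):
    # Mirrors the convention above: results live in
    # runs/${DATASET}/${DATE}_${run_tag}, with DATE in mmdd format.
    day = day or date.today()
    return f"runs/{dataset}/{day:%m%d}_{run_tag}"
```

For example, a run tagged `bird_run` on the CUB dataset launched on July 15 would land in `runs/cub/0715_bird_run`.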
A model is evaluated at the end of training. To evaluate a pretrained model (e.g. `sn_big_airplane.pkl`):

1. rename the checkpoint to `model.pkl` and place it in a run folder (e.g. in `runs/shapenet_nmr/airplane_big`)
2. point the config to that run with `resume: airplane_big` in `airplane.yml`
3. launch the pipeline:

```bash
cuda=gpu_id config=sn_big/airplane.yml tag=airplane_big_eval ./scripts/pipeline.sh
```
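The staging described above (renaming the checkpoint to `model.pkl` inside a run folder so `resume:` can find it) can be sketched as a small helper; `stage_pretrained` is hypothetical and not part of the repo:

```python
import shutil
from pathlib import Path

def stage_pretrained(pkl_path, dataset, tag, runs_root="runs"):
    # Place a downloaded checkpoint where `resume: <tag>` expects it,
    # i.e. runs/<dataset>/<tag>/model.pkl (layout assumed from the
    # example path above).
    dst = Path(runs_root) / dataset / tag / "model.pkl"
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(pkl_path, dst)
    return dst
```

For instance, `stage_pretrained("sn_big_airplane.pkl", "shapenet_nmr", "airplane_big")` would copy the checkpoint to `runs/shapenet_nmr/airplane_big/model.pkl`.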
For CUB, the evaluation built into the training pipeline is Mask-IoU. To evaluate PCK, run:

```bash
cuda=gpu_id tag=run_tag ./scripts/kp_eval.sh
```
If you want to learn a model for a custom object category, here are the key steps:

1. create a `custom_name` folder inside the `datasets` folder
2. write a `custom.yml` (or `custom_big.yml`) config in the `configs` folder: this includes changing the dataset name to `custom_name` and setting all training milestones
3. launch the pipeline:

```bash
cuda=gpu_id config=custom.yml tag=custom_run_tag ./scripts/pipeline.sh
```
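The scaffolding steps above can be sketched as follows; `scaffold_custom` is a hypothetical helper, and the practical approach it encodes is to start from one of the provided configs rather than writing one from scratch:

```python
import shutil
from pathlib import Path

def scaffold_custom(name, base_config, root="."):
    # Create datasets/<name> for your images and copy an existing
    # config to configs/<name>.yml as a starting point. You still
    # need to edit the dataset name and training milestones by hand.
    root = Path(root)
    (root / "datasets" / name).mkdir(parents=True, exist_ok=True)
    cfg = root / "configs" / f"{name}.yml"
    cfg.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy(base_config, cfg)
    return cfg
```

A typical call would be `scaffold_custom("custom_name", "configs/car.yml")`, where `configs/car.yml` stands in for whichever provided config is closest to your category.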
If you like this project, check out related works from our group.