balthazarneveu / per-pixel-point-rendering

Study of ADOP: Approximate Differentiable One-Pixel Point Rendering

Per-pixel point rendering

:scroll: Report

This is a point cloud renderer (one pixel per point) + CNN image processing (= deferred rendering to inpaint the holes between projected points).
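A minimal sketch of this two-stage idea, with hypothetical names (splat_points and VanillaDecoder are illustrations, not the pixr API): points are splatted one pixel each into a sparse image, then a tiny CNN fills the holes.

import torch
import torch.nn as nn

def splat_points(points_xy, colors, height, width):
    # Naive one-pixel splatting: each point writes its color to the nearest
    # pixel, leaving holes wherever no point lands (no depth test here).
    image = torch.zeros(colors.shape[1], height, width)
    x = points_xy[:, 0].round().long().clamp(0, width - 1)
    y = points_xy[:, 1].round().long().clamp(0, height - 1)
    image[:, y, x] = colors.t()  # last point wins when several share a pixel
    return image

class VanillaDecoder(nn.Module):
    # Tiny CNN standing in for the "vanilla decoder": deferred rendering
    # that inpaints the holes between projected points.
    def __init__(self, in_channels=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, sparse_image):  # [N, C, H, W]
        return self.net(sparse_image)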

Live inference
using a very tiny CNN (a.k.a. the vanilla decoder) with only 100k points

Setup

Local install of pixr

git clone https://github.com/balthazarneveu/per-pixel-point-rendering.git
cd per-pixel-point-rendering
pip install -r requirements.txt
pip install -e .

Run the demo on a pre-trained scene:

python scripts/novel_views_interactive.py -e 55 -t pretrained_scene

Generating calibrated scenes

Each scene is described by a small configuration, e.g.:

if args.scene == "material_balls":
    config = {
        "distance": 4.,    # camera distance to the scene center
        "altitude": 0.,    # camera altitude
        "background_map": "__world_maps/city.exr"  # environment map used as background
    }

Code structure


Tensor convention

Images

[N, C, H, W]: batch size, channels, height, width.

Geometry

[M, p, d]
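For illustration (reading [M, p, d] as M primitives of p points in d dimensions is an assumption based on the shapes at play, e.g. triangles):

import torch

images = torch.zeros(4, 3, 256, 512)  # [N, C, H, W]: 4 RGB images of 256x512
triangles = torch.zeros(1000, 3, 3)   # [M, p, d]: presumably 1000 triangles,
                                      # p=3 vertices each, d=3 coordinates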

Splatting of points

Figures: fuzzy depth test (varying $\alpha$ on a scene with two very close triangles), normal culling, multiscale splatting.
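A minimal sketch of two of these tests, under the assumption that the fuzzy depth test blends points whose depths are close to the per-pixel minimum, with weights controlled by $\alpha$ (an illustration, not the repository's exact implementation):

import torch

def fuzzy_depth_weights(z, alpha=10.0):
    # Soft visibility weights for the candidate points of one pixel.
    # z: [K] depths. A small alpha blends points with nearby depths
    # (two very close triangles stop z-fighting); a very large alpha
    # approaches a hard closest-point test.
    w = torch.exp(-alpha * (z - z.min()))  # closest point gets weight 1
    return w / w.sum()

def normal_culling_mask(normals, view_dirs):
    # Keep points whose normal faces the camera; view_dirs points
    # from the camera toward each point.
    return (normals * view_dirs).sum(dim=-1) < 0.0

z = torch.tensor([1.000, 1.001, 2.5])
print(fuzzy_depth_weights(z, alpha=5.0))     # the two close points share the weight
print(fuzzy_depth_weights(z, alpha=5000.0))  # nearly a hard closest-point test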

Non-neural point-based rendering: optimizing per-point color

Each point of the point cloud is assigned a color vector (later this vector will have a larger dimension: pseudo-colors instead of plain RGB).
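A hedged sketch of this optimization on a toy splatting setup (the projection, image sizes, and random ground truth are made up; the real code differentiates through the actual splatter):

import torch

# Toy setup: P points with fixed pixel locations in each of V views.
P, V, H, W = 1000, 4, 32, 32
pix = torch.randint(0, H * W, (V, P))       # per-view pixel index of each point
gt_images = torch.rand(V, 3, H * W)         # ground-truth shaded images, flattened

colors = torch.zeros(P, 3, requires_grad=True)  # one color vector per point
optimizer = torch.optim.Adam([colors], lr=5e-2)

for step in range(200):
    optimizer.zero_grad()
    loss = 0.0
    for v in range(V):
        rendered = torch.zeros(3, H * W)
        rendered[:, pix[v]] = colors.t()    # one-pixel splatting (differentiable)
        covered = torch.zeros(H * W, dtype=torch.bool)
        covered[pix[v]] = True              # only supervise pixels hit by a point
        loss = loss + ((rendered - gt_images[v])[:, covered] ** 2).mean()
    loss.backward()                         # gradients flow back to each point's color
    optimizer.step()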

Figures: rendered colored point cloud (novel view synthesis) next to the ground-truth shaded images used to fit the per-point colors, so that the final rendering stays faithful.

Using a fuzzy depth test

Figures: closest-point depth test vs. fuzzy depth test.

To reproduce this demo: python studies/interactive_projections.py -n 200000 -s test_aliasing (sampling the point cloud from the triangles can take some time).