openrndr / orx

A growing library of assorted data structures, algorithms and utilities for OPENRNDR
https://openrndr.org
BSD 2-Clause "Simplified" License

Kinect pointclouds #192

Open · davidmoshal opened this issue 3 years ago

davidmoshal commented 3 years ago

Hi, this project looks great. I'm wondering if Kinect point clouds are supported? Are there any point cloud or 3D libraries?

morisil commented 2 years ago

Hi @davidmoshal , I contributed the original Kinect v1 support. It differs from the Kinect v1 support in other projects in the sense that the actual transformation of raw Kinect depth data into floating-point numbers already happens on the GPU, in a fragment shader. I saw the formula for transforming Kinect data into a point cloud at some point. It is relatively simple and could also be implemented in the shader. The question, however, is what such a shader should output. I can imagine:

  1. using GL_POINTS and having a fragment shader encode the points out of raw Kinect data as color vectors into a buffer, which can then be used for drawing instances directly on the GPU (see the sketch after this list)
  2. preparing the whole generative mesh in a similar fashion
  3. achieving a similar effect with a vertex shader doing texture extrusion based on the same formula for calculating the point cloud coordinates
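
To make the first option a bit more concrete, the drawing side could be as simple as a vertex buffer with one point per depth pixel, drawn as points, with the actual positions later written by a GPU pass. This is only an untested outline using OPENRNDR's vertexBuffer and drawer APIs; the depth-to-world pass itself is not shown here.

import org.openrndr.application
import org.openrndr.draw.DrawPrimitive
import org.openrndr.draw.vertexBuffer
import org.openrndr.draw.vertexFormat
import org.openrndr.math.Vector3

fun main() = application {
    program {
        // one vertex per Kinect 1 depth pixel (640x480); in the approach above
        // a shader pass would fill this buffer with world-space positions
        val positions = vertexBuffer(vertexFormat { position(3) }, 640 * 480)
        positions.put {
            for (y in 0 until 480) {
                for (x in 0 until 640) {
                    // placeholder: plain pixel coordinates, visible under the default
                    // orthographic view; a depth-to-world pass would replace these
                    write(Vector3(x.toDouble(), y.toDouble(), 0.0))
                }
            }
        }
        extend {
            drawer.vertexBuffer(positions, DrawPrimitive.POINTS)
        }
    }
}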

I have wanted to provide these features for a long time, but I am still learning the skills that would allow me to write them on top of what I have already provided. Maybe at the beginning of next year I will add it as a component generalized over any depth camera device, to realize a true, predictable "3D scanner" out of any possible input.

morisil commented 2 years ago

Just for reference, here is an article providing the RawDepthToMeters mapping and also the DepthToWorld mapping for Kinect 1:

http://graphics.stanford.edu/~mdfisher/Kinect.html

float RawDepthToMeters(int depthValue)
{
    // raw 11-bit Kinect 1 depth to meters; 2047 marks an invalid measurement
    if (depthValue < 2047)
    {
        return float(1.0 / (double(depthValue) * -0.0030711016 + 3.3309495161));
    }
    return 0.0f;
}

Vec3f DepthToWorld(int x, int y, int depthValue)
{
    // inverse focal lengths (fx_d, fy_d) and principal point (cx_d, cy_d)
    // of the Kinect 1 depth camera
    static const double fx_d = 1.0 / 5.9421434211923247e+02;
    static const double fy_d = 1.0 / 5.9104053696870778e+02;
    static const double cx_d = 3.3930780975300314e+02;
    static const double cy_d = 2.4273913761751615e+02;

    Vec3f result;
    const double depth = RawDepthToMeters(depthValue);
    result.x = float((x - cx_d) * depth * fx_d);
    result.y = float((y - cy_d) * depth * fy_d);
    result.z = float(depth);
    return result;
}
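
For completeness, the same mapping might look roughly like this on the Kotlin side (an untested sketch; Vector3 is OPENRNDR's org.openrndr.math.Vector3 and the constants are the depth-camera intrinsics quoted above):

import org.openrndr.math.Vector3

// inverse focal lengths and principal point of the Kinect 1 depth camera,
// taken from the intrinsics quoted in the C++ snippet above
private val fxD = 1.0 / 5.9421434211923247e+02
private val fyD = 1.0 / 5.9104053696870778e+02
private val cxD = 3.3930780975300314e+02
private val cyD = 2.4273913761751615e+02

// raw 11-bit depth to meters; 2047 marks an invalid measurement
fun rawDepthToMeters(depthValue: Int): Double =
    if (depthValue < 2047) 1.0 / (depthValue * -0.0030711016 + 3.3309495161) else 0.0

// depth-image pixel (x, y) plus raw depth to a world-space position in meters
fun depthToWorld(x: Int, y: Int, depthValue: Int): Vector3 {
    val depth = rawDepthToMeters(depthValue)
    return Vector3((x - cxD) * depth * fxD, (y - cyD) * depth * fyD, depth)
}
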
davidmoshal commented 2 years ago

@morisil thanks! Those are awesome links, much appreciated!!

hamoid commented 2 years ago

Has something changed regarding this issue with the recent release of the depth camera orx?

morisil commented 2 years ago

@hamoid depth-to-meters is now implemented in the shader processing raw Kinect data, which is a precondition. Depth-to-world would work best as a compute shader (or a fragment shader, GPGPU-style, for compatibility) calculating the positions of a buffer of instances, to avoid a round trip back to the CPU. Either points or quad vertices can be used. It is not that difficult to implement, and I want to do it at some point, less for the sake of having a point cloud and more for the sake of having a proportional camera perspective under diverse projection mapping / space mapping conditions.
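
To make that idea concrete, here is an untested sketch of what such a GPGPU pass could look like, with the GLSL kept as a Kotlin string: one fragment is processed per depth pixel and the world-space position is written into a float color buffer that a later draw pass could read as per-point positions. The uniform name uRawDepth, the assumption that the raw 0..2047 value arrives in the red channel, and the intrinsic constants (from the Stanford article above) are all just assumptions of this sketch, not how any existing orx module is written.

// GLSL for a hypothetical depth-to-world pass, stored as a Kotlin string
val depthToWorldShader = """
    #version 330

    // raw Kinect 1 depth, assumed to arrive as values in 0..2047 in the red channel
    uniform sampler2D uRawDepth;
    out vec4 oWorldPosition;

    // Kinect 1 depth-camera intrinsics from the article referenced earlier
    const float fxD = 1.0 / 594.21434211923247;
    const float fyD = 1.0 / 591.04053696870778;
    const float cxD = 339.30780975300314;
    const float cyD = 242.73913761751615;

    void main() {
        ivec2 pixel = ivec2(gl_FragCoord.xy);
        float raw = texelFetch(uRawDepth, pixel, 0).r;
        // same RawDepthToMeters formula as above; 2047 means "no measurement"
        float depth = raw < 2047.0 ? 1.0 / (raw * -0.0030711016 + 3.3309495161) : 0.0;
        // same DepthToWorld formula as above, evaluated per fragment
        oWorldPosition = vec4(
            (float(pixel.x) - cxD) * depth * fxD,
            (float(pixel.y) - cyD) * depth * fyD,
            depth,
            1.0
        );
    }
""".trimIndent()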