stevelibre / onipy

Automatically exported from code.google.com/p/onipy
GNU Lesser General Public License v3.0

Conversion of the depth map to 8bit data gets clamped #4

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
The call to GetGrayscale8DepthMapRaw in Python produces incorrect values for 
objects that are far away.

The problem is caused by an incorrect cast between signed and unsigned char 
types in the wrapper code. As a result, the depth values get clamped to the 
range [0,128] instead of the expected [0,255].

This problem affects versions 0.4 alpha and prior. 
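The effect can be reproduced without the wrapper. Below is a minimal NumPy sketch (the depth values are illustrative, assuming the Kinect's raw 11-bit range of 0..2047, not output from the library) showing how an 8-bit result stored as a signed char loses the upper half of its range:

```python
import numpy as np

# Hypothetical 11-bit depth readings (0..2047); far objects sit near the top.
depth = np.array([0, 512, 1024, 2047], dtype=np.uint16)

# Scale into the 8-bit range [0, 255].
scaled = (depth.astype(np.float32) / 2048.0 * 256.0).astype(np.int32)

as_unsigned = scaled.astype(np.uint8)  # correct: full [0, 255] range
as_signed = scaled.astype(np.int8)     # wrong: values >= 128 wrap negative

print(as_unsigned)  # [  0  64 128 255]
print(as_signed)    # [   0   64 -128   -1]
```

Any depth that should map above 127 comes back wrapped or clamped once it passes through a signed char, which matches the reported [0,128] ceiling.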

A workaround for the problem is to replace the function convertToGrayscale8Raw 
in conversionHelpers.cpp with the following:

void convertToGrayscale8Raw( 
    std::string& targetMapRaw, 
    XnDepthPixel const* sourceMap, 
    XnUInt32 sourceXResolution, 
    XnUInt32 sourceYResolution )
{

    XnUInt32 rowIndex;
    XnUInt32 columnIndex;
    unsigned int imageIndex;
    XnDepthPixel currentPixelDepth;
    float currentPixelDepthNormalized;
    int currentPixelDepthQuantized;

    imageIndex = 0;
    targetMapRaw.resize( sourceXResolution * sourceYResolution );

    // Rows run over the Y resolution, columns over the X resolution.
    for( rowIndex = 0; rowIndex < sourceYResolution; ++rowIndex )
    {

        for( columnIndex = 0; columnIndex < sourceXResolution; ++columnIndex )
        {

            currentPixelDepth = sourceMap[ imageIndex ];

            // Normalize the 11-bit depth (0..2047) to [0,1), then quantize
            // to the full 8-bit range [0,255].
            currentPixelDepthNormalized = (float)currentPixelDepth / 2048.0f;
            currentPixelDepthQuantized = (int)( 
                currentPixelDepthNormalized * 256.0f );

            // Cast through unsigned char so values above 127 are not
            // misinterpreted as negative signed chars.
            targetMapRaw[ imageIndex ] = 
                (char)(unsigned char)currentPixelDepthQuantized;

            ++imageIndex;

        }   // for columns

    }   // for rows

}   // convertToGrayscale8Raw

Original issue reported on code.google.com by gnatan...@gmail.com on 2 Mar 2011 at 4:32
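For comparison, the same conversion can be written on the Python side with NumPy. This is a sketch of an equivalent, not part of onipy itself; the function name is made up here, and the 2048 normalization constant mirrors the C++ above under the assumption of an 11-bit sensor range:

```python
import numpy as np

def convert_to_grayscale8(depth_map):
    """Quantize 11-bit depth values (0..2047) into uint8 [0, 255]."""
    d = np.asarray(depth_map, dtype=np.float32)
    # np.clip guards against any depth at or above 2048 overflowing uint8.
    return np.clip(d / 2048.0 * 256.0, 0, 255).astype(np.uint8)
```

Because the result is a uint8 array end to end, there is no signed char anywhere for large values to wrap through.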

GoogleCodeExporter commented 8 years ago
Found this little piece online somewhere...

#!/usr/bin/env python
from freenect import sync_get_depth as get_depth, sync_get_video as get_video
import cv  
import numpy as np

def doloop():
    global depth, rgb
    while True:
        # Get a fresh frame
        (depth,_), (rgb,_) = get_depth(), get_video()

        # Build a two panel color image (note: astype(np.uint8) keeps only
        # the low 8 bits of the 11-bit depth values)
        d3 = np.dstack((depth,depth,depth)).astype(np.uint8)
        da = np.hstack((d3,rgb))

        # Simple Downsample
        cv.ShowImage('both',np.array(da[::2,::2,::-1]))
        cv.WaitKey(5)

doloop()

"""
IPython usage:
 ipython
 [1]: run -i demo_freenect
 # <Ctrl-C>  (to interrupt the loop)
 [2]: %timeit -n100 get_depth(), get_video() # profile the kinect capture

"""

Original comment by john.mrd...@googlemail.com on 18 May 2011 at 10:48
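The panel-building step in that demo can be checked without Kinect hardware. A sketch with dummy frames, assuming the usual 640x480 Kinect resolution:

```python
import numpy as np

# Dummy frames standing in for sync_get_depth() / sync_get_video() output.
depth = np.zeros((480, 640), dtype=np.uint16)   # 11-bit depth map
rgb = np.zeros((480, 640, 3), dtype=np.uint8)   # RGB frame

d3 = np.dstack((depth, depth, depth)).astype(np.uint8)  # gray -> 3 channels
da = np.hstack((d3, rgb))                               # side-by-side panels
small = da[::2, ::2, ::-1]                              # downsample, RGB -> BGR

print(da.shape, small.shape)  # (480, 1280, 3) (240, 640, 3)
```

The `[::-1]` on the channel axis reorders RGB to BGR, which is the byte order OpenCV's display functions expect.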