I'm experimenting with creating a colored point cloud from raw data without OpenGL. When I was creating a plain point cloud, I used getPointCloudDepthPos(), whose range could be limited with setLowThresholdPC() and setHighThresholdPC().
However, to align the color data with the cloud data, I'm using getPointCloudColorPos(), which returns a much higher-resolution point cloud. I was hoping that setting the thresholds would improve performance, but this cloud doesn't respond to them. Below is a screenshot: the depth cloud image shows how narrow my threshold is set, yet the point cloud still captures the whole room, and my FPS counter sits at about 3 fps.
Should the color point cloud respond to the thresholds? If not, is there a way to align color info with the cloud returned by getPointCloudDepthPos()?
Below is the code I am working with:
import java.nio.*;
import KinectPV2.*;

KinectPV2 kinect;

int vertLoc;

//transformations
float a = 3.1;
int zval = 700;
float scaleVal = 2.0;

//value to scale each depth point when accessing it individually in the point cloud
float scaleDepthPoint = 100.0;

//distance thresholds (mm)
int maxD = 1200; // 1.2 m
int minD = 700;  // 0.7 m

//VBO buffer location on the GPU
int vertexVboId;
public void setup() {
  size(1280, 720, P3D);

  kinect = new KinectPV2(this);

  kinect.enableDepthImg(true);
  kinect.enableColorImg(true);
  kinect.enableColorPointCloud(true);
  kinect.enablePointCloud(true);

  kinect.setLowThresholdPC(minD);
  kinect.setHighThresholdPC(maxD);

  kinect.init();

  print("width: " + str(kinect.WIDTHDepth));
}
public void draw() {
  background(0);

  //draw the color and depth capture images
  image(kinect.getColorImage(), 0, 0, 320, 240);
  image(kinect.getPointCloudDepthImage(), 320, 0, 320, 240);

  stroke(255);
  text(frameRate, 640, 10);

  //translate the scene to the center
  translate(width / 2, height / 2, zval);
  scale(scaleVal, -1 * scaleVal, scaleVal);
  rotate(a, 0.0f, 1.0f, 0.0f);

  //threshold of the point cloud
  kinect.setLowThresholdPC(minD);
  kinect.setHighThresholdPC(maxD);

  //get the points in 3D space
  FloatBuffer pointCloudBuffer = kinect.getPointCloudColorPos();

  //get the color for each point of the cloud
  FloatBuffer colorBuffer = kinect.getColorChannelBuffer();

  //obtain the XYZ values of the point cloud
  for (int i = 0; i < kinect.WIDTHColor * kinect.HEIGHTColor; i += 50) {
    float x = pointCloudBuffer.get(i*3 + 0) * scaleDepthPoint;
    float y = pointCloudBuffer.get(i*3 + 1) * scaleDepthPoint;
    float z = pointCloudBuffer.get(i*3 + 2) * scaleDepthPoint;

    //obtain the RGB values of the color buffer (values seem to be in BGR order?)
    float b = colorBuffer.get(i*3 + 0);
    float g = colorBuffer.get(i*3 + 1);
    float r = colorBuffer.get(i*3 + 2);

    stroke(r, g, b);
    point(x, y, z);
  }
}
public void mousePressed() {
  // saveFrame();
}

public void keyPressed() {
  if (key == 'a') {
    zval += 100;
    println("Z Value " + zval);
  }
  if (key == 's') {
    zval -= 100;
    println("Z Value " + zval);
  }
  if (key == 'z') {
    scaleVal += 1;
    println("Scale scene: " + scaleVal);
  }
  if (key == 'x') {
    scaleVal -= 1;
    println("Scale scene: " + scaleVal);
  }
  if (key == 'q') {
    a += .1;
    println("rotate scene: " + a);
  }
  if (key == 'w') {
    a -= .1;
    println("rotate scene: " + a);
  }
  if (key == '1') {
    minD += 10;
    println("Change min: " + minD);
  }
  if (key == '2') {
    minD -= 10;
    println("Change min: " + minD);
  }
  if (key == '3') {
    maxD += 10;
    println("Change max: " + maxD);
  }
  if (key == '4') {
    maxD -= 10;
    println("Change max: " + maxD);
  }
  if (key == 'c') {
    scaleDepthPoint += 1;
    println("Change Scale Depth Point: " + scaleDepthPoint);
  }
  if (key == 'v') {
    scaleDepthPoint -= 1;
    println("Change Scale Depth Point: " + scaleDepthPoint);
  }
}
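One workaround I've been considering, in case the thresholds really only apply to the depth cloud: skip points manually whenever their z falls outside the min/max range, before calling stroke() and point(). This is just a sketch in plain Java, and it assumes the buffer is interleaved x,y,z floats as in my loop above (the units of z relative to minD/maxD are also my assumption):

```java
import java.nio.FloatBuffer;

public class DepthFilter {
    // Count (or in the sketch, draw) only points whose z lies in [minZ, maxZ].
    // Assumes the same interleaved x,y,z float layout as getPointCloudColorPos().
    static int countInRange(FloatBuffer points, int numPoints,
                            float minZ, float maxZ) {
        int kept = 0;
        for (int i = 0; i < numPoints; i++) {
            float z = points.get(i * 3 + 2);
            if (z >= minZ && z <= maxZ) {
                kept++; // in the sketch: stroke(r, g, b); point(x, y, z);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        // three fake points at z = 0.5, 1.0, 2.0
        FloatBuffer pts = FloatBuffer.wrap(new float[] {
            0f, 0f, 0.5f,
            0f, 0f, 1.0f,
            0f, 0f, 2.0f
        });
        System.out.println(countInRange(pts, 3, 0.7f, 1.2f)); // prints 1
    }
}
```

This wouldn't reduce the size of the buffer the library hands back, so it probably won't fix the 3 fps issue by itself, but it would at least restrict what gets drawn.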