chijuiwu opened 9 years ago
Not sure if this is useful at all, but the config file after Calibrate/Acquire is at http://pastebin.com/u8z0bX0M
Hi,
Bad things can happen if the projected image is very small in the camera image; also if the projected area is flat. These seem to be the most common issues.
Can you zip up the entire directory with the xml file, and put it in a place where I can download it? I will take a look.
Thanks
I'm having the exact same error. @thundercarrot, I would add another common issue: interreflection, which can occur at the corner of two walls and disrupt the Gray code scanning.
I get an exception during RANSAC while solving.
@tcboy88, @micuat, I am happy to take a closer look at your configuration if you zip up all the acquired files and place them where I can download them.
@tcboy88, You might try rearranging the camera and/or projector so that the projected image is bigger. I'm not certain that is your issue however.
@micuat, interreflections and specularities (shiny surfaces) are problems too for the depth camera.
btw I've opened a forum for the RoomAlive Toolkit
Andy
Hi all,
I'm getting the same thing. I keep getting null reference exceptions. The pose data from the camera is not being set during the calibration and solve stages, so when it gets to the matrix math, the "A" matrix is always null when trying to multiply it out.
Please help.
@dngoins I am happy to take a closer look at your configuration if you zip up all the acquired files and place them where I can download them. The main problems people seem to be facing are that the scene is too planar (flat) or that the projection is too small in the camera's view. In these cases the Solve step may fail. There is better error checking in the 'develop' branch.
Also it seems like calibration would fail if the projection surface spans across more than one wall, e.g. if there is a wall corner.
I am still getting a crash during RANSAC.
I downloaded the sample data "calibration3x3", pulled the latest git from the master branch, and built it successfully. I ran "CalibrateEnsemble.exe" in the debug folder, opened and reloaded the "ensemble.xml" without changing anything, clicked "solve", and it crashed during RANSAC.
I am using Windows 8.1 64-bit, an i7-5820k and a GTX 970. Does anyone know why?
Extra: running "CalibrateEnsemble.exe" in the release folder results in an immediate crash.
Can you run it in the debugger and see exactly where it fails?
thanks for the quick response!
By setting "CalibrateEnsemble" as the StartUp project in Visual Studio 2013, I have successfully run the calibration by opening the provided "ensemble.xml". However, as far as I know, the provided "ensemble.xml" is already calibrated.
So I tried a single pro-cam calibration by keeping only the "cameracenter" and "projectorcenter" files from the "calibration3x3" sample data and deleting the "left" and "right" data. I created a new xml with 1 projector and 1 camera, and tried changing the displayIndex to both 0 and 1. I reloaded this newly created xml and clicked "solve". It crashes at "var depthFrameToCameraSpaceTable = calibration.ComputeDepthFrameToCameraSpaceTable();"
I uploaded the directory to dropbox @ https://dl.dropboxusercontent.com/u/18371015/tcboy.rar Basically it is just a stripped down version of the sample data "calibration3x3".
Thanks for uploading your files.
The problem is that the "Solve" procedure assumes that the 'calibration' section of the camera is populated. Note this calibration information relates only to the Kinect camera. In the current scheme, "Acquire" uses live connections to the cameras and projectors and saves everything to disk, while "Solve" works completely offline, so this calibration must be saved somewhere. You can fix this by copying this section back into the .xml file. I may add some error checking to handle this case, since I've seen it before.
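For anyone hitting the same thing, the camera section of the .xml should end up looking roughly like this after Acquire. This is only a sketch: the element names below are from memory of my own file and may differ between versions, and the matrix contents are placeholders, so copy the actual values from a file written by a live Acquire rather than typing them in:

```xml
<cameras>
  <ProjectorCameraEnsemble.Camera>
    <name>cameracenter</name>
    <hostNameOrAddress>localhost</hostNameOrAddress>
    <!-- Kinect intrinsics saved during Acquire; Solve reads these offline.
         If this element is missing or empty, Solve fails with the null
         reference described above. -->
    <calibration>
      <colorCameraMatrix>...</colorCameraMatrix>
      <depthCameraMatrix>...</depthCameraMatrix>
      <depthToColorTransform>...</depthToColorTransform>
    </calibration>
  </ProjectorCameraEnsemble.Camera>
</cameras>
```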
Thank you very much for the response! I have successfully run the calibration (acquire & solve) and the projection samples. Previously my problem was with directly running the .exe, which always crashed during RANSAC. But now I run it from Visual Studio, and it runs perfectly. Thanks for the great work!!!
by the way, is there any plan for Unity plugin?
I recently got the calibration and ProjectionMappingSample working. I would like to take the data acquired from the calibration step and overlay depth-based color onto objects in the scene, as well as onto a person walking through it. This seems like it should be a relatively straightforward implementation, although I am struggling with where to get started. Any suggestions?
Try starting with the RadialWobble pixel shader (RadialWobblePS.hlsl). Note that PSInput includes the (interpolated) vertex position. Try emitting a color that is a function of pos.z. Lucky for you, that shader code also includes RGB/HSV color conversion routines. Good luck!
Thanks for the help! I have been playing with this shader but so far have only been able to output solid colors to the whole screen. I have been digging around, although this code is all new to me (I'm more familiar with C# and Unity programming). Any further insight would be greatly appreciated!
cheers!
In RadialWobblePS.hlsl, try having main simply return input.pos.z * float4(1,1,1,1)
I've tried this and a few other approaches. It seems that x and y return values fine, but z is always 0.
I'm sorry ixikos, I think I led you astray. RadialWobblePS is a screen space shader, so it's no surprise that the pos data is nonsensical. Those values are never assigned!
Here is something more like the right approach: Leave the radial wobble effect enabled in the sample.
At line 360 in ProjectionMappingSample.cs, change
passThroughShader.Render(deviceContext, filteredUserViewSRV, userViewForm.renderTargetView);
to
passThroughShader.Render(deviceContext, userViewSRV, userViewForm.renderTargetView);
This just selects what is usually the input to the RadialWobble shader to be the final userView.
Then change DepthAndColorPS.hlsl to simply return the depth, scaled by a bit:
return input.depth/10 * float4(1,1,1,1);
Note that CalibrateEnsemble uses DepthAndColorPS, so you will be changing the way meshes are rendered. Really, once you have an idea of what you are doing (sorry again for slowing you down), the thing to do is to make your own .hlsl for this effect, create your own shader class like the others, and make it another mode in the sample.
Honestly, rendering color in this way is a pretty good way to check the quality of the depth map and calibration, and is kind of fun besides. I was thinking of including it as a mode in the sample.
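As a standalone version of the idea, a depth-coloring pixel shader might look something like the sketch below. This is not code from the toolkit: the PSInput layout is assumed to match what DepthAndColorPS receives (a depth value interpolated from the vertex shader, in meters), and the HSV conversion is written inline here rather than reusing the routines in RadialWobblePS.hlsl:

```hlsl
// Sketch of a depth-coloring pixel shader; input layout is assumed.
struct PSInput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
    float depth : TEXCOORD1; // depth in meters, interpolated per pixel
};

// Minimal HSV -> RGB conversion; h, s, v all in [0, 1].
float3 HSVtoRGB(float3 hsv)
{
    float h = hsv.x * 6;
    float c = hsv.z * hsv.y;
    float x = c * (1 - abs(fmod(h, 2) - 1));
    float3 rgb =
        h < 1 ? float3(c, x, 0) :
        h < 2 ? float3(x, c, 0) :
        h < 3 ? float3(0, c, x) :
        h < 4 ? float3(0, x, c) :
        h < 5 ? float3(x, 0, c) :
                float3(c, 0, x);
    return rgb + (hsv.z - c);
}

float4 main(PSInput input) : SV_Target0
{
    // Map roughly 0..8 m of depth onto the hue circle.
    float hue = saturate(input.depth / 8);
    return float4(HSVtoRGB(float3(hue, 1, 1)), 1);
}
```

Mapping depth to hue rather than brightness makes small depth steps much easier to see, which is handy when eyeballing calibration quality.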
Thank you, this is fantastic, got it working. I need to play around with this in a larger space but the idea is to do realtime projection mapping onto people and this is a great start.
Place 1: https://github.com/Kinect/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Calibrate.cs#L857
Call Stack:
at RoomAliveToolkit.ConsoleRedirection.Write(Object value)
at System.IO.TextWriter.SyncTextWriter.Write(Object value)
at RoomAliveToolkit.ProjectorCameraEnsemble.CalibrateProjectorGroups(String directory)
at RoomAliveToolkit.MainForm.Solve()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
Inside projector.calibrationPointSets[camera].pose, mbymWorkMatrix1, squareWorkMatrix1, workColumn1, and workIndx1 are null.
Suppose the above error is worked around with the following code:
if (projector.calibrationPointSets[camera].pose != null) { Console.Write(projector.calibrationPointSets[camera].pose); }
Another null pointer exception occurs at place 2:
https://github.com/Kinect/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Matrix.cs#L430
A is null.
*Edit: Looks like projector.calibrationPointSets[fixedCamera].pose is null at https://github.com/Kinect/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Calibrate.cs#L902
Call stack:
at RoomAliveToolkit.Matrix.ToMathNet(Matrix A) in h:\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\Matrix.cs:line 430
at RoomAliveToolkit.Matrix.Inverse(Matrix A) in h:\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\Matrix.cs:line 455
at RoomAliveToolkit.ProjectorCameraEnsemble.UnifyPose() in h:\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\Calibrate.cs:line 907
at RoomAliveToolkit.ProjectorCameraEnsemble.OptimizePose() in h:\RoomAliveToolkit\ProCamCalibration\ProCamEnsembleCalibration\Calibrate.cs:line 936
at RoomAliveToolkit.MainForm.Solve() in h:\RoomAliveToolkit\ProCamCalibration\CalibrateEnsemble\MainForm.cs:line 997
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()
Any help would be much appreciated.
Thanks.