I use this API to read the short-throw depth map: I get the current frame's depth map as a SoftwareBitmap and save its data into a byte[]. My code is as follows:
```csharp
SensorFrame latestShortDepthCameraFrame =
    _shortDepthMediaFrameSourceGroup.GetLatestSensorFrame(_sensorTypeResearch);

var DepthFrame = latestShortDepthCameraFrame.SoftwareBitmap;
DepthFrame = SoftwareBitmap.Convert(DepthFrame, BitmapPixelFormat.Rgba8, BitmapAlphaMode.Premultiplied);

var w = DepthFrame.PixelWidth;
var h = DepthFrame.PixelHeight;

Destroy(_pvDepthTexture);
_pvDepthTexture = new Texture2D(w, h, TextureFormat.RGBA32, false);

if (bytes == null)
{
    bytes = new byte[w * h * 4];
}

DepthFrame.CopyToBuffer(bytes.AsBuffer());
DepthFrame.Dispose();
_pvDepthTexture.LoadRawTextureData(bytes);
```
But when I visualize it, the picture is almost completely black. I also checked the output byte[], and its maximum value is only 3, which is really confusing to me as a novice.
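For context, this is the kind of rescaling I suspect the raw data needs before display (just a sketch of my understanding, assuming the sensor frame is 16-bit depth values in millimetres; Python/NumPy is used here only for illustration, the example pixel values are made up):

```python
import numpy as np

# Hypothetical raw depth frame: four example pixels, assuming the sensor
# delivers 16-bit depth values in millimetres.
raw = np.array([[256, 512], [768, 1024]], dtype=np.uint16)

# Keeping only the low byte of each 16-bit value (what a naive 8-bit
# reinterpretation effectively does) discards the significant bits; for
# these example values every low byte is zero, i.e. pure black.
low_byte = (raw & 0xFF).astype(np.uint8)

# Normalising the actual value range into 0-255 instead yields a usable
# grayscale visualisation.
span = raw.max() - raw.min()
vis = ((raw.astype(np.float32) - raw.min()) / span * 255).astype(np.uint8)
```

Is something like this normalisation step what my code is missing, or is the data lost earlier, in the Rgba8 conversion?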
My expectation is that the depth map I obtain should look like the one produced by the Recorder sample program. Can someone help me solve this problem?