Closed flugenheimer closed 5 years ago
uint16 should be fine, but your numbers are way off. I loaded the depth files using the sixd_toolkit functions defined in inout.py, i.e.:
```python
import numpy as np
import scipy.misc

def load_depth2(path):
    # Read the 16-bit depth image and convert to float32 (values in mm)
    d = scipy.misc.imread(path)
    d = d.astype(np.float32)
    return d
```
The depth images should be in float32 and mm scale in the end.
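As a minimal sketch of that convention (assuming the raw files are uint16 with an optional per-dataset `depth_scale` factor, a name borrowed from the config discussed below; 1.0 means the raw values are already in mm):

```python
import numpy as np

def to_float32_mm(raw_depth, depth_scale=1.0):
    # Convert a raw uint16 depth map to float32 in millimeters.
    # `depth_scale` is a hypothetical per-dataset factor; with uint16
    # images stored directly in mm it is simply 1.0.
    return raw_depth.astype(np.float32) * depth_scale

# Simulated raw depth image (uint16, values in mm)
raw = np.array([[0, 500], [1000, 65535]], dtype=np.uint16)
d = to_float32_mm(raw)
assert d.dtype == np.float32
```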
Ah, that solved one issue. However, I am running into another: the depth image in the render_many function of meshrenderer_phong.py is all zeros, which means ys and xs will be empty arrays, and that crashes when the bounding boxes are calculated below:
```python
glNamedFramebufferReadBuffer(self._fbo_depth.id, GL_COLOR_ATTACHMENT1)
depth_flipped = glReadPixels(0, 0, W, H, GL_RED, GL_FLOAT).reshape(H, W)
depth = np.flipud(depth_flipped).copy()
ys, xs = np.nonzero(depth > 0)
obj_bb = misc.calc_2d_bbox(xs, ys, (W, H))
bbs.append(obj_bb)
```
giving the following ValueError:
```
ValueError: zero-size array to reduction operation minimum which has no identity
```
Changing the depth_scale in the aae_retina_webcam.cfg file makes everything run, but I see no visual difference between the results with ICP and without it.
Is there a way to adapt it to support a 16-bit depth image? If I use it directly it does not work, as the printout in the icp_refinement method of icp.py shows:
If I just load it as 8-bit, the numbers look closer, I guess:
However, when I inspect the 8-bit image I do not see any depth variation across my object, due to the lower depth resolution. I am therefore pretty sure I need the 16-bit depth.
I am currently just testing by loading image files with OpenCV, so they load as uint8 and uint16.
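To illustrate why the 8-bit load flattens the object: depths around 1 m that differ by a few mm cannot survive the conversion to 0..255. A numpy-only sketch (assuming the 8-bit load effectively keeps only the high byte of each uint16 value, which is what a divide-by-256 rescale amounts to):

```python
import numpy as np

# Two surface points ~3 mm apart on an object ~1 m from the camera.
depth16 = np.array([1000, 1003], dtype=np.uint16)  # mm, 16-bit

# An 8-bit load effectively keeps only the high byte, so small
# depth differences vanish.
depth8 = (depth16 >> 8).astype(np.uint8)

assert depth16[1] - depth16[0] == 3  # difference preserved in 16-bit
assert depth8[1] == depth8[0]        # difference lost in 8-bit
```

This is consistent with the flat-looking 8-bit image: the whole object collapses onto one gray level.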