xyz2357 closed this issue 6 years ago
Oh, I haven't added the code to resize to 256x256 in demo.py. You can do it yourself when testing your own image. Sorry about that.
Thanks, that solved it. BTW, is there any way to output the 3D points to a file?
`print pred` and copy it to a file.
`pred` seems to be a 2D result.
Is it a good idea to uncomment `#print 'show3D', c, points` in debugger.py, line 12?
https://github.com/xingyizhou/pytorch-pose-hg-3d/blob/84ad44e7a8aa15307b9a371ce85b3dee8d5ad2dc/src/utils/debugger.py#L12
Are those the predicted 3D positions of all the key points?
Oh, yes you can. Or just simply print `np.concatenate([pred, (reg + 1) / 2. * 256])`.
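A sketch of dumping those 3D joints to a file, assuming `pred` is a (K, 2) array of 2D pixel coordinates in the 256x256 input and `reg` is a (K, 1) array of regressed depths in [-1, 1] (the names come from this thread; the shapes and the random stand-in values are my assumptions):

```python
import numpy as np

# Hypothetical stand-ins for the network outputs: pred holds (K, 2)
# 2D joint coordinates in the 256x256 input, reg holds (K, 1)
# regressed depths in [-1, 1].
K = 16
pred = np.random.rand(K, 2) * 256
reg = np.random.rand(K, 1) * 2 - 1

# Bring depth onto the same 256-pixel scale as x/y, then stack to (K, 3).
z = (reg + 1) / 2. * 256
points_3d = np.concatenate([pred, z], axis=1)

# Write one joint per row: x y z.
np.savetxt('joints_3d.txt', points_3d, fmt='%.3f')
```

`np.savetxt` gives a plain text file that is easy to inspect or load back with `np.loadtxt`.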
OK, thanks!
Sorry, one more thing: I've got the plot for the resized image. How can I convert the plot back so that I get the points in the original picture's coordinates? For x I can use new_x_val = x_val * origin_x_size / 256, and similarly for y; but how can I convert the z-axis back?
There is no absolute z; you can use `(reg + 1) / 2. * 256` to convert z with the same aspect ratio as x/y in the image coordinate system (under a weak-perspective camera model). A very detailed explanation of why/how to use this coordinate-system calibration can be found in Section 3.2 of https://arxiv.org/pdf/1803.09331.pdf .
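The rescaling described above can be sketched as follows (a hedged example; the function name and the choice of which axis scales z are illustrative conventions, since absolute depth is not available):

```python
import numpy as np


def to_original_coords(points_3d, orig_w, orig_h, input_size=256):
    """Map (x, y, z) points predicted in the resized input_size x
    input_size image back to the original image's coordinates."""
    out = np.asarray(points_3d, dtype=float).copy()
    out[:, 0] *= orig_w / float(input_size)   # x back to original width
    out[:, 1] *= orig_h / float(input_size)   # y back to original height
    # z is only relative (weak-perspective model), so scale it by the
    # same factor as x to keep the aspect ratio consistent.
    out[:, 2] *= orig_w / float(input_size)
    return out
```

For a square original image both scale factors coincide; for non-square images, which factor z follows is a convention you have to pick.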
OK, thanks!
I put the pretrained model and a test picture in /src/, and ran
python demo.py -demo test_1.png
The stderr output is