Closed carstenschwede closed 7 years ago
Hi carstenschwede, Thanks for pointing out the problems. Your understanding is correct. We find that our system does not perform well in real environments either. However, it can be improved by adding data augmentation during training, as https://github.com/jsupancic/deep_hand_pose does.
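For concreteness, in-plane rotation augmentation for pose data can be sketched roughly as below. This is a minimal illustration of the general idea, not the actual deep_hand_pose code; the function name `rotate_joints` and the choice of rotation center are my own assumptions.

```python
import numpy as np

def rotate_joints(joints_uv, center_uv, angle_deg):
    """Rotate 2D joint locations (shape (N, 2), pixel coords) around
    center_uv by angle_deg. Hypothetical helper sketching in-plane
    rotation augmentation; the depth image would be rotated by the
    same angle around the same center."""
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # shift to the rotation center, rotate, shift back
    return (joints_uv - center_uv) @ rot.T + center_uv

# example: rotate two joints by 90 degrees around the image center
joints = np.array([[120.0, 100.0], [130.0, 110.0]])
rotated = rotate_joints(joints, np.array([128.0, 128.0]), 90.0)
```

During training one would sample a random angle per image and apply the same rotation to both the depth map and the joint labels, so the pair stays consistent.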
If I understand correctly, Oberweger et al. also use the wrist joint as the center of the image during training (see data/importers.py, line 643, in their published code). For the numbers reported in our paper, we used exactly their code for data processing and then converted the output to h5 format.
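For reference, the center-based cropping I have in mind looks roughly like the sketch below. This is a simplified illustration, not Oberweger et al.'s actual importers.py code; the crop size, cube size, and function name are assumptions.

```python
import numpy as np

def crop_hand(depth, center_uv, crop=128, cube_z=300.0):
    """Minimal sketch of center-based cropping: take a crop x crop
    window around center_uv (pixel coords, e.g. the wrist joint) and
    normalize depths within +/- cube_z/2 mm of the center depth to
    [-1, 1]. The 128 px / 300 mm sizes are illustrative assumptions."""
    u, v = int(round(center_uv[0])), int(round(center_uv[1]))
    half = crop // 2
    # pad so the window never leaves the image bounds
    padded = np.pad(depth, half, mode="edge")
    patch = padded[v:v + crop, u:u + crop].astype(np.float32)
    z = depth[v, u]  # depth at the center joint
    # clamp to a depth cube around the hand, then normalize
    patch = np.clip(patch, z - cube_z / 2, z + cube_z / 2)
    return (patch - z) / (cube_z / 2)

depth = np.full((240, 320), 500.0, dtype=np.float32)  # toy depth map (mm)
patch = crop_hand(depth, (160, 120))
```

Whether the center is the wrist joint or the center of mass only changes `center_uv`; the cropping and normalization steps stay the same.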
Hi tenstep, I only found a rotation preprocessing step in https://github.com/jsupancic/deep_hand_pose. Is that the data augmentation you described above?
Since your paper models the hand joints, I thought the rotation augmentation might not affect your training procedure, only the training of the hand joint locations.
Regards!
Hi Minotaur-CN,
As far as I understand the preprocessing part, the procedure is:
However, hands in the test depth images [0,772,1150,1350,1739].png are not detected correctly once the image is flipped or rotated.
Are there more steps involved in preprocessing the depth images (e.g. rotation, left/right hand orientation)?
*) Oberweger et al. (2015) used a center-of-mass approach to detect the center of the hand, whereas in https://github.com/tenstep/DeepModel/issues/4 the wrist position is mentioned as being the center. Could you clarify this?
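To be clear about what I mean by a center-of-mass approach, it is roughly the following (a minimal sketch in my own words, not Oberweger et al.'s code; the depth thresholds and function name are assumptions):

```python
import numpy as np

def center_of_mass(depth, near=10.0, far=1000.0):
    """Sketch of center-of-mass hand detection: average the pixel
    coordinates (and depth) of all pixels within a plausible depth
    range. The near/far thresholds (mm) are illustrative assumptions."""
    mask = (depth > near) & (depth < far)
    vs, us = np.nonzero(mask)
    # (u, v, z): mean column, mean row, mean depth of the hand pixels
    return np.array([us.mean(), vs.mean(), depth[mask].mean()])

# toy example: a 20x20 "hand" blob in an otherwise empty depth map
depth = np.zeros((100, 100), dtype=np.float32)
depth[40:60, 20:40] = 500.0
center = center_of_mass(depth)
```

Using this center versus the wrist joint would shift where the crop window is placed, which is why I am asking which one your reported numbers actually used.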