Official Torch7 implementation of "V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map", CVPR 2018
Hi,
What batchSize values did you use in your experiments on the different datasets (e.g., MSRA, NYU)? I see batchSz = 1 in config.lua; was that used for all of your experiments?
I trained a few models on the MSRA hand dataset and on a smaller dataset of my own (not hand-related) with V2V-PoseNet-pytorch, and I found that smaller batch sizes often achieve much better results; the models perform badly once the batch size goes up to 32 or 64. Have you faced a similar situation in your training (with your own Torch7 implementation)?
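To make the question concrete, here is a minimal, self-contained sketch of the kind of sweep I mean. The model, the data shapes, the base learning rate, and the linear LR scaling are all placeholders for illustration, not code from either repository; I suspect the BatchNorm3d layers are part of why batch size matters here, since with batchSz = 1 their training-time statistics come from a single sample.

```python
# Sketch only: a dummy 3D-conv model and random voxel data stand in for
# the real V2V-PoseNet-pytorch model and voxelized depth maps.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for voxelized depth inputs and per-joint 3D heatmap targets.
inputs = torch.randn(64, 1, 32, 32, 32)
targets = torch.randn(64, 4, 32, 32, 32)
dataset = TensorDataset(inputs, targets)

def make_model():
    # BatchNorm3d is the batch-size-sensitive piece: in training mode its
    # statistics are computed over the current batch (one sample when
    # batchSz = 1), so changing the batch size changes the normalization.
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.BatchNorm3d(8), nn.ReLU(),
        nn.Conv3d(8, 4, 3, padding=1),
    )

base_lr, base_batch = 2.5e-4, 1  # illustrative base values

for batch_size in (1, 8, 32):
    model = make_model()
    # Linear LR scaling is one common adjustment when growing the batch;
    # without it, larger batches simply take fewer steps per epoch.
    lr = base_lr * batch_size / base_batch
    optimizer = torch.optim.RMSprop(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for x, y in loader:  # one toy epoch per setting
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"batch={batch_size:2d}  lr={lr:.1e}  last-batch loss={loss.item():.4f}")
```

If you applied any learning-rate scaling or normalization changes when varying batchSz, that detail would be very helpful too.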
Thanks.