Hi,
I am working on a different dataset than the one used in this implementation: RWTH-Boston-104. The videos are grayscale, but the skeletal pose renders from OpenPose are in RGB, so each frame has shape (312, 336, 3). This causes errors when I feed the frames into the suggested 3DposeEstimator. Could you please suggest a preprocessing step to prepare my frames as input to the 3DposeEstimator? I am very new to deep learning and computer vision, and any help would be greatly appreciated. Looking forward to your reply.
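For context, here is a minimal sketch of the kind of channel conversion I am asking about. I am not sure whether the estimator expects single-channel or 3-channel input, so both directions are shown (plain NumPy; the shapes are from my dataset, and the frames here are only zero-filled placeholders):

```python
import numpy as np

# Placeholder for an RGB pose render from my dataset: (H, W, 3).
frame_rgb = np.zeros((312, 336, 3), dtype=np.uint8)

# If the estimator expects a single-channel image, collapse RGB to
# grayscale with the standard ITU-R BT.601 luminance weights.
gray = (0.299 * frame_rgb[..., 0]
        + 0.587 * frame_rgb[..., 1]
        + 0.114 * frame_rgb[..., 2]).astype(np.uint8)
assert gray.shape == (312, 336)

# Conversely, if a grayscale video frame must be fed to a model that
# expects 3 channels, replicate it across a new channel axis.
gray_frame = np.zeros((312, 336), dtype=np.uint8)
rgb_from_gray = np.repeat(gray_frame[..., None], 3, axis=-1)
assert rgb_from_gray.shape == (312, 336, 3)
```

Is either of these the right direction, or does the estimator need some other input layout entirely?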
Thanks, Divya C