idiap / residual_pose

Residual Pose: A Decoupled Approach for Depth-based 3D Human Pose Estimation
GNU General Public License v3.0
32 stars · 6 forks

Test model with custom data #2

Open yicheng6o6 opened 2 years ago

yicheng6o6 commented 2 years ago

Hi, Thank you for the great work.

I have depth images and I'm wondering how to convert them into .npy or .mat files. I can use some tools to transform depth images into .mat files, but when I test the model with these data, I get `KeyError: 'depth'`.

I would be grateful if you could give me some suggestions to achieve it.

Best wishes, Yicheng

af-doom commented 2 years ago

> I have depth images and I'm wondering how to convert them into .npy or .mat files. When I test the model with these data, I get `KeyError: 'depth'`.

Hello, I also have this question.

legan78 commented 2 years ago

Hi,

The load-image utils expect a dictionary when the file format is .mat, since a .mat file also stores data in dictionary style. You can create the dictionary with `img_dict = {'depth': my_depth_image_array}` and save it with scipy (.mat). See here how the different image formats are loaded.
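A minimal sketch of that round trip with `scipy.io` (the array `my_depth_image_array` here is random data standing in for a real depth image):

```python
import numpy as np
from scipy.io import savemat, loadmat

# Stand-in for a real depth image: a 240x320 float32 array
my_depth_image_array = np.random.rand(240, 320).astype(np.float32)

# The loader looks up the 'depth' key, so store the array under that name
img_dict = {'depth': my_depth_image_array}
savemat('depth_sample.mat', img_dict)

# Round trip: loadmat returns a dict that again contains the 'depth' key
loaded = loadmat('depth_sample.mat')
print(loaded['depth'].shape)  # (240, 320)
```

Saving the array without the `'depth'` key is exactly what produces the `KeyError: 'depth'` above.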

Hope it helps

af-doom commented 2 years ago

> The load-image utils expect a dictionary when the file format is .mat. You can create the dictionary with `img_dict = {'depth': my_depth_image_array}` and save it with scipy (.mat).

Thank you very much for your reply, but I have a new problem. I used this command:

```
python main.py --config_file config/itop_config_file.json \
    --image_sample img_samples/itop/tset.png \
    --output_path output_dir
```

```
Traceback (most recent call last):
  File "main.py", line 209, in <module>
    detections, canvas2d = detect_2d_pose(img_depth, hg_model, config)
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 49, in decorate_no_grad
    return func(*args, **kwargs)
  File "main.py", line 104, in detect_2d_pose
    output, feats = hg_model(img_inputs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/MyFiles/residual_pose-master/HourGlass.py", line 268, in forward
    in_features = self.front(x)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 100, in forward
    input = module(input)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 345, in forward
    return self.conv2d_forward(input, self.weight)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight 64 1 7 7, but got 5-dimensional input of size [1, 1, 256, 320, 3] instead
```

legan78 commented 2 years ago

Hi, the input depth image is expected to have shape [1, 1, height, width] (a 4-D batch/channel/rows/cols tensor, as conv2d expects).
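A minimal numpy sketch of that reshape, assuming the image was loaded with three duplicated channels as in the [1, 1, 256, 320, 3] traceback above (the random array is a stand-in for a real depth map):

```python
import numpy as np

# Stand-in for a depth image loaded as (H, W, 3), e.g. a PNG read in color mode
img_depth = np.random.rand(256, 320, 3).astype(np.float32)

# Keep a single channel (assuming all three channels carry the same depth values)
if img_depth.ndim == 3:
    img_depth = img_depth[:, :, 0]

# Add batch and channel axes: [H, W] -> [1, 1, H, W]
img_depth = img_depth[np.newaxis, np.newaxis, :, :]
print(img_depth.shape)  # (1, 1, 256, 320)
```

The resulting array can then be wrapped with `torch.from_numpy` before being fed to the model.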

yicheng6o6 commented 2 years ago

Hi, @af-doom, @legan78, thank you for your reply:)

I tested my depth images and got this error:

```
Traceback (most recent call last):
  File "main.py", line 204, in <module>
    img_color = cv2.cvtColor(img_color, cv2.COLOR_GRAY2BGR)
cv2.error: OpenCV(4.5.3) /tmp/pip-req-build-xw6jtoah/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<1>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0, 2, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::NONE; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 3
```

Besides, I'm not sure how to convert my depth images into [1, 1, height, width]. May I have your suggestions to fix these problems? Any help is much appreciated!

yicheng6o6 commented 2 years ago

Hi, @legan78, now I can test my own depth images.

One of the results looks like the following figure:

(figure: canvas_3d_regressed)

And I used this command to test:

```
python main.py --config_file config/itop_config_file.json --image_sample /home/lk3696/residual_pose/depth85.npy --output_path output_dir5
```

My depth images are 320x240, and I converted the .png files into .npy.

May I have your suggestions to get better results? Any help is much appreciated!