facebookresearch / detectron2

Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
https://detectron2.readthedocs.io/en/latest/
Apache License 2.0

How to get coordinates of body parts from DensePose? #165

Open talatccan opened 5 years ago

talatccan commented 5 years ago

Hi,

I ran the trained model on my input image and it generated an output image with the following command:

!python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml model_final_dd99d2.pkl image.png dp_contour,bbox -v

My question is: how can I get the coordinates of body parts? For example, hand or face coordinates?

tfederico commented 5 years ago

It would be amazing if you could include VideoPose3D in Detectron2

vkhalidov commented 5 years ago

@talatccan, the best way would be to dump those using python apply_net.py dump ... and then use the resulting dump file to extract the IUV coordinates from DensePoseResult instances (using DensePoseResult.decode_png_data). This will give you arrays of shape (3, H, W) of type uint8, where the first plane corresponds to the I component (i.e. the body part label), the second plane contains scaled U coordinates, and the third plane contains scaled V coordinates. You can check how this is done in the visualization code, for example.
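
For reference, a minimal sketch of that flow (the entry field names and the DensePoseResult import path below follow the dump format from around the time of this thread and may differ in other revisions):

import pickle
import sys

# the densepose package lives under the DensePose project, not core detectron2
sys.path.append("/path/to/detectron2/projects/DensePose")
from densepose.structures import DensePoseResult  # import path has moved between revisions

with open("dump.pkl", "rb") as f:
    data = pickle.load(f)

entry = data[0]  # one entry per input image
# each detected instance is stored as (shape, PNG-encoded bytes)
result_encoded = entry["pred_densepose"].results[0]
iuv_arr = DensePoseResult.decode_png_data(*result_encoded)

i_plane = iuv_arr[0, :, :]  # body part labels: 0 = background, 1..24 = parts
u_plane = iuv_arr[1, :, :]  # scaled U coordinates (uint8)
v_plane = iuv_arr[2, :, :]  # scaled V coordinates (uint8)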

talatccan commented 5 years ago

Thank you for the reply.

I got output.pkl and loaded it with pickle.load. I get the following result:

[{'file_name': 'image.png', 'image': array([[[255, 255, 255],
          [255, 255, 255],
          ...,
          [255, 255, 255],
          [255, 255, 255]],

         ...,

         [[255, 255, 255],
          [255, 255, 255],
          ...,
          [255, 255, 255],
          [255, 255, 255]]], dtype=uint8), 'instances': Instances(num_instances=1, image_height=1152, image_width=2048, fields=[pred_boxes = Boxes(tensor([[ 681.7524,   54.6285, 1326.3542, 1116.3512]], device='cuda:0')), scores = tensor([0.9995], device='cuda:0'), pred_classes = tensor([0], device='cuda:0'), pred_densepose = DensePoseOutput S [1, 15, 56, 56], I [1, 25, 56, 56], U [1, 25, 56, 56], V [1, 25, 56, 56], ])}]

I just couldn't figure out how to extract the IUV coordinates using DensePoseResult.decode_png_data. What should I pass to decode_png_data as input from the pickle?

vkhalidov commented 5 years ago

@talatccan could you please update from master? There was a recent commit that introduced more convenient and space-efficient dumping. With that commit you'll be able to follow the guidelines I gave previously.

talatccan commented 5 years ago

I've updated the files and I get the following error when loading the pickle. I don't understand what the relation is between detectron2 and pickle.load.

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-21-060c07eb67ee> in <module>()
      1 import pickle
      2 with open('/content/drive/My Drive/Image_Processing/GAN/detectron2/projects/DensePose/dump.pkl', 'rb') as f:
----> 3   test = pickle.load(f)

1 frames
/content/drive/My Drive/Image_Processing/GAN/detectron2/projects/DensePose/densepose/dataset.py in <module>()
      2 import os
      3 
----> 4 from detectron2.data import DatasetCatalog, MetadataCatalog
      5 from detectron2.data.datasets import load_coco_json
      6 

ModuleNotFoundError: No module named 'detectron2'

shapovalov commented 5 years ago

@talatccan I think the new pickle contains custom data types, hence it tries to import detectron2. Did you install detectron2 in your environment according to https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md ?

@vkhalidov I have a related problem when trying to load the pickle. It wants to import the module densepose, which cannot be installed as far as I can see. What is the intended usage? Hacking the PYTHONPATH? Thanks.

talatccan commented 5 years ago

> @talatccan I think the new pickle contains custom data types, hence it tries to import detectron2. Did you install detectron2 in your environment according to https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md ?
>
> @vkhalidov I have a related problem when trying to load the pickle. It wants to import the module densepose, which cannot be installed as far as I can see. What is the intended usage? Hacking the PYTHONPATH? Thanks.

Yes, I did a clean install of detectron2 but I still get the same error.

vkhalidov commented 5 years ago

DensePose results are saved as instances of DensePoseResult, hence the dependency on detectron2. I think I'll change this behavior to avoid problems with Python paths and mimic the instances_to_json method which is used to dump inference results in COCO evaluation. DensePoseResult is just a set of strings anyway, so the dependency is not required.

carlossawyerr commented 5 years ago

Did anyone figure this out? How does one match body parts to data in the IUV array?

frankkim1108 commented 4 years ago

Does anyone know how to extract 3D coordinates when you dump from apply_net.py?

python apply_net.py dump configs/densepose_rcnn_R_50_FPN_s1x.yaml DensePose_ResNet50_FPN_s1x-e2e.pkl "Jongwook_F2.jpg" --output results.pkl -v

I opened the results.pkl file and found a DensePoseResult class. However, I have no idea how to get the coordinates from this class. Does anyone have any example code?

ghost commented 4 years ago

@talatccan @carlossawyerr @frankkim1108 I reimplemented the functions of IUV extraction from an image and IUV projection onto the SMPL model, and it works well in Detectron2. Check this out.

ghost commented 4 years ago

@shapovalov You're right. The result.pkl contains custom data types (e.g. DensePoseResult), thus it will import densepose, which lives in a sub-subfolder of detectron2_repo (projects/DensePose). To address this issue, you can add it to the path in your Python code, like this:

import sys
sys.path.append("/content/detectron2_repo/projects/DensePose/")

This should work. And check this out.

frankkim1108 commented 4 years ago

@linjunyu Thank you so much... I read your tutorial and it is awesome!!

frankkim1108 commented 4 years ago

@linjunyu I was reading your guide and I had trouble with a few things:

  1. Do I have to use google.colab? I had errors with drive.mount('/content/drive'):

    from google.colab import drive
    drive.mount('/content/drive')

  2. My second problem is with UV_Processed.mat and UV_symmetry_transforms.mat. What are these files? Do they come out as a result? I can't find them:

    ALP_UV = loadmat('/content/drive/My Drive/UV_Processed.mat')  # Use your own path
    UV_symmetry_filename = '/content/drive/My Drive/UV_symmetry_transforms.mat'  # Use your own path

  3. My third problem is that you've used SMPL, but their website only gives download files for SMPL for Python 2.7. Where can I get SMPL for Python 3.7?

Thank you for your time

ghost commented 4 years ago
  1. If you have configured the Detectron2 environment on your own computer, you can ignore this Google Drive mount code.
  2. The two mat files have been updated and uploaded to my repository.
  3. I use SMPL in Python 3.6, and I think SMPL will work fine in Python 3.7 as well. The only thing you should notice is that the pickle lib differs between Python 2 and Python 3. The code for loading the SMPL pickle file (saved by Python 2 pickle) in Python 3 should look like this:

    import pickle

    with open('/content/drive/My Drive/smpl2/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl', 'rb') as f:
        u = pickle._Unpickler(f)
        u.encoding = 'latin1'  # decode Python 2 byte strings as latin1
        data = u.load()

    Vertices = data['v_template']  # loaded vertices of shape (6890, 3)
    X, Y, Z = Vertices[:, 0], Vertices[:, 1], Vertices[:, 2]

    Note that I use _Unpickler to load the pkl file (saved by the Python 2 pickle lib), set the encoding, and then call u.load() to obtain the data in Python 3. This should work.

carlossawyerr commented 4 years ago

That's really awesome! Thanks a lot, Jerry.

frankkim1108 commented 4 years ago

@linjunyu Thank you for updating your repository. I have a question about SMPL. Where did you get the SMPL file for Python 3? The official SMPL website only provides the SMPL file for Python 2.

Where can I get this file? basicmodel_m_lbs_10_207_0_v1.0.0.pkl

ghost commented 4 years ago

@frankkim1108 This file basicmodel_m_lbs_10_207_0_v1.0.0.pkl is the pickle file downloaded from the official SMPL website for Python 2.7. Just go to the website, sign up and download it. Then implement the code I mentioned above to load SMPL.

frankkim1108 commented 4 years ago

@linjunyu It finally works. Do you have any info about getting the coordinates of the human contour? I want to get the coordinates of the contour. Are there any functions in detectron2 or densepose where I can get the coordinates?

carlossawyerr commented 4 years ago

I have a similar request, in that I'm attempting to capture features like arm length, shoulder width, etc.

ghost commented 4 years ago

This is beyond the scope of this issue, but I can provide some tricks. @frankkim1108 Since you can get the INDS array, just assign all the human parts to 1 and the background to 0, then use cv2.findContours() from the cv2 lib to get the contour (see the sketch below). @carlossawyerr One possible solution is to perform skeleton-based human parsing and use the skeleton length as the length of each human part.
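
A minimal sketch of that masking trick, assuming iuv_arr is the (3, H, W) uint8 array decoded from DensePoseResult earlier in this thread (note that cv2.findContours returns two values in OpenCV 4 and three in OpenCV 3):

import cv2
import numpy as np

# placeholder; in practice use the array decoded via DensePoseResult.decode_png_data
iuv_arr = np.zeros((3, 256, 256), dtype=np.uint8)

inds = iuv_arr[0, :, :]              # part labels: 0 = background, 1..24 = body parts
mask = (inds >= 1).astype(np.uint8)  # whole person = 1, background = 0

# OpenCV 4 signature; OpenCV 3 returns (image, contours, hierarchy)
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)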

frankkim1108 commented 4 years ago

@linjunyu Thank you for your response. What do you mean by assigning the coordinates? From your example, I understood that we can get the INDS array. However, I have trouble understanding assigning the coordinates 1 and 0. How do we know, just by looking at the coordinates, whether they belong to the human part or the background?

ghost commented 4 years ago

@frankkim1108 Use this code:

C = np.where(INDS >= 1)

This gives you the human coordinates (the human is separated into 24 parts by DensePose, so the coordinates with labels 1-24 are extracted), while

C = np.where(INDS == 0)

gives the background coordinates.

frankkim1108 commented 4 years ago

@linjunyu Thank you so much for helping me out for the past few days. I appreciate it a lot!!!

frankkim1108 commented 4 years ago

@linjunyu Hey, I came across a challenge to measure body sizes such as chest girth or hip girth. Would it be possible to get the girth from the 3D model? Or is there a better way to measure body sizes? Do you have any advice?

frankkim1108 commented 4 years ago

@carlossawyerr Hi, did you find any solutions to get body measurements?

ghost commented 4 years ago

@frankkim1108 Hi, sorry for the late reply. I cannot give you a detailed method, but a combination of camera calibration for real-world distance measurement and personalized human modeling for relative human part distance measurement is needed. It seems difficult for DensePose to model a personalized 3D human.

no-1ne commented 4 years ago

One way to measure it would be to ask the user to place a standard-size item, like a 1-litre bottle, near their feet. Since the standard bottle size is known, we can then extrapolate the dimensions of body parts in the real world.

frankkim1108 commented 4 years ago

@startupgurukul Thank you for your response. Do you have any ideas on how to measure chest girth or shoulder girth?

no-1ne commented 4 years ago

@frankkim1108 Once the pixel-to-real-world mapping is obtained, you can ask users to stretch their arms and turn around in front of the camera (with the bottle between their legs). Once you have the video, get the contours (the dp_contour parameter to apply_net.py), like in the image below, calculate the distance between the pixels of interest, and multiply by the real-world mapping obtained thanks to the object between the legs whose dimensions are already known (for example, a standard 1-litre water bottle). A sketch of the arithmetic follows the image below.

[image: DensePose dp_contour output]
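
A sketch of that scaling arithmetic (the bottle height and all pixel measurements below are made-up numbers for illustration):

# known real-world reference: assume a standard 1-litre bottle is ~30 cm tall
bottle_real_cm = 30.0
bottle_px = 240.0  # measured height of the bottle in the image, in pixels
cm_per_px = bottle_real_cm / bottle_px  # 0.125 cm per pixel

# pixel distance between two contour points of interest, e.g. shoulder to shoulder
shoulder_px = 410.0
print(f"estimated shoulder width: {shoulder_px * cm_per_px:.1f} cm")  # ~51 cm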

no-1ne commented 4 years ago

More thoughts on a better user experience, without needing a bottle for reference, just the user.

You can ask the user to first measure their thumb or one of their fingers against a fixed-scale image (irrespective of screen size, 10 cm should be 10 cm) and ask them to input the number, which can be used as a real-world reference.

Then, instead of video, you can ask the user to give you two images, like the front view and the side view.

Then you can use detectron2 DensePose contours to achieve whatever you are looking to achieve.

All the best and good luck from India :)


frankkim1108 commented 4 years ago

@startupgurukul Thanks for the detailed explanation. I have pulled out the coordinates successfully, but how do I know which ones to use? For example, if I wanted to measure the shoulder width, how do I know which coordinates are shoulder coordinates? Thank you for your time!!

no-1ne commented 4 years ago

If you look in the repo, there is an animated image where, in one panel, there is a stick-like figure, i.e. the pose; it gives the coordinates of keypoints like shoulders and wrists. Please see if that helps (see the sketch after the image below).

[image: animated pose/keypoints output]
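
A minimal sketch of getting those keypoints with detectron2's keypoint R-CNN from the model zoo (the config choice and score threshold are illustrative, not from this thread):

import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Keypoints/keypoint_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7
predictor = DefaultPredictor(cfg)

outputs = predictor(cv2.imread("image.png"))
kpts = outputs["instances"].pred_keypoints  # (num_instances, 17, 3): x, y, score
# in COCO keypoint order, indices 5 and 6 are the left and right shoulders
left_shoulder, right_shoulder = kpts[0, 5], kpts[0, 6]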

mathpopo commented 4 years ago

@linjunyu https://github.com/linjunyu/Detectron2-Densepose-IUV2XYZ Thank you very much for your IUV->XYZ work; I put it in Detectron2 and it works well, great job. But I have an issue: it only runs with matplotlib 2.2.2; with 3.2 it doesn't work:

matplotlib Navigation Bar error 'FigureCanvasTkAgg' object has no attribute 'manager' in tkinter

https://stackoverflow.com/questions/56450918/matplotlib-navigation-bar-error-figurecanvastkagg-object-has-no-attribute-man


araufdogan commented 4 years ago

@frankkim1108 @linjunyu Hello, how did you find contours after using np.where(INDS >= 1)?

INDS = iuv_arr[0, :, :]
C = np.where(INDS >= 1)
cv2.findContours(C, cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE)

The findContours line gives the error TypeError: image is not a numerical tuple. How can I convert the C variable for use with cv2?

Solution:

# build a binary mask directly from INDS (person = 1, background = 0)
mask = np.ascontiguousarray((INDS >= 1).astype("uint8"))
# OpenCV 3 returns (image, contours, hierarchy); in OpenCV 4, drop the leading value
_, contours, _ = cv2.findContours(mask, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

mathuse commented 3 years ago

> @talatccan @carlossawyerr @frankkim1108 I reimplemented the functions of IUV extraction from an image and IUV projection onto the SMPL model, and it works well in Detectron2. Check this out.

How do I get the file densepose_rcnn_R_50_FPN_s1x.pkl?

mathuse commented 3 years ago

> Hi,
>
> I ran the trained model on my input image and it generated an output image with the following command:
>
> !python apply_net.py show configs/densepose_rcnn_R_50_FPN_s1x.yaml model_final_dd99d2.pkl image.png dp_contour,bbox -v
>
> My question is: how can I get the coordinates of body parts? For example, hand or face coordinates?

Can you tell me how to get the file densepose_rcnn_R_50_FPN_s1x.pkl?