leoxiaobin / deep-high-resolution-net.pytorch

The project is an official implementation of our CVPR2019 paper "Deep High-Resolution Representation Learning for Human Pose Estimation"
https://jingdongwang2017.github.io/Projects/HRNet/PoseEstimation.html
MIT License

about data #68

Open mrzhangzizhen123 opened 5 years ago

mrzhangzizhen123 commented 5 years ago

Can this code be used with other data, for example medical images? Please give some specific suggestions. Thank you.

frankite commented 5 years ago

I have the same question. Have you solved it? Thanks!

gireek commented 5 years ago

I have the same question. Please share how you got keypoints on your own data.

chaurasiat commented 5 years ago

@mrzhangzizhen123, I have the same question. Have you solved it?

wanghao14 commented 5 years ago

Hi, I participated in an ICCV 2019 workshop & challenge and used this code for the tiger pose estimation task. After making some minor modifications to the original code, I took 2nd place on the final leaderboard. The modified code has been published, and I hope it will help you apply the HRNet code to your own data.

welleast commented 5 years ago

Big congratulations to wanghao14!

wanghao14 commented 5 years ago

@welleast Thanks for your encouragement. The result depends entirely on the robustness and state-of-the-art performance of your great work on pose estimation.

ZP-Guo commented 5 years ago

Here is how I train HRNet with my own dataset. Maybe some steps are not clear enough, but I think you can use them as a reference. I will be pleased if they help.

  1. Convert your dataset to the COCO format, and obtain a bounding box for every person in your images.
  2. mkdir ./data/xxx/annotations, ./data/xxx/images, ./data/xxx/person_detection_results and put your data into these directories, following the layout of ./data/coco.
  3. Copy ./lib/dataset/coco.py to ./lib/dataset/xxx.py.
  4. Modify def image_path_from_index(self, index) in ./lib/dataset/xxx.py according to the format of your image paths.
  5. Copy ./experiments/coco to ./experiments/xxx.
  6. Modify, for example, ./experiments/xxx/hrnet/w32_256x192_adam_lr1e-3.yaml: set DATASET.DATASET: 'xxx', DATASET.ROOT: './data/xxx', DATASET.TEST_SET: 'val' (if you need it), DATASET.TRAIN_SET: 'train' (if you need it), and TEST.COCO_BBOX_FILE: './data/xxx/person_detection_results/xxx_detections_person.json'.
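Step 1 above (converting annotations to the COCO format) can be sketched as below. This is a minimal illustration, not part of the HRNet codebase: the function name, the record layout, and the three keypoint names are made up for the example, and a real "person_keypoints"-style file would use your dataset's full keypoint list and skeleton.

```python
import json

def make_coco_style_dataset(samples, keypoint_names):
    """Convert simple per-image records into a COCO-style annotation dict.

    Each sample is assumed (for this sketch) to be a dict with:
    file_name, width, height, bbox [x, y, w, h], and a list of
    (x, y, visibility) keypoint tuples.
    """
    images, annotations = [], []
    for ann_id, s in enumerate(samples, start=1):
        img_id = ann_id  # one annotated person per image in this sketch
        images.append({
            "id": img_id,
            "file_name": s["file_name"],
            "width": s["width"],
            "height": s["height"],
        })
        # COCO stores keypoints flat as [x1, y1, v1, x2, y2, v2, ...],
        # where v is the visibility flag (0: not labeled,
        # 1: labeled but occluded, 2: labeled and visible).
        flat_kpts = [c for (x, y, v) in s["keypoints"] for c in (x, y, v)]
        x, y, w, h = s["bbox"]  # COCO bbox convention: [x, y, width, height]
        annotations.append({
            "id": ann_id,
            "image_id": img_id,
            "category_id": 1,
            "bbox": [x, y, w, h],
            "area": w * h,
            "iscrowd": 0,
            "keypoints": flat_kpts,
            "num_keypoints": sum(1 for (_, _, v) in s["keypoints"] if v > 0),
        })
    return {
        "images": images,
        "annotations": annotations,
        "categories": [{
            "id": 1,
            "name": "person",
            "keypoints": keypoint_names,
            "skeleton": [],  # fill in joint connections for your data
        }],
    }

# Hypothetical record with three illustrative keypoints.
sample = {
    "file_name": "000001.jpg",
    "width": 640, "height": 480,
    "bbox": [100.0, 50.0, 200.0, 300.0],
    "keypoints": [(150.0, 80.0, 2), (160.0, 120.0, 2), (0.0, 0.0, 0)],
}
dataset = make_coco_style_dataset([sample], ["head", "neck", "tail"])
with open("train.json", "w") as f:
    json.dump(dataset, f)
```

The resulting JSON would go under ./data/xxx/annotations from step 2, so that the pycocotools loader used by ./lib/dataset/xxx.py can read it.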
Gokulnath31 commented 4 years ago

What tool did you use to create your own dataset?
