liruilong940607 / Pose2Seg

Code for the paper "Pose2Seg: Detection Free Human Instance Segmentation" @ CVPR2019.
http://www.liruilong.cn/projects/pose2seg/index.html
MIT License

How to run the test on a single image? #7

Closed wine3603 closed 5 years ago

wine3603 commented 5 years ago

Hi, I installed Pose2Seg and it runs perfectly on your OCHuman dataset. How can I test it on a single image using your trained model?

By the way, I cannot open the link to visualize_cluster.ipynb. Thanks.

wine3603 commented 5 years ago

Hello, I want to directly apply your trained model to segment my own data. I only have images of occluded human bodies; is that okay? test.py requires dataset annotations as input. If I just want to apply your algorithm for online segmentation of a single image or a video, how should I do that? Can it be applied to local images or a webcam, like the API that openpose provides?

liruilong940607 commented 5 years ago

Thanks for your interest.

If you don't want to go over the code, the fastest way is to generate a JSON file in the same format as the COCO data JSON and use test.py to test it.

Note that our method takes both the image and keypoints as input, so human keypoints should be contained in the JSON file. You may need to first detect the person keypoints using other methods, such as the GitHub repos [openpose], [pose-ae-train], [alphapose], etc.
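For reference, here is a minimal sketch of such a COCO-style JSON file. Field names follow the standard COCO person-keypoints layout; the exact fields test.py reads may differ slightly, so treat this as a template rather than a spec:

```python
import json

# Minimal COCO-style annotation file for a single image with one person.
coco_like = {
    "images": [
        {"id": 1, "file_name": "my_image.jpg", "height": 480, "width": 640}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,  # person
            # 17 COCO keypoints, each as (x, y, visibility): 51 numbers total.
            # These would come from an external detector such as openpose.
            "keypoints": [0, 0, 0] * 17,
            "num_keypoints": 0,
            "iscrowd": 0,
        }
    ],
    "categories": [
        {"id": 1, "name": "person",
         "keypoints": ["nose", "left_eye", "right_eye", "left_ear", "right_ear",
                       "left_shoulder", "right_shoulder", "left_elbow",
                       "right_elbow", "left_wrist", "right_wrist", "left_hip",
                       "right_hip", "left_knee", "right_knee", "left_ankle",
                       "right_ankle"]}
    ],
}

with open("my_annotations.json", "w") as f:
    json.dump(coco_like, f)
```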

taoshiqian commented 5 years ago

> Thanks for your interest.
>
> If you don't want to go over the code, the fastest way is to generate a JSON file in the same format as the COCO data JSON and use test.py to test it.
>
> Note that our method takes both the image and keypoints as input, so human keypoints should be contained in the JSON file. You may need to first detect the person keypoints using other methods, such as the GitHub repos [openpose], [pose-ae-train], [alphapose], etc.

Hello, I have two questions; thank you in advance for your time.

1. model.forward requires three arguments (batchimgs, batchkpts, batchmasks=None). If I don't pass batchmasks, I get an error. How should I fix it?

   for i, (matrix, kpts, masks) in enumerate(zip(self.inputMatrixs, self.batchkpts, self.batchmasks)): TypeError: zip argument #3 must support iteration

2. Can adding mask information make the model perform better? If so, would it work to pass masks generated by Mask R-CNN as that argument?

liruilong940607 commented 5 years ago

It's a bug in the code; the mask is actually never needed in that function.

I just updated the repo.
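For anyone hitting the same error on an older checkout, here is a hypothetical sketch of the failure and the kind of guard involved; the actual fix in the repo may differ:

```python
# zip() raises "TypeError: zip argument #3 must support iteration" when
# batchmasks is None, so one guard is to substitute None placeholders.
def forward_sketch(batchimgs, batchkpts, batchmasks=None):
    if batchmasks is None:
        batchmasks = [None] * len(batchimgs)
    outputs = []
    for img, kpts, mask in zip(batchimgs, batchkpts, batchmasks):
        # Stand-in for the real per-instance work done inside the model.
        outputs.append((img, kpts, mask))
    return outputs
```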

taoshiqian commented 5 years ago

> It's a bug in the code; the mask is actually never needed in that function.
>
> I just updated the repo.

Thank you very much.

wine3603 commented 5 years ago

> Thanks for your interest.
>
> If you don't want to go over the code, the fastest way is to generate a JSON file in the same format as the COCO data JSON and use test.py to test it.
>
> Note that our method takes both the image and keypoints as input, so human keypoints should be contained in the JSON file. You may need to first detect the person keypoints using other methods, such as the GitHub repos [openpose], [pose-ae-train], [alphapose], etc.

Thanks for the reply. So at test time, the pose must also be supplied together with the image? I had understood the pose to be an input only when training the model, and that at inference time only the image would be needed; it seems I misunderstood.

liruilong940607 commented 5 years ago

> Thanks for your interest. If you don't want to go over the code, the fastest way is to generate a JSON file in the same format as the COCO data JSON and use test.py to test it. Note that our method takes both the image and keypoints as input, so human keypoints should be contained in the JSON file. You may need to first detect the person keypoints using other methods, such as the GitHub repos [openpose], [pose-ae-train], [alphapose], etc.

> Thanks for the reply. So at test time, the pose must also be supplied together with the image? I had understood the pose to be an input only when training the model, and that at inference time only the image would be needed; it seems I misunderstood.

Yes.
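To make the answer concrete, here is a hedged sketch of the implied pipeline: keypoints are detected first (e.g. with openpose) and then passed to the model alongside the image. The function and argument names below are illustrative assumptions, not the repo's actual API:

```python
import numpy as np

def run_pose2seg(model, image, keypoints_per_person):
    # keypoints_per_person: array of shape (num_people, 17, 3), where each
    # keypoint is (x, y, score) from an external pose detector.
    batchimgs = [image]
    batchkpts = [np.asarray(keypoints_per_person, dtype=np.float32)]
    # The model consumes both the image and the keypoints, and produces
    # one instance mask per detected person.
    return model(batchimgs, batchkpts)

# Usage with a stub "model" that just counts people per image:
dummy_model = lambda imgs, kpts: [len(k) for k in kpts]
masks_per_image = run_pose2seg(dummy_model, object(), np.zeros((2, 17, 3)))
```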