Closed wine3603 closed 5 years ago
Hello, I would like to directly apply your trained model to segment my own data. I only have images of occluded human bodies; is that okay? test.py requires the dataset annotations as input. If I just want to apply your algorithm for online segmentation of single images or video, how should I do that? Could it be applied to local images or a webcam, like the API that openpose provides?
Thanks for your interest.
If you don't want to dig into the code, the fastest way is to generate a JSON file in the same format as the COCO data JSON file, and use test.py to test it.
Note that our method takes both the image and keypoints as input, so the human keypoints must be included in the JSON file. You may need to first detect the person keypoints using some other method, such as the GitHub repos [openpose], [pose-ae-train], [alphapose], etc.
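A minimal sketch of such a JSON file, following the standard COCO keypoint annotation layout (the file name, ids, and keypoint values below are placeholders; fill in keypoints detected by e.g. openpose or alphapose):

```python
import json

# Minimal COCO-style annotation file for one custom image.
# Field names follow the COCO keypoint format; "my_image.jpg"
# and all ids/values are hypothetical placeholders.
coco_like = {
    "images": [
        {"id": 1, "file_name": "my_image.jpg", "width": 640, "height": 480}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # 17 COCO keypoints, each stored as (x, y, visibility):
            # v=0 not labeled, v=1 labeled but occluded, v=2 visible.
            # Replace these zeros with your detector's output.
            "keypoints": [0, 0, 0] * 17,
            "num_keypoints": 0,
        }
    ],
    "categories": [
        {
            "id": 1,
            "name": "person",
            "keypoints": [
                "nose", "left_eye", "right_eye", "left_ear", "right_ear",
                "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
                "left_wrist", "right_wrist", "left_hip", "right_hip",
                "left_knee", "right_knee", "left_ankle", "right_ankle",
            ],
        }
    ],
}

with open("my_test.json", "w") as f:
    json.dump(coco_like, f)
```

You can then point test.py at this file in place of the dataset annotation file; each person you want segmented needs its own entry in "annotations".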
Hello, I have two questions. Thank you in advance for your time.
It's a bug in the code; the mask is actually never needed in that function.
I just updated the repo.
Thank you very much
Thanks for the reply. So at test time, the pose must also be fed in together with the image? I had understood the pose to be an input only during training, with just the image needed at inference time. It seems I misunderstood.
Yes.
Hi, I installed pose2seg and it runs perfectly on your OCHuman dataset. How can I test it on a single image using your trained model?
BTW, I cannot open the link to visualize_cluster.ipynb. Thanks.