trops closed this issue 7 years ago.
Try installing with this script: https://github.com/IshitaTakeshi/PartsBasedDetector.jl/blob/master/deps/build.sh, but replace $project_root with your working directory.
Then run
cd deps/src/pbd
./build/src/PartsBasedDetector1 models/mat/lsp.mat <image file>
Hi Ishita, thank you for the reply! I believe the script worked. I needed to install a few missing libraries, but I think it worked; I would like to test to make sure. I am not very familiar with machine learning, .mat files, etc. I was just looking for an application that performs articulated pose estimation in order to detect joints in the human body, and this seemed like a possible way to go. Is this possible? How would I go about testing your program?
I tried compiling demo_fashionista per the instructions, and am getting the following:
octave: X11 DISPLAY environment variable not set
octave: disabling GUI features
error: package image is not installed
error: called from
    load_packages at line 53 column 11
    pkg at line 422 column 7
    demo_fashionista.m at line 1 column 1
I think training is difficult and takes a lot of time. Are you trying to show the human image with the estimated joints?
Yes, that's the goal; that would be incredible for my project. I just downloaded the person models here to test: https://github.com/wg-perception/PartsBasedDetectorModels. I ran the PersonINRIA model with your code and got results, so it works! (I just don't know what the results are, because I do not have X11 installed on my remote Linux machine.)
I would love to talk to you further about getting this working. Do you think it is possible to get the estimated joints from an image? You can contact me directly if that is easier.
Talking on this page is enough for me.
Then I think you should use the original repository, not this one. The original one can show results as images. Could you try running the original one in an environment with X11 installed?
This repository is for the Julia interface. It contains a visualizer, but the code for showing results has been removed. If you install Julia, you can show the results more easily. See the README of the Julia interface.
Are you suggesting using the wg-perception library? I get errors during compilation.
MatlabIOModel.cpp:41:24: fatal error: MatlabIO.hpp: No such file or directory
You got the results, but you don't have X11, so the results cannot be shown as images. Then what do you hope to do? Do you just want to show the images with the joints plotted?
Yes, I got the results from running the model against an image with your library. I wanted to visualize them so that I could test. I just need to know what the values mean so that I can visualize them some other way.
The goal is to take the point information and analyze that data. If I understand what the results mean, I can do it :-)
Here are some results that I received:
index: 208 x: 1281416680 y: 32605 confidence: 0.0000
index: 209 x: 1281416680 y: 32605 confidence: 0.0000
index: 210 x: 1281416696 y: 32605 confidence: 0.0000
index: 211 x: 1281416696 y: 32605 confidence: 0.0000
index: 212 x: 1281416712 y: 32605 confidence: 0.0000
index: 213 x: 1281416712 y: 32605 confidence: 0.0000
index: 214 x: 1281416728 y: 32605 confidence: 0.0000
index: 215 x: 1281416728 y: 32605 confidence: 0.0000
index: 216 x: 0 y: 0 confidence: 0.0000
index: 217 x: 0 y: 0 confidence: 0.0000
index: 218 x: 38375216 y: 0 confidence: 0.0000
index: 219 x: 38375216 y: 0 confidence: 0.0000
index: 220 x: 0 y: 0 confidence: 0.0000
index: 221 x: 4008822210 y: 4261150191 confidence: 0.0000
index: 222 x: 37797104 y: 0 confidence: 0.0000
index: 223 x: 37797104 y: 0 confidence: 0.0000
index: 224 x: 1281416808 y: 32605 confidence: 0.0000
index: 225 x: 1281416808 y: 32605 confidence: 0.0000
index: 226 x: 1281416824 y: 32605 confidence: 0.0000
index: 227 x: 1281416824 y: 32605 confidence: 0.0000
index: 228 x: 96 y: 0 confidence: 0.0000
index: 229 x: 96 y: 0 confidence: 0.0000
index: 230 x: 0 y: 2848285866 confidence: 0.0000
index: 231 x: 0 y: 2848285866 confidence: 0.0000
index: 232 x: 38055136 y: 0 confidence: 0.0000
index: 233 x: 38055136 y: 0 confidence: 0.0000
index: 234 x: 1281416888 y: 32605 confidence: 0.0000
index: 235 x: 1281416888 y: 32605 confidence: 0.0000
index: 236 x: 1281416904 y: 32605 confidence: 0.0000
index: 237 x: 1281416904 y: 32605 confidence: 0.0000
index: 238 x: 320 y: 0 confidence: 0.0000
index: 239 x: 320 y: 0 confidence: 0.0000
index: 240 x: 0 y: 1108410234 confidence: 0.0000
index: 241 x: 96 y: 0 confidence: 0.0000
index: 242 x: 38407520 y: 0 confidence: 0.0000
index: 243 x: 38407520 y: 0 confidence: 0.0000
index: 244 x: 39007568 y: 0 confidence: 0.0000
index: 245 x: 39007568 y: 0 confidence: 0.0000
index: 246 x: 37894992 y: 0 confidence: 0.0000
index: 247 x: 37894992 y: 0 confidence: 0.0000
index: 248 x: 38132320 y: 0 confidence: 0.0000
index: 249 x: 38132320 y: 0 confidence: 0.0000
index: 250 x: 38504288 y: 0 confidence: 0.0000
index: 251 x: 38504288 y: 0 confidence: 0.0000
index: 252 x: 1281417032 y: 32605 confidence: 0.0000
index: 253 x: 1281417032 y: 32605 confidence: 0.0000
x and y should be the coordinates of the joints, but it seems that invalid values were obtained. Here is mine:
$ ./build/src/PartsBasedDetector1 models/mat/lsp.mat clothing-co-parsing/photos/1094.jpg
Convolution time: 0.466193
DP min time: 0.156933
non-maxima suppression time: 0.000000
DP argmin time: 0.007650
index: 0 x: 211 y: 127 confidence: 0.3647
index: 1 x: 169 y: 338 confidence: 0.0000
index: 2 x: 211 y: 464 confidence: 0.0000
index: 3 x: 253 y: 633 confidence: 0.0000
index: 4 x: 211 y: 338 confidence: 0.0000
index: 5 x: 211 y: 464 confidence: 0.0000
index: 6 x: 253 y: 633 confidence: 0.0000
index: 7 x: 169 y: 169 confidence: 0.0000
index: 8 x: 84 y: 253 confidence: 0.0000
index: 9 x: 42 y: 253 confidence: 0.0000
index: 10 x: 253 y: 127 confidence: 0.0000
index: 11 x: 338 y: 127 confidence: 0.0000
index: 12 x: 380 y: 84 confidence: 0.0000
index: 13 x: 253 y: 84 confidence: 0.0000
I used an image from the clothing-co-parsing dataset.
Yes, yours are fantastic! Do you have a model file available in your repository? I used a model from the other library because training failed.
Thanks for the model :-) When I run it, I am getting the following:
Segmentation fault (core dumped)
When I run it I am typing
./src/PartsBasedDetector1 /path/to/lsp.mat /path/to/my/image.png
Is this correct? When I run it against PartsBasedDetector2, I get ".mat file not supported"...
On Sun, Aug 7, 2016, Ishita Takeshi wrote:
Use this one: https://github.com/IshitaTakeshi/PartsBasedDetector/blob/master/models/mat/lsp.mat
PartsBasedDetector2 does not support .mat files. Your command is correct.
Could you try with this image? https://github.com/bearpaw/clothing-co-parsing/blob/master/photos/1094.jpg
Ahhhh, very interesting! With your test image I get the following result. So does this mean the image format is important? I can put the images in a different format, that's no trouble... just curious also about dimensions, etc. This is very exciting!
Convolution time: 2.299483
DP min time: 0.595534
non-maxima suppression time: 0.000001
DP argmin time: 0.022076
index: 0 x: 211 y: 127 confidence: 0.3647
index: 1 x: 169 y: 338 confidence: 0.0000
index: 2 x: 211 y: 464 confidence: 0.0000
index: 3 x: 253 y: 633 confidence: 0.0000
index: 4 x: 211 y: 338 confidence: 0.0000
index: 5 x: 211 y: 464 confidence: 0.0000
index: 6 x: 253 y: 633 confidence: 0.0000
index: 7 x: 169 y: 169 confidence: 0.0000
index: 8 x: 84 y: 253 confidence: 0.0000
index: 9 x: 42 y: 253 confidence: 0.0000
index: 10 x: 253 y: 127 confidence: 0.0000
index: 11 x: 338 y: 127 confidence: 0.0000
index: 12 x: 380 y: 84 confidence: 0.0000
index: 13 x: 253 y: 84 confidence: 0.0000
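Since the plan is to take the point information and analyze it, the detector's text output can be pulled into a program directly. Here is a minimal sketch that parses lines of the form shown above into tuples; the exact output format is assumed from this thread, and the sample text is just the first two lines of the run above:

```python
import re

# Parse lines like "index: 0 x: 211 y: 127 confidence: 0.3647"
# into (index, x, y, confidence) tuples for later analysis.
# The line format is assumed from the detector output shown above.
LINE_RE = re.compile(
    r"index:\s*(\d+)\s+x:\s*(\d+)\s+y:\s*(\d+)\s+confidence:\s*([\d.]+)"
)

def parse_detections(text):
    joints = []
    for match in LINE_RE.finditer(text):
        idx, x, y = (int(g) for g in match.groups()[:3])
        confidence = float(match.group(4))
        joints.append((idx, x, y, confidence))
    return joints

output = """\
index: 0 x: 211 y: 127 confidence: 0.3647
index: 1 x: 169 y: 338 confidence: 0.0000
"""
print(parse_detections(output))
# → [(0, 211, 127, 0.3647), (1, 169, 338, 0.0)]
```

With the joints in a list like this, spotting garbage runs (like the huge x values above, which look like uninitialized memory) is a simple range check against the image dimensions.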
I think so
:+1:
Man, this is awesome. I'm plotting the points in Photoshop to test until I set up the database and web UI for testing... Thanks so much, and I'll keep you posted on the progress!
Do the coordinates have their origin at the bottom left? I'm getting interesting results, and am trying to plot from both the bottom left and the top left to see which is closer.
On the demo image you sent, are you getting the joints fairly accurately where they should be on the image? When I plot the points, they give strange results. Just curious what it looks like for you.
I think it is top left.
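If the origin really is the top left (x growing rightward, y growing downward, as is typical for image coordinates), then plotting in a tool whose origin is the bottom left only needs a vertical flip by the image height. A minimal sketch; the height value here is just an example, not taken from the actual test image:

```python
# Detector coordinates assume a top-left origin: x grows rightward,
# y grows downward. To plot in a tool whose origin is bottom-left,
# flip y using the image height (600 is only an example value).
def to_bottom_left(x, y, image_height):
    """Convert a top-left-origin point to a bottom-left-origin point."""
    return x, image_height - 1 - y

# e.g. the first joint from the sample output above
x, y = to_bottom_left(211, 127, 600)
print(x, y)  # → 211 472
```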
OK, then yes, I'm getting some very bad results. In the demo image you sent, the plotted points are not very near the joints. I'm curious about your results. I'm also trying to use this model (I'm not sure how to use it with your library):
http://www.comp.leeds.ac.uk/mat4saj/lsp.html
Thanks for all of your help!
Have you had decent results with that model? If these are the best results possible, I may have to figure out how to train my own models. They were way off for detecting the people in my images. I'm so glad your code is running for me; I just need to figure out why the results are off.
The image is from chictopia.com.
Actually, I got few decent results with that model. As you know, human pose estimation is a very difficult problem. I cannot explain why the results are not suitable, since I'm neither a researcher nor a developer.
Oh wait, so x and y are measured from the left and the top respectively?! I'll have to redo my results!! Maybe it works; I was using the bottom left as (0,0)... this may be the answer!
[image: figure_1] https://cloud.githubusercontent.com/assets/1955629/17596000/6ca93c5e-602a-11e6-80d6-e0b023bb05b9.png
Sorry, I thought the origin was the top left. I just noticed my mistake from your comment.
HOGFeatures.cpp:121:29: error: invalid operands to binary expression ('Size' and 'double')
resize(im, scaled, imsize * (1.0f/pow(sfactor ,(int)i)));