Hi Xiao-Hu, Thanks for getting back to me and reporting this issue! I was able to reproduce your problem. I will have a look and update this issue.
Happy day :-)
I have removed the multi-threading and it works fine, no memory problems. So I think your code may have some problems in ThreadPool.h; you had better check it. Happy day!
Thank you very much for the information, that's very useful to know.
It's good to have multi-threading in the feature extraction, as that part can be quite slow. I'll try to find a solution!
Hi Patrik,
I also receive insufficient memory exceptions from OpenCV when training a model with just the first 100 images (and corresponding annotations) of the lfpw dataset. I am using the landmark_detection sample application. I have 16GB of RAM and 10 GB are consumed by the application causing my system to run out of memory.
Any thoughts?
Just to clarify, the mean shape file format should be all x coordinates followed by all y coordinates and should be comma-delimited.
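For example, here's a minimal sketch of how I'd expect such a file to be read (the single-line layout and the function name are my assumptions, not necessarily what the repository's loader does):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical helper: reads a mean shape stored as
// "x1,x2,...,xn,y1,y2,...,yn" (all x coordinates first, then all y coordinates).
std::vector<float> read_mean_shape(const std::string& filename)
{
    std::ifstream file(filename);
    std::string line;
    std::getline(file, line); // assumes everything is on one comma-delimited line

    std::vector<float> coords; // first half: x, second half: y
    std::stringstream ss(line);
    for (std::string token; std::getline(ss, token, ','); )
        coords.push_back(std::stof(token));
    return coords;
}
```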
Thanks,
Steven
Hi Patrik, Have you resolved the insufficient memory exception problems?
@scadavid Hi Steven, sorry, I kind of overlooked your post! The number of images is not relevant to the memory consumption I think - it's mostly the number of landmarks used. As the example uses 68 landmarks, you'll naturally run out of memory - I definitely need to change the sample app.
If you're interested in training a model, have a look at the RCR in #3, and then use rcr-train.cpp as a starting point. Tell me if you want to train something - I can then send you the required configs and a more detailed explanation - they're not on Github yet.
If you just want to run the pretrained model, #3 should explain it all.
And yes, you're correct regarding the format of the mean shape.
@shangguanxiaohu: Same for you: Not yet, but we should be fine training a model with 20 to 40 landmarks. Leave me a note here and I'll upload the guide.
Hi Patrik,
Thank you for your efforts on this!
Yes, I would like to train my own model asap. When you can, please send me the required configs and a more detailed explanation. I will take a look at RCR in #3 as you indicate.
Thanks again.
Steven,
You're welcome, I'm glad to see your interest in our work. I've just added the configs to the devel branch and set up a wiki page here explaining the training process. I'm a bit busy until the 15th, but after that I'll have time to look at the two larger issues mentioned in the wiki. If you train a model with fewer than 40-45 landmarks, it will already work fine now.
Let me know how it goes. If you have questions or run into any issues with it, don't hesitate to open a new issue; let's keep this one for the memory problem.
I figured out the memory issue today. It actually does depend on the number of images and perturbations. Even though we only copy pointers around, not image data, it turns out that copying 7500 pointers into each of 7500 tasks, at 96 bytes per pointer, occupies over 5 GB (7500 × 7500 × 96 bytes ≈ 5.4 GB). I'll push a fix in the next few days.
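To make the arithmetic concrete, here is a simplified sketch of the pattern that causes it (the class name, the numbers and the use of std::async are just for illustration; it's not the actual ThreadPool.h code):

```cpp
#include <cstddef>
#include <future>
#include <vector>
#include <opencv2/core/core.hpp>

// Hypothetical feature extraction functor: one cv::Mat header (~96 bytes)
// per training image, so roughly 7500 headers in total.
struct FeatureExtraction {
    std::vector<cv::Mat> images;
    cv::Mat operator()(std::size_t idx) const { return images[idx]; } // placeholder work
};

int main() {
    FeatureExtraction extraction;
    extraction.images.resize(7500);

    std::vector<std::future<cv::Mat>> results;
    for (std::size_t i = 0; i < extraction.images.size(); ++i) {
        // Capturing 'extraction' by value copies all 7500 headers into every task:
        // 7500 tasks * 7500 headers * 96 bytes ≈ 5.4 GB of queued state.
        results.push_back(std::async(std::launch::deferred,
                                     [extraction, i] { return extraction(i); }));
    }
}
```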
@patrikhuber Thanks so much, I hope it won't crash again.
I just fixed it in the devel branch. I'm going to close this issue; if there are still problems with memory usage, we can always reopen it.
I have looked at your code and found that there was no change in ThreadPool.h, so I guess you changed the code in the feature extraction stage, didn't you?
Yep, you're right. It wasn't a problem with the thread pool; std::async had the same behaviour. After all, the state variables of the feature extraction object (in this case the images, or rather the pointers to them) have to be copied. I changed the feature extraction class to only hold a reference to the images now. (There might be other, better solutions, but I'm happy with this for now.)
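Roughly, the change looks like this (a simplified sketch; the class name is just illustrative and the real class has more members and does the actual feature extraction):

```cpp
#include <vector>
#include <opencv2/core/core.hpp>

// Hypothetical feature extraction class after the change.
class FeatureExtraction {
public:
    explicit FeatureExtraction(const std::vector<cv::Mat>& images) : images(images) {}
    // ...feature extraction working on images[idx]...
private:
    // Before: std::vector<cv::Mat> images;  -> every copy duplicated all ~7500 headers.
    // Now only a reference is stored, so copying the object into a task is cheap.
    const std::vector<cv::Mat>& images;
};
```

The trade-off is that whoever owns the image vector has to keep it alive for as long as the feature extraction object (and the tasks holding copies of it) are in use.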
Haha, I see! Thanks for the update. By the way, I want to note that "vector
Hi Patrik, thanks for the reply! I have looked at your updated code. Actually, I had already solved my problems by rewriting the code in many places, please forgive me for that. However, I have also run into a new problem: I use the Helen database from the iBug website with 2000 labelled pictures, and apply 15 scalings and 5 rotations, for a total of about 30,000 training images. I train on a server with 64 GB of RAM, but unfortunately it consumes all of it; that is, the training step needs a very large amount of memory. Can you optimize this? Please forgive my poor English. Happy every day!