nhudinh2103 closed this issue 6 years ago
Looks similar to #55. Try updating to the latest version.
Cheers, Mathieu
Hi Mathieu,
Thanks for your quick response
I have pulled the latest version and built it, but it still leaks memory.
Here is what I did:
1. Run find_object in console mode with 4 TCP threads (after startup, RAM = 10.1 GB)
./find_object --images_not_saved --console --session ./test_incremental.bin --tcp_threads 4
2. Run tcpRequest 10 times (after 10 runs, RAM = 10.8 GB, and it increases with each request)
./find_object-tcpRequest --json "out.json" --scene ./screen1.jpg --port 6000
I did try something similar by making a session of these 2 objects and setting General/port to 6000, then:
./find_object --images_not_saved --console --session /Users/mathieu/test.bin --tcp_threads 4
Running this script (calling 1000 times tcpRequest):
#!/bin/bash
# Basic while loop
counter=1
while [ $counter -le 1000 ]
do
echo $counter
./find_object-tcpRequest --json "out.json" --scene /Users/mathieu/Documents/workspace/find-object/bin/multi-scene.jpg --port 6000
((counter++))
done
echo All done
It started and finished at 12.5 MB, sometimes increasing to 15 MB or 20 MB but decreasing back to 12.5 MB. I tested on Mac OS X. You could run your test for a longer time and see if it keeps increasing. It's true that allocated/deallocated memory sometimes doesn't drop back to 0 (depending on the OS), but it should stabilize at some point. Otherwise, if you can reproduce the problem with a smaller session (my laptop has only 8 GB RAM) and can share it, it may be easier to reproduce the problem.
I just tried adding 200 more objects to the session. It started and finished at 40 MB too.
cheers, Mathieu
Hi Mathieu,
Here's some information about my environment: OS: Linux (CentOS 7), RAM: 16 GB
My find_object config: (I use SIFT detector/descriptor)
https://drive.google.com/open?id=1Ayvdaa5lh4hrcXLvX0QHSFv7z53I6Hms
Here's a dataset you can load objects from for testing (sorry, I can't share my own dataset, so I use this one instead):
https://drive.google.com/open?id=1l4CxTHH-qblPhMP6ZqxQFGHYenzY5a4d
I load objects from dataset above and run tcp request.
./find_object-tcpRequest --json "out.json" --scene ./screen1.jpg --port 6000
Request 10 times: increases 0.2 GB (7.3 GB -> 7.5 GB)
Request 20 times (after the first 10): increases 0.3 GB (7.5 GB -> 7.8 GB)
I tried with your data on Windows and Mac OS X, and the memory used by find-object stays around 200 MB (before and after 20 calls to tcpRequest). On Ubuntu, it started at 200 MB, but increased up to 895 MB after 15 calls, then remained between 895 MB and 905 MB after 15 more. I am not sure if it is just how Ubuntu reports memory usage that makes it look like not all memory is deallocated. We may need to try some memory analysis tools to verify that.
Hi matlabbe, thanks for your test and response.
I'm currently using heaptrack to detect the memory leak (valgrind is too slow and doesn't show which line of code is responsible).
Hi matlabbe,
I finally found the answer to this issue.
It's caused by a problem with the default malloc (glibc) on Linux. A workaround is to use the tcmalloc library from Google.
https://github.com/introlab/find-object/pull/62
If you have a better way, let me know.
Dinh
I merged the pull request, but I updated it so that tcmalloc dependency is not required.
Hi matlabbe,
When using find_object with many objects added (size >= 10000), tcpRequest causes a memory leak.
Here are images:
1. Start find_object with an existing session (object size = 10000)
2. After some requests with find_object-tcpRequest
Things get worse if I start find_object with tcp_threads = 4.
I have found that this block of code causes the memory leak (if I comment out the code that starts the Homography thread, find-object stops leaking):
FindObject.cpp:1587
I tried commenting out the code in the run() method of the HomographyThread class, but it still leaks, so I think there may be some problem with the thread itself:
FindObject.cpp:1196
Can you help me solve it? Thanks