cuitao2046 / gperftools

Automatically exported from code.google.com/p/gperftools
BSD 3-Clause "New" or "Revised" License

Pprof doesn't open large "reco.xxxx._main_-end.heap" file #248

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
Hi. I run my executable as "HEAPCHECK=as-is
HEAP_CHECK_IGNORE_GLOBAL_LIVE=false ./app". As a result I get a
reco.xxx._main_-end.heap file (2.1 GB in size). My application uses
about 2 GB of memory.
When I run the pprof command afterwards, it hangs and is later killed
by the system.
What can I do to solve this problem?

I'm using google-perftools 1.5 on Scientific Linux 5.5 x64.

With best wishes, Konstantin

Original issue reported on code.google.com by K.Gertse...@gmail.com on 3 Jun 2010 at 1:23
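
For context, the reported workflow in one place (a sketch: `./app` and the profile filename are taken from the report; the `pprof --text` invocation is assumed from typical gperftools usage, not stated in the report):

```shell
# Run the application with the end-of-run leak check, as reported.
HEAPCHECK=as-is HEAP_CHECK_IGNORE_GLOBAL_LIVE=false ./app

# Then analyze the dumped heap profile; this is the step that hangs here.
# (Invocation assumed from standard gperftools usage.)
pprof --text ./app "reco.xxxx._main_-end.heap"
```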

GoogleCodeExporter commented 9 years ago
Oh my! -- I never expected anyone to work with heap files that big.  Are you on
a 32-bit system or a 64-bit system?  (More to the point, are you using a
32-bit perl or a 64-bit perl?)  It may be that perl just can't deal with files
larger than 2GB, if it uses a 32-bit signed int to hold file positions and the
like.

Or it could just be that perl is making progress, but slowly.  When pprof is
running and looks hung, what happens when you attach to it with ptrace?

Original comment by csilv...@gmail.com on 3 Jun 2010 at 1:34
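
One way to check the perl build directly (a sketch; `perl -V:<symbol>` queries perl's build-time configuration):

```shell
# Ask perl how it was configured: a build that can safely handle >2GB
# files needs large-file support, and ideally 64-bit integers as well.
perl -V:use64bitint -V:uselargefiles
# On a 64-bit-safe build, both symbols are reported as ='define';
```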

GoogleCodeExporter commented 9 years ago
I'm taking part in a serious project based on the ROOT system, which is why my
macro uses 2 GB of memory. Is a 2 GB output file really necessary with
HEAPCHECK?

I'm working with Scientific Linux 5.5 x64, a 64-bit system.
"perl --version
This is perl, v5.8.8 built for x86_64-linux-thread-multi".

I believe the "open" step is just progressing very slowly. When pprof is
running and looks hung, my computer works very, very slowly, which makes it
hard to watch the ptrace command, but I'll try.

Original comment by K.Gertse...@gmail.com on 3 Jun 2010 at 1:54

GoogleCodeExporter commented 9 years ago
One thing you could try is to run your application under ptrace from the
beginning, something like
   ptrace -f <myapp>

Eventually, it will only be running the pprof code, and you can see what it's
doing.

Another possibility is that the profile is so big that pprof is using up all
your memory, and you're just swapping.  Another thing you could try is to run
'top' while pprof is hanging, to see how much memory it's using.

Original comment by csilv...@gmail.com on 3 Jun 2010 at 2:02
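
The 'top' suggestion can also be done as a one-shot snapshot (a sketch; matching on the process name `perl` is an assumption, since pprof runs as a perl script):

```shell
# One-shot listing of every perl process's virtual and resident size (KB);
# a VSZ far above physical RAM while RSS is pinned near it means swapping.
ps -o pid,vsz,rss,comm -C perl

# Or take a single batch-mode snapshot with top and filter it:
top -b -n 1 | grep perl
```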

GoogleCodeExporter commented 9 years ago
Sorry, I can't find a ptrace command on my Scientific Linux 5.5 system.
I ran top: perl's memory usage grew (over fifteen minutes) to 5700m VIRT and
1.8g RES (I have 2 GB of physical memory and 3.9 GB of swap). Then the process
was killed. Can I fix this?

Original comment by K.Gertse...@gmail.com on 7 Jun 2010 at 11:39

GoogleCodeExporter commented 9 years ago
I'm sorry, I mistyped, the command is strace, not ptrace.

But it looks like the problem is just that perl is allocating a lot of memory.
Do you know perl at all?  You could look through the source code and see where
it's allocating memory for the heap profiler, and see if it really needs to
have all that data in memory at once.  There may be places you can move over
to a streaming model.  But it's very possible that it needs all the data it
has, and you'll just need more memory to handle such a huge heap profile.

Original comment by csilv...@gmail.com on 7 Jun 2010 at 5:41
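
The streaming idea can be illustrated in shell terms: a line-by-line pass keeps memory flat no matter how large the input is, where slurping the whole file is what blows up. This sketch uses awk over a tiny stand-in file, not the real pprof parser or heap-profile format:

```shell
# Create a small stand-in "profile" whose first field is a sample count.
# (Hypothetical file; the real .heap format is different.)
printf '3 stack_a\n5 stack_b\n2 stack_a\n' > /tmp/fake_profile.txt

# Streaming aggregation: awk reads one line at a time, so memory use
# stays constant even for a multi-GB input.
awk '{ total += $1 } END { print total }' /tmp/fake_profile.txt
# prints 10
```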

GoogleCodeExporter commented 9 years ago
Did you ever have a chance to look into this?  If you're willing to attach the
heap file (does google code accept such big attachments?), and ideally the
executable too, I can try to take a look as well.  Or maybe you could put the
datafile somewhere accessible via http.

Original comment by csilv...@gmail.com on 2 Aug 2010 at 4:46

GoogleCodeExporter commented 9 years ago
Sorry, I was on vacation. I'll try to share the data via http next week.

Original comment by K.Gertse...@gmail.com on 3 Sep 2010 at 10:41

GoogleCodeExporter commented 9 years ago
Any more news on this?

Original comment by csilv...@gmail.com on 10 Jan 2011 at 3:22

GoogleCodeExporter commented 9 years ago
I'm afraid I'm going to have to close this as CannotReproduce -- I don't even
know how to make a profile file that big.  I'm sure there's a real issue here,
with perl hitting a bottleneck on large heap-profile files, but I don't know
what it is.

If you can attach the profile file, and perhaps the executable, feel free to 
reopen the bug.

Original comment by csilv...@gmail.com on 31 Aug 2011 at 11:41