jorensorilla / sparsehash

Automatically exported from code.google.com/p/sparsehash

g++ 4.4.2 goes into infinite loop compiling hashtable_test.cc #75

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. configure --prefix=`echo ~`
2. make      (hangs while compiling hashtable_test)
3. g++ -DHAVE_CONFIG_H -I. -I. -I./src -I./src -Wall -W -Wwrite-strings \
     -Woverloaded-virtual -Wshadow -g -O2 -MT hashtable_test.o -MD -MP -MF \
     ".deps/hashtable_test.Tpo" -c -o hashtable_test.o src/hashtable_test.cc

What is the expected output? What do you see instead?
Expected: normal compilation.  Instead, g++ appears to loop forever: CPU load 
climbs and the process is hard to stop with Ctrl-C.  It's possible the 
compilation would eventually have finished, but I wasn't willing to wait more 
than 5 minutes to compile a 1700-line C++ source file.

What version of the product are you using? On what operating system?
sparsehash-1.11 , on CentOS release 5.6 (Final).

Please provide any additional information below.
I'm compiling on a supercomputer cluster (www.clumeq.ca), #55 on top500.org.

Original issue reported on code.google.com by pkts...@gmail.com on 1 Sep 2011 at 11:36

GoogleCodeExporter commented 9 years ago
Yes, hashtable_test is a bear to compile.  If you turn off optimization 
(./configure CXXFLAGS=-g), it should compile much more quickly.  That's 
hopefully enough for you to verify everything is working correctly; you should 
still be able to use -O2 for your own code.

It's probably possible to split this file up, but I think the -O0 workaround 
is straightforward enough that it's the better way to go; a sketch follows.
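
A sketch of that workaround, assuming a clean source tree ("make check" is 
automake's standard target for building and running the tests):

# Configure without optimization; with no -O flag, g++ defaults to -O0.
./configure CXXFLAGS="-g -O0"
make
# Build and run the test suite, including hashtable_test.
make check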

Original comment by csilv...@gmail.com on 1 Sep 2011 at 11:52

GoogleCodeExporter commented 9 years ago
For curiosity's sake, I ran it through 'delta' (http://delta.tigris.org/) to 
try to find the minimal source file that still takes forever to compile.  It's 
347 lines long, and attached.  Deleting any further line makes the g++ compile 
finish quickly.  The test script is:

#!/bin/bash
# Delta "interestingness" test: exit 0 if the reduced file still
# triggers the runaway compile, non-zero otherwise.
FILE=$1
# Stop the compiler after 5 CPU seconds; g++ then prints
# "g++: Internal error: Killed (program cc1plus)".
ulimit -t 5
g++ -I/home/chowes/sparsehash-1.11/src -O2 -c -o hashtable_test.o "$FILE" 2>&1 | \
  grep "Internal error: Killed" && exit 0
exit 1
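
For reference, delta is driven by pointing it at that script; the file names 
below are made up, and the flag spelling is from memory (delta's README has 
the exact interface):

# delta repeatedly deletes chunks of the input, keeping a deletion
# only when the test script still exits 0 ("still interesting").
delta -test=./test.sh hashtable_test_min.cc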

I actually used delta to find the minimal command-line arguments that 
reproduce the problem as well; I guess I'm a delta junkie.  :-)

One thing you could do is change the makefile to add '-O0' for problematic 
versions of g++ (see the sketch below); perhaps a 'ulimit -t 5' as well?
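
Something like this configure-time fragment might do it (hypothetical and 
untested; the 4.4.* pattern is only a guess at which releases misbehave):

# Append -O0 when the compiler is a known-slow g++ release;
# `g++ -dumpversion` prints just the version number, e.g. "4.4.2".
case `$CXX -dumpversion` in
  4.4.*) CXXFLAGS="$CXXFLAGS -O0" ;;
esac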

Original comment by pkts...@gmail.com on 2 Sep 2011 at 6:36

Attachments:

GoogleCodeExporter commented 9 years ago
You're right, there are things I can do -- I guess I closed this bug too 
early.  An easy one would be to just use -O0 all the time (unless the user 
overrides it).  I'll do that for the next release.

I just learned about delta last week, actually!  It was fun to see it actually 
at work.

Original comment by csilv...@gmail.com on 2 Sep 2011 at 6:06

GoogleCodeExporter commented 9 years ago
It turns out that I could have done a better job running delta: I didn't move 
any code around to put matching pairs of brackets on one line, or, 
alternatively, insert newlines at every possible break point so delta could 
pare away smaller atoms.  I can't spend any more time on it, but good luck!

Original comment by pkts...@gmail.com on 2 Sep 2011 at 11:23

GoogleCodeExporter commented 9 years ago
I can't reproduce your results on my own machine (running gcc 4.4.3): the 
attached minimal file takes only 7 seconds to compile, versus almost 2 minutes 
for the full file.  I think the right solution is just to use -O0 for this 
test.

Unfortunately, automake doesn't support forcing -O0 for this test: it wants to 
let the user decide, so the user's own CXXFLAGS always gets the last word (see 
the sketch below).  I'll see if there's a practical way to work around that.
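
For context, a per-target override in Makefile.am would look like the fragment 
below, but it can't actually force anything: automake puts the user's CXXFLAGS 
after the per-target flags on the compile line, so a user-chosen -O2 still 
wins.

# Hypothetical Makefile.am fragment.  The compile rule expands
# roughly to "$(CXX) ... $(hashtable_test_CXXFLAGS) $(CXXFLAGS)",
# so the user's CXXFLAGS comes last and overrides the -O0 here.
hashtable_test_CXXFLAGS = -O0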

Original comment by csilv...@gmail.com on 8 Sep 2011 at 9:09

GoogleCodeExporter commented 9 years ago
I couldn't find one, so I just documented the workaround instead (a poor man's 
fix, I know) in the INSTALL file.  I'm counting this as "fixed" since I don't 
see a better fix that isn't really complicated.  Even if we could figure out 
the particular construct that makes the test compile so slowly on your 
machine, there's no guarantee that a future test addition wouldn't break 
things again.  And it occurs to me that people may want to test the hashtable 
with -O2, so always enforcing -O0 isn't great in any case.

Original comment by csilv...@gmail.com on 24 Oct 2011 at 10:42