Status: Closed (closed by Quuxplusone 16 years ago)
This is known to be due to a miscompilation of llvm-gcc by itself
(deduced from the fact it only happens when bootstrapping).
Known bad: LLVM (r54160) + llvm-gcc HEAD (r54312).
Known good: LLVM HEAD (r54312) + llvm-gcc (r54160+54240+54246).
This exact problem showed up recently. I thought that Mon Ping had a fix for it. At least it was fixed on Darwin...
Mon Ping observed that llvm-gcc is miscompiling itself,
and found a way of tweaking the miscompiled code so it
is not miscompiled. That just papers over the problem,
which is that llvm-gcc is miscompiling stuff, so I asked
him to revert his tweak, which he did (I was afraid the
issue would be forgotten otherwise). People can still
build a working llvm-gcc by not bootstrapping.
We need to understand whether the bug is in llvm-gcc or in LLVM proper. Does it build without the recent llvm-gcc changes merged from Apple GCC mainline?
What was the miscompiled code?
Take a look at revision 54267, which reverted the "avoid the
miscompile" tweak.
Okay. I'll take a look and see if I can come up with anything. I'm having
trouble building on the only Linux box I have access to. Maybe you can help?
Here's the error message:
In file included from /usr/include/features.h:354,
from /usr/include/stdio.h:28,
from ../../llvm-gcc.src/gcc/tsystem.h:90,
from ../../llvm-gcc.src/gcc/crtstuff.c:68:
/usr/include/gnu/stubs.h:7:27: error: gnu/stubs-32.h: No such file or directory
The system is 64-bit, AFAIK. I tried everything I knew to get llvm-gcc to build
64-bit, but to no avail.
Hi Bill,
Is it pure-64 or a mixed 64/32 system? I usually force 64-bitters to configure as 32-bit if they support that:
../src/configure --disable-shared --disable-multilib --enable-checking --enable-llvm=/usr/local/src/llvm --prefix=/usr/local/src/llvm-gcc/install --target=i686-pc-linux-gnu --host=i686-pc-linux-gnu --build=i686-pc-linux-gnu --enable-languages=c,c++
I don't know, but I think it's a pure-64 bit system. When I got it to compile for 64-bit (using --disable-multilib), I wasn't able to reproduce the error. I'll see if I can find another Linux machine to try it out on, but I'm not hopeful. If you or someone else can reduce this problem, it would help a lot.
As a sanity check, please try these two tags to see if they build for you on
Linux:
http://llvm.org/svn/llvm-project/llvm/tags/Apple/llvmCore-2060
http://llvm.org/svn/llvm-project/llvm-gcc-4.2/tags/Apple/llvmgcc42-2060
These were the last tags before the merges started going in.
Okay. I got it to fail with TOT. Here's a reduced testcase:
$ cat t.i
int bar(int *sem) {
  return !__sync_bool_compare_and_swap (sem, 1, 0);
}
Okay. It looks as if c-common.c is being miscompiled in some way. The stage1-gcc/c-common.o binary (the one built by GCC) works, but prev-gcc/c-common.o (the one built by the llvm-gcc binary) fails. It passes when compiled at -O0, but fails at -O1 and above.
This transformation in instcombine is causing the problems.
{ // (icmp ugt/ult A, C) & (icmp ugt/ult B, C) --> (icmp ugt/ult (A|B), C)
  // where C is a power of 2
  Value *A, *B;
  ConstantInt *C1, *C2;
  ICmpInst::Predicate LHSCC, RHSCC;
  if (match(&I, m_And(m_ICmp(LHSCC, m_Value(A), m_ConstantInt(C1)),
                      m_ICmp(RHSCC, m_Value(B), m_ConstantInt(C2)))))
    if (C1 == C2 && LHSCC == RHSCC && C1->getValue().isPowerOf2() &&
        (LHSCC == ICmpInst::ICMP_ULT || LHSCC == ICmpInst::ICMP_UGT)) {
      Instruction *NewOr = BinaryOperator::CreateOr(A, B);
      InsertNewInstBefore(NewOr, I);
      return new ICmpInst(LHSCC, NewOr, C1);
    }
}
I don't think that this is a valid transformation in all cases. Consider the
code that's failing:
%s1 = sub i8 %x, 6
%t1 = icmp ugt i8 %s1, 2
%s2 = sub i8 %x, 10
%t2 = icmp ugt i8 %s2, 2
...
%r1 = and i1 %t1, %t2
is transformed into this:
%s1 = sub i8 %x, 6
%t1 = icmp ugt i8 %s1, 2
%s2 = sub i8 %x, 10
%t2 = icmp ugt i8 %s2, 2
...
%r1 = or i8 %s1, %s2
... = icmp ugt i8 %r1, 2
Or, in human-readable form:
BOTH x - 6 > 2 AND x - 10 > 2
is transformed into
EITHER x - 6 > 2 OR x - 10 > 2
There's a major semantic difference between the two.
To be more concrete, let x == 11 in the equations above. Without this
transformation, we get:
x - 6 = 5 > 2 is true
x - 10 = 1 > 2 is false
result true & false = false
With the transform:
x - 6 = 5
x - 10 = 1
5 | 1 = 5 > 2
result true
Reverted this patch:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20080707/064740.html
Nick, could you please revise your patch? I'm going to attach the .bc files from c-common. The code in these files that your patch breaks is in bb41 of @resolve_overloaded_builtin.
Attached c-common.bad.ll.gz
(324161 bytes, application/octet-stream): File after instcombine is run on it.
Attached c-common.good.ll.gz
(324048 bytes, application/octet-stream): File before instcombine is run on it.
Added testcase test/Transforms/InstCombine/2008-08-05-And.ll
Thanks for tracking this down! I think it works with a|b and u<, but I'll make sure I get it right next time.
What do people think of:
// (icmp ult A, C) & (icmp ult B, C) --> (icmp (A|B), C)
// (icmp ugt A, C) | (icmp ugt B, C) --> (icmp (A|B), C)
// where C is a power of 2
?
I'm quite sure it's right, and I'm running a bootstrap plus the nightly tests to double-check before committing.
(In reply to comment #21)
> What do people think of:
>
> // (icmp ult A, C) & (icmp ult B, C) --> (icmp (A|B), C)
Looks fine to me.
> // (icmp ugt A, C) | (icmp ugt B, C) --> (icmp (A|B), C)
Take A > 2 | B > 2, and suppose A = 2 and B = 1. The original statement is
false, but (A|B) > 2 is true.
(In reply to comment #21)
> // (icmp ult A, C) & (icmp ult B, C) --> (icmp (A|B), C)
This seems logically correct, at least. :-)
> // (icmp ugt A, C) | (icmp ugt B, C) --> (icmp (A|B), C)
Eli pointed out the flaw in this one.
Hah! Okay, I give up. I'm going to commit the ult part and forget about dealing with ugt.