Closed dcharkes closed 9 months ago
Repros locally:

```shell
dart runtime/tools/dartfuzz/dartfuzz.dart --no-fp --no-ffi --flat --seed 1258834069 fuzz.dart
out/DebugX64/dart-sdk/bin/dart compile aot-snapshot fuzz.dart
```
I believe we are able to generate an indexing instruction whose displacement is outside the 32-bit bound.
cc @sstrickl
Just adding investigation notes here:
This should be coming from the following code in fuzz.dart:
```dart
Int32x4List var25 = Int32x4List(22);
...
extension fooE4 on Int64List {
  Int32x4? foo4_Extension2(Map<String, String>? par1, WeakReference<bool>? par2, int par3) {
    ... var25[(-((-33 >>> 8)))] ...
  }
}
```
and `-(-33 >>> 8) == -72057594037927935`.
The relevant code in the IL is:

```
....
v702 <- UnboxedConstant(#-72057594037927935) [-72057594037927935, -72057594037927935] int64
...
v816 <- Constant(#-1152921504606846960) [-1152921504606846960, -1152921504606846960] T{_Smi}
...
548: GenericCheckBound:14(v779 T{_Smi}, v702) [-4611686018427387904, 4611686018427387903] int64
550: v536 <- LoadIndexed(v258 T{_Int32x4List}, v816 T{_Smi}) int32x4
```
To give hex versions to make it clearer:

```
v702 = 0xff00 0000 0000 0001 == -(-33 >>> 8)
v816 = 0xf000 0000 0000 0010 == v702 << 4 == v702 * Int32x4List.elementSizeInBytes
```
Note that the `GenericCheckBound` is guaranteed to fail: since v702 is a negative 64-bit value, when it is compared using unsigned `>=` to the length of `var25`, which is guaranteed to be a positive value that fits in a Smi, the comparison will be true and cause control flow to go to the slow path for raising the appropriate error. That is, all the code dominated by the `GenericCheckBound` instruction, including the `LoadIndexed` here, is dead code and could be removed.
The primary issue here is that `Assembler::ElementAddressForIntIndex` doesn't have the ability to tell the caller "Sorry, the generated displacement can't fit in a single `Assembler::Address` value", which would then allow the caller to fix this by putting the offset in a register and using the register version instead.
Well, that's not entirely true. We could change `Assembler::ElementAddressForIntIndex` on X64 to see if the displacement doesn't fit, and if it doesn't and the array register isn't `TMP`, we could just put the offset into `TMP` and then return the result of `Address::ElementAddressForRegIndex` using `TMP` as the index register. It's a little iffy because we're possibly generating an assembler instruction in `Assembler::ElementAddressForIntIndex`, and it's also the case that the `TMP` register might have already been live across a `LoadIndexed` instruction, but eh, that's unlikely, and it's probably the simplest fix to get things working without causing too much code churn.
(Now I want a way to mark "assembler-internal temp register `X` is currently live" (where `X` is `TMP` or `TMP2`) in the `Assembler`, so that assembler (macro-)instructions that use `X` internally can throw appropriate errors if their expectation that `X` is not currently live is broken.)
Actually, other architectures like ARM already have `ElementAddressForIntIndex` do this, so never mind, this is perfectly cromulent in our current style. (It just means we have to change this from a static method to an instance method on X64, but the fact that it's an instance method on some architectures and a static method on others is already ick.)
After some more investigation, it turns out that the actual issue is that `CanBeImmediateIndex`, a local function used on all architectures to determine whether the index location for a `LoadIndexed` or `StoreIndexed` instruction should be a constant or a register, assumed that the scaling factor for the index could be calculated from the cid. However, those instructions have an `index_scale` field that can differ from the element size for the cid.
Thus the real fix is to make sure that `CanBeImmediateIndex` calculates the same displacement as `ElementAddressForIntIndex` by passing it the same field values from the instruction, so the instruction doesn't try to pass a constant location to `EmitNativeCode` that can't actually fit in the immediate field of a single instruction.
`LoadIndexedInstr::EmitNativeCode` on x64 can hit an assert.

Full log: https://logs.chromium.org/logs/dart/buildbucket/cr-buildbucket/8760042587664820481/+/u/collect_shards/make_a_fuzz_shard_12/task_stdout_stderr:_make_a_fuzz_shard_12