JuliaGPU / GPUCompiler.jl

Reusable compiler infrastructure for Julia GPU backends.

1.11 error AssertionError: Base.isdispatchtuple #635

Open wsmoses opened 3 weeks ago

wsmoses commented 3 weeks ago
(f, tt, world, typeof(f)) = (Enzyme.Compiler.add_one_in_place, Tuple{Any}, 0x000000000000683a, typeof(Enzyme.Compiler.add_one_in_place))

  Stacktrace:
    [1] methodinstance
      @ ~/.julia/packages/GPUCompiler/2CW9L/src/jlgen.jl:82 [inlined]

 function nested_codegen!(mode::API.CDerivativeMode, mod::LLVM.Module, f, tt, world)
+    @show f, tt, world, typeof(f)
     funcspec = GPUCompiler.methodinstance(typeof(f), tt, world)
     nested_codegen!(mode, mod, funcspec, world)
 end

So essentially, here we request a generic form of the function add_one_in_place, instantiating it for an Any input. This works on 1.10, but fails on 1.11, hitting the aforementioned assertion error.
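The failing check can be sketched with only Base (add_one_in_place below is a hypothetical stand-in; the real function lives in Enzyme.Compiler, and the actual assertion fires inside GPUCompiler.methodinstance at jlgen.jl:82):

```julia
# Hypothetical stand-in for Enzyme.Compiler.add_one_in_place.
add_one_in_place(x) = x

# Base.signature_type builds the dispatch signature Tuple{typeof(f), argtypes...}.
sig = Base.signature_type(add_one_in_place, Tuple{Any})

# A signature containing the abstract type Any is not a dispatch tuple, which
# is (presumably) what the 1.11 assertion in methodinstance is checking:
println(Base.isdispatchtuple(sig))  # false
```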

maleadt commented 3 weeks ago
  isdispatchtuple(T)

  Determine whether type T is a tuple "leaf type", meaning it could appear as a type signature in dispatch and has
  no subtypes (or supertypes) which could appear in a call.

Tuple{Any} is not a valid type to request code generation for. This is also not supported by Base; see the warning emitted by code_llvm:

julia> code_llvm(identity, Tuple{Any})
; WARNING: This code may not match what actually runs.
;  @ operators.jl:522 within `identity`
define nonnull {}* @julia_identity_128({}* noundef nonnull readonly %0) #0 {
top:
  ret {}* %0
}
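The docstring quoted above can be checked directly in the REPL; a dispatch tuple must consist entirely of concrete "leaf" types, so only the element types matter, not whether the container type itself is parameterized by abstract types:

```julia
println(Base.isdispatchtuple(Tuple{Int}))          # true: all elements concrete
println(Base.isdispatchtuple(Tuple{Any}))          # false: Any has subtypes
println(Base.isdispatchtuple(Tuple{Integer}))      # false: abstract element
println(Base.isdispatchtuple(Tuple{Vector{Any}}))  # true: Vector{Any} is itself concrete
```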
wsmoses commented 1 week ago

Worked around in a different way

wsmoses commented 4 days ago

Okay, I take it back: this remains blocking for us for custom rules on 1.11.

@gbaraldi you had mentioned some ideas for working around this?

cc @ChrisRackauckas

@maleadt even with that warning as-is, can we somehow still work around this and get the equivalent of that code_llvm output at the bottom?
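One direction that sidesteps the assertion entirely (a sketch, not something proposed in the thread): keep the compiled signature concrete and move the dynamic dispatch inside the compiled code, e.g. by passing the untyped argument through a concrete container such as Vector{Any}. The invoker name is made up for illustration:

```julia
# Hypothetical stand-in for Enzyme.Compiler.add_one_in_place.
add_one_in_place(x) = x

# A concrete trampoline: Vector{Any} is itself a concrete type, so this
# signature passes the isdispatchtuple check, while the inner call to
# add_one_in_place is an ordinary dynamic dispatch at runtime.
invoker(args::Vector{Any}) = add_one_in_place(args[1])

sig = Base.signature_type(invoker, Tuple{Vector{Any}})
println(Base.isdispatchtuple(sig))  # true
```

Requesting GPUCompiler.methodinstance(typeof(invoker), Tuple{Vector{Any}}, world) would then be a valid query on 1.11; whether the resulting code (with its dynamic inner dispatch) is acceptable for Enzyme's custom rules is a separate question.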