maleadt closed this pull request 3 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 93.65%. Comparing base (`3d7097a`) to head (`79a98f2`).
It looks like the failure was due to an Aqua ambiguity, not an inference failure; can't we fix that? cc @maleadt
The Aqua failure is unrelated. It's the inference failures that are problematic.
Ah, I didn't see that. Sheesh: `Internal error: stack overflow in type inference of _adapt_tuple_structure(CUDA.KernelAdaptor, NTuple{7708, UInt64})`. That tuple seems awfully large; is that correct?

If so, is there a middle ground we could settle on? Maybe we can specialize on small tuples?
Yeah, those large tuples are used to test for parameter space exhaustion: https://github.com/JuliaGPU/CUDA.jl/blob/cb14a637e0b7b7be9ae01005ea9bdcf79b320189/test/core/execution.jl#L622-L625
In any case, it would be good to add a limit based on the length of the tuple. Anything sufficiently large should probably fall back to the current implementation? Or maybe use `ntuple` (why doesn't that suffice in the first place to avoid the inference problem?).
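The length-cutoff idea above could look something like the following. This is a hypothetical sketch, not the actual Adapt.jl code: `TUPLE_ADAPT_LIMIT`, `adapt_tuple_sketch`, and `adapt_leaf` are made-up names, and the cutoff value is arbitrary. The point is just the dispatch: small tuples take the unrolled `ntuple`/`Val` path that inference can handle, while huge tuples (like the `NTuple{7708, UInt64}` above) take a plain `map` fallback instead of a deeply recursive path.

```julia
# Assumed cutoff, chosen for illustration only.
const TUPLE_ADAPT_LIMIT = 32

# Placeholder leaf rule standing in for a real `adapt` call.
adapt_leaf(to, x) = x

function adapt_tuple_sketch(to, xs::Tuple)
    if length(xs) <= TUPLE_ADAPT_LIMIT
        # Small tuples: unrolled via ntuple + Val, so inference sees a
        # fixed-length construction rather than deep recursion.
        return ntuple(i -> adapt_leaf(to, xs[i]), Val(length(xs)))
    else
        # Large tuples: simple elementwise fallback that avoids blowing
        # the inference stack on enormous tuple types.
        return map(x -> adapt_leaf(to, x), xs)
    end
end
```

For example, `adapt_tuple_sketch(nothing, ntuple(identity, 7708))` would take the fallback branch, while a 3-tuple would take the unrolled one.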
Reverts JuliaGPU/Adapt.jl#78
Looks like this introduces inference crashes during CUDA.jl CI: https://buildkite.com/julialang/gpuarrays-dot-jl/builds/814#018e13b6-4936-4fb9-8d88-4402694019e6
cc @charleskawczynski