Closed: NeroBlackstone closed this 2 weeks ago
"If dims is specified, AbstractVector{T} and !isbitstype(T) throws an error"

Hi, I'm curious why isbitstype() is needed here. For any 1D array, we can't specify dims, right? So why do we need isbitstype()? It would be nice if you could give an example with specific code. Thank you very much.
I have updated the implementation according to your guidance, except for the doubtful points above.
The tests failed on GPU; I can't reproduce them locally since I don't have AMDGPU or CUDA devices...
I think it fails because Base.Iterators.Reverse{CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}} is not isbitstype, and CUDA doesn't like that. Even a minimal example like this fails for that reason:
g = CUDA.zeros(1)
gr = Iterators.reverse(g)
g .* gr
Using reverse instead of Iterators.reverse on arrays of type GPUArraysCore.AbstractGPUArray should fix this.
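The suggested dispatch could be sketched like this. Note this is a hypothetical sketch, not the actual PR code: the helper name _maybe_reverse is invented for illustration, and the idea is simply to reverse eagerly on GPU arrays (so the broadcast kernel only ever sees a plain array with isbits elements) while keeping the allocation-free lazy iterator on the CPU path.

```julia
using GPUArraysCore: AbstractGPUArray

# Hypothetical helper (name assumed, not from the PR):
# GPU path: eager reverse produces a real GPU array, which is kernel-safe.
_maybe_reverse(x::AbstractGPUArray) = reverse(x)
# CPU path: lazy Iterators.reverse avoids the copy and works fine.
_maybe_reverse(x::AbstractVector) = Iterators.reverse(x)
```

The trade-off is that reverse materializes a new array, so the GPU path pays one extra allocation in exchange for avoiding the non-isbits Iterators.Reverse wrapper inside the kernel.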
I got a PC with a GPU and tested locally.
I found Float64 works well but Int throws errors. Weird...
Add this layer to the docs, and also rebase on main. That will fix the doc build.
All modified and coverable lines are covered by tests ✅. Project coverage is 81.13%. Comparing base (acd924f) to head (dbeb848).
I noticed some test failures. Should I follow up on this PR, or leave the rest to you?
GitHub won't let me edit files directly, so if you add the docs to https://github.com/LuxDL/Lux.jl/blob/main/docs/src/api/Lux/layers.md#misc-helper-layers the build will pass.
I don't know if this is the correct implementation.