FluxML / Flux.jl

Relax! Flux is the ML library that doesn't make you tensor
https://fluxml.ai/

ConvTranspose with padding on CPU throws exception #2465

Open · DrChainsaw opened this issue 1 week ago

DrChainsaw commented 1 week ago

Not something I need to do myself, but the NaiveNASflux tests hit this.

Haven't dug into it further, but I guess it just hits some obsolete code path:

(jl_SVtHZz) pkg> status
Status `E:\Temp\systmp\jl_SVtHZz\Project.toml`
  [587475ba] Flux v0.14.16

julia> versioninfo()
Julia Version 1.10.4
Commit 48d4fd4843 (2024-06-04 10:41 UTC)
Build Info:
  Official https://julialang.org/ release
Platform Info:
  OS: Windows (x86_64-w64-mingw32)
  CPU: 12 × Intel(R) Core(TM) i7-5820K CPU @ 3.30GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-15.0.7 (ORCJIT, haswell)
Threads: 1 default, 0 interactive, 1 GC (on 12 virtual cores)
Environment:
  JULIA_DEPOT_PATH = E:/Programs/julia/.julia
  JULIA_PKG_DEVDIR = E:/Programs/julia/.julia/dev

julia> using Flux
[ Info: Precompiling Flux [587475ba-b771-5e3f-ad9e-33799f191a9c]

julia> ConvTranspose((1,1), 1=>1)(ones(Float32, 1, 1, 1, 1))
1×1×1×1 Array{Float32, 4}:
[:, :, 1, 1] =
 -1.4218559

julia> ConvTranspose((1,1), 1=>1; pad=(1,1))(ones(Float32, 1, 1, 1, 1))
ERROR: MethodError: no method matching DenseConvDims(::Tuple{…}, ::NTuple{…}; stride::Tuple{…}, padding::Tuple{…}, dilation::Tuple{…}, groups::Int64)

Closest candidates are:
  DenseConvDims(::Tuple{Vararg{Int64, N}}, ::Tuple{Vararg{Int64, K}}, ::Int64, ::Int64, ::Int64, ::Tuple{Vararg{Int64, S}}, ::Tuple{Vararg{Int64, P}}, ::Tuple{Vararg{Int64, D}}, ::Bool) where {N, K, S, P, D} got unsupported keyword arguments "stride", "padding", "dilation", "groups"
   @ NNlib E:\Programs\julia\.julia\packages\NNlib\jLaeV\src\dim_helpers\DenseConvDims.jl:7
  DenseConvDims(::Tuple{Vararg{T, M}} where T, ::Tuple{Vararg{T, M}} where T; stride, padding, dilation, groups, flipkernel) where M
   @ NNlib E:\Programs\julia\.julia\packages\NNlib\jLaeV\src\dim_helpers\DenseConvDims.jl:20

Stacktrace:
 [1] conv_transpose_dims(c::ConvTranspose{2, 2, typeof(identity), Array{Float32, 4}, Vector{Float32}}, x::Array{Float32, 4})
   @ Flux E:\Programs\julia\.julia\packages\Flux\CUn7U\src\layers\conv.jl:323
 [2] (::ConvTranspose{2, 2, typeof(identity), Array{Float32, 4}, Vector{Float32}})(x::Array{Float32, 4})
   @ Flux E:\Programs\julia\.julia\packages\Flux\CUn7U\src\layers\conv.jl:336
 [3] top-level scope
   @ REPL[7]:1
Some type information was truncated. Use `show(err)` to see complete types.
paulnovo commented 3 days ago

Could this be the same issue as #2424, which is fixed by PR #2463?

If I understand correctly how padding affects the ConvTranspose output size, your example results in a negative output size (padding is subtracted from the output of a transposed convolution, and a 1×1 input with a (1,1) kernel only produces a 1×1 output to subtract from), but the following should work with the changes in PR #2463:

julia> ConvTranspose((1,1), 1=>1; pad=(1,1))(ones(Float32, 3, 3, 1,1))
1×1×1×1 Array{Float32, 4}:
[:, :, 1, 1] =
 -1.3000358
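
For reference, a quick sanity check using the standard transposed-convolution output-size formula. This is a sketch of the general relationship, not Flux's internal code; the helper name and keyword defaults below are my own:

# Per spatial dimension, with symmetric padding and no output padding:
#   out = (n - 1) * stride + dilation * (k - 1) + 1 - 2 * pad
conv_transpose_outsize(n, k; stride=1, dilation=1, pad=0) =
    (n - 1) * stride + dilation * (k - 1) + 1 - 2 * pad

conv_transpose_outsize(1, 1; pad=1)  # -1: the original 1×1 example has no valid output size
conv_transpose_outsize(3, 1; pad=1)  #  1: matches the 1×1×1×1 output above

If that's right, the MethodError in the original report is just how the invalid (negative) size surfaces downstream, rather than a separate bug.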