denizyuret / Knet.jl

Koç University deep learning framework.
https://denizyuret.github.io/Knet.jl/latest

DCGAN.main("") fails #443

Open montyvesselinov opened 5 years ago

montyvesselinov commented 5 years ago
julia> include("dcgan.jl")
WARNING: replacing module DCGAN.
Main.DCGAN

julia> DCGAN.main("")
[ Info: Loading MNIST...
training started...
ERROR: MethodError: no method matching batchnorm4(::Array{Float32,4}, ::Array{Float32,4}, ::Array{Float64,4}; moments=Knet.BNMoments(0.1, [0.0 … 0.0], [1.0 … 1.0], zeros, ones), training=true, cache=Knet.BNCache(nothing, nothing, nothing, nothing, nothing))
Closest candidates are:
  batchnorm4(::Array{T,N} where N, ::Array{T,N} where N, ::Array{T,N} where N; o...) where T at /Users/monty/.julia/packages/Knet/05UDD/src/batchnorm.jl:364
  batchnorm4(::##985, ::##986, ::AutoGrad.Value{##987}; o...) where {##985, ##986, ##987} at none:0
  batchnorm4(::##985, ::AutoGrad.Value{##986}, ::##987; o...) where {##985, ##986, ##987} at none:0
  ...
Stacktrace:
 [1] #batchnorm2#567(::Knet.BNMoments, ::Bool, ::Base.Iterators.Pairs{Symbol,Knet.BNCache,Tuple{Symbol},NamedTuple{(:cache,),Tuple{Knet.BNCache}}}, ::Function, ::Array{Float32,2}, ::Array{Float32,2}, ::Array{Float64,2}) at /Users/monty/.julia/packages/Knet/05UDD/src/batchnorm.jl:496
 [2] (::getfield(Knet, Symbol("#kw##batchnorm2")))(::NamedTuple{(:moments, :training, :cache),Tuple{Knet.BNMoments,Bool,Knet.BNCache}}, ::typeof(Knet.batchnorm2), ::Array{Float32,2}, ::Array{Float32,2}, ::Array{Float64,2}) at ./none:0
 [3] #batchnorm#435(::Bool, ::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}, ::Function, ::Array{Float64,2}, ::Knet.BNMoments, ::Array{Float32,1}) at /Users/monty/.julia/packages/Knet/05UDD/src/batchnorm.jl:90
 [4] (::getfield(Knet, Symbol("#kw##batchnorm")))(::NamedTuple{(:training,),Tuple{Bool}}, ::typeof(Knet.batchnorm), ::Array{Float64,2}, ::Knet.BNMoments, ::Array{Float32,1}) at ./none:0
 [5] #glayer1#25(::Bool, ::Function, ::Array{Float64,2}, ::Array{Any,1}, ::Knet.BNMoments) at /Users/monty/Julia/rMF/GAN/dcgan.jl:255
 [6] (::getfield(Main.DCGAN, Symbol("#kw##glayer1")))(::NamedTuple{(:training,),Tuple{Bool}}, ::typeof(Main.DCGAN.glayer1), ::Array{Float64,2}, ::Array{Any,1}, ::Knet.BNMoments) at ./none:0
 [7] #gnet#24(::Bool, ::Function, ::Array{Any,1}, ::Array{Float64,2}, ::Array{Any,1}) at /Users/monty/Julia/rMF/GAN/dcgan.jl:244
 [8] #gnet at ./none:0 [inlined]
 [9] train_discriminator!(::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Array{Any,1}, ::Array{Float32,4}, ::Array{UInt8,1}, ::Array{Float64,2}, ::Array{Knet.Adam,1}, ::Dict{Symbol,Any}) at /Users/monty/Julia/rMF/GAN/dcgan.jl:201
 [10] macro expansion at /Users/monty/Julia/rMF/GAN/dcgan.jl:41 [inlined]
 [11] macro expansion at ./util.jl:156 [inlined]
 [12] main(::String) at /Users/monty/Julia/rMF/GAN/dcgan.jl:39
 [13] top-level scope at none:0
iuliancioarca commented 5 years ago

I had the same issue. The problem was in the `sample_noise` function: it received a Float32 array type but returned Float64. I fixed it with:

```julia
function sample_noise(atype, zdim, nsamples, mu=0.5, sigma=0.5)
    noise = randn(zdim, nsamples)
    normalized = convert(atype, (noise .- mu) ./ sigma)
end
```

After that I got a similar error from `conv4(...)` regarding Float32 and Float64. That was because there are a lot of functions with a default `alpha = 0.2` parameter, which is Float64. I forced all of those to `alpha = Float32(0.2)` and now it works.
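The underlying mechanism can be seen in a few lines of plain Julia (a minimal sketch, not Knet code): broadcasting a Float64 literal against a Float32 array silently promotes the result to Float64, which then no longer matches methods specialized on a single element type.

```julia
# Float32 activations, as produced by the network.
x = randn(Float32, 3, 3)

# A Float64 default like alpha = 0.2 promotes the whole result to Float64.
y64 = 0.2 .* x
println(eltype(y64))           # Float64

# Forcing the literal to Float32 keeps the element type stable.
y32 = Float32(0.2) .* x
println(eltype(y32))           # Float32
```

This is why changing the `alpha` defaults to `Float32(0.2)` makes the downstream `conv4`/`batchnorm4` method signatures match again.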

Please keep in mind that my code was tested on the CPU (not on the GPU with KnetArrays), and it is more or less a workaround: all these forced conversions should really be made to accommodate whichever `atype` the user chooses (and the module supports, of course).
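Following that suggestion, one way to avoid hard-coding either Float32 or Float64 is to derive the element type from `atype` itself. This is only a sketch (not from the Knet source, and only tested with plain `Array` types on the CPU):

```julia
# Hypothetical atype-driven variant of sample_noise: draw and scale the
# noise in the element type of atype, so no Float64 intermediates appear.
function sample_noise(atype, zdim, nsamples, mu=0.5, sigma=0.5)
    T = eltype(atype)                      # e.g. Float32 for Array{Float32}
    noise = randn(T, zdim, nsamples)       # noise generated directly in T
    convert(atype, (noise .- T(mu)) ./ T(sigma))
end
```

With `atype = Array{Float32}` this returns a Float32 array; with a GPU array type the final `convert` would move the data, though whether `randn(T, ...)` covers every supported element type is an assumption here.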