(new model) Add `NeuralNetworkBinaryClassifier`, an optimised form of `NeuralNetworkClassifier` for the special case of two target classes. Uses `Flux.σ` instead of `softmax` for the default finaliser (#248)
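  A minimal usage sketch (assuming the MLJ interface; the toy dataset and hyperparameter values are illustrative):

  ```julia
  using MLJ  # provides @load, machine, fit!, predict and make_moons

  NeuralNetworkBinaryClassifier = @load NeuralNetworkBinaryClassifier pkg=MLJFlux

  X, y = make_moons(200)                          # two-class toy data
  clf = NeuralNetworkBinaryClassifier(epochs=20)  # Flux.σ finaliser by default
  mach = machine(clf, X, y) |> fit!
  predict(mach, X)[1:3]                           # probabilistic predictions
  ```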
(internals) Switch from implicit to explicit differentiation (#251)
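  This change is internal and requires no user action. For the curious, the explicit-gradient pattern now favoured by Flux looks roughly like the sketch below (illustrative only, not MLJFlux's exact internals):

  ```julia
  using Flux
  import Optimisers

  model = Dense(3 => 1)
  x, y = rand(Float32, 3, 10), rand(Float32, 1, 10)

  # old implicit style (removed): gradient(() -> loss, Flux.params(model))
  # new explicit style:
  state = Optimisers.setup(Optimisers.Adam(), model)
  grads = Flux.gradient(m -> Flux.mse(m(x), y), model)[1]
  state, model = Optimisers.update(state, model, grads)
  ```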
(breaking) Use optimisers from Optimisers.jl instead of Flux.jl (#251). Note that the new optimisers are immutable.
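  For example (a sketch, assuming `NeuralNetworkClassifier` has been loaded as in the example above):

  ```julia
  import Optimisers

  clf = NeuralNetworkClassifier(optimiser=Optimisers.Adam(0.01))

  # Optimisers.jl rules are immutable structs: don't mutate, e.g. `opt.eta = ...`;
  # assign a freshly constructed optimiser instead:
  clf.optimiser = Optimisers.Adam(0.001)
  ```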
(RNG changes) Change the default value of the model field `rng` from `Random.GLOBAL_RNG` to `Random.default_rng()`. Change the seeded RNG, obtained by specifying an integer value for `rng`, from `MersenneTwister` to `Xoshiro` (#251)
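  In practice (sketch, model loaded as above):

  ```julia
  using Random

  clf = NeuralNetworkClassifier(rng=123)          # now seeds a Xoshiro, not a MersenneTwister
  clf = NeuralNetworkClassifier(rng=Xoshiro(42))  # or pass an explicit RNG
  ```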
(RNG changes) Update the `Short` builder so that the `rng` argument of `build(::Short, rng, ...)` is passed on to the `Dropout` layer, as these layers now support this on a GPU, at least for `rng=Random.default_rng()` (#251). See the sketch below.
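  For example (illustrative; `n_in=4`, `n_out=2` are arbitrary):

  ```julia
  using MLJFlux, Random

  builder = MLJFlux.Short(n_hidden=32, dropout=0.5)

  # the rng is now passed through to the Dropout layer:
  chain = MLJFlux.build(builder, Random.default_rng(), 4, 2)
  ```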
(weakly breaking) Change the implementation of L1/L2 regularization from explicit loss penalization to weight/sign decay (internally chained with the user-specified optimiser). The only breakage for users is that the losses reported in the history will no longer be penalized, because the penalty is not explicitly computed (#251)
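  Schematically, the chaining looks something like the sketch below (illustrative only; the exact splitting of `lambda` between the two decay terms is an assumption, not MLJFlux's verbatim internals):

  ```julia
  import Optimisers

  lambda, alpha = 0.01, 0.4  # MLJFlux model hyperparameters
  opt = Optimisers.OptimiserChain(
      Optimisers.SignDecay(alpha*lambda),          # L1 penalty as sign decay
      Optimisers.WeightDecay((1 - alpha)*lambda),  # L2 penalty as weight decay
      Optimisers.Adam(),                           # the user-specified optimiser
  )
  ```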