FluxML / Optimisers.jl

Optimisers.jl defines many standard optimisers and utilities for learning loops.
https://fluxml.ai/Optimisers.jl
MIT License

`nothing` does not correspond to updating the state with a zero gradient. #140

Open CarloLucibello opened 1 year ago

CarloLucibello commented 1 year ago

As mentioned in https://github.com/FluxML/Optimisers.jl/pull/137#discussion_r1159911990, when a `nothing` gradient is encountered the `apply!` rule is not called at all and the state is not updated. So these two calls

```julia
Optimisers.update!(st, x, nothing)
Optimisers.update!(st, x, zero(x))
```

give different results (a concrete sketch follows the quote below). In the same discussion @mcabbott said:

I suspect this is more an accident than a design, but I'm not sure it's an awful one. If you are doing ordinary AD and happen to get an array of zeros on some batch, probably you do want that to update the momenta etc. But you won't get nothing just because of the data in that batch. Instead, you'll get it because you are e.g. doing transfer learning, or the generator & discriminator on even/odd steps, or something like that. You will get nothing not for one array, but for a whole part of the model. And it seems like you probably don't want to update the momenta for the part of the model not being trained, but instead just ignore them completely.
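To make the difference concrete, here is a minimal sketch using `Momentum` (the non-mutating `update` is used so the same starting state can be reused; reading the `.state` field of a `Leaf` is only for illustration and relies on internal layout):

```julia
using Optimisers

x  = [1.0, 2.0]
st = Optimisers.setup(Momentum(0.1, 0.9), x)

# One real step so the momentum buffer is non-zero.
st, x = Optimisers.update(st, x, [0.5, 0.5])

# `nothing` skips `apply!` entirely, so the returned state is unchanged.
st_nothing, _ = Optimisers.update(st, x, nothing)

# A zero gradient still calls `apply!`, so the momentum buffer is decayed by ρ.
st_zero, _ = Optimisers.update(st, x, zero(x))

# Internal `Leaf.state` field, read only to illustrate the difference.
st_nothing.state == st_zero.state  # false: the two calls leave different optimiser states
```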

But I think these examples should instead correspond to the `opt_tree` covering only part of the model, or to using different trees for the discriminator and the generator, as sketched below.
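A minimal sketch of that alternative, with hypothetical generator/discriminator stand-ins (the named tuples, the gradient, and `Adam(1e-3)` are placeholders): keeping one state tree per sub-model means the frozen part's momenta are never touched in the first place.

```julia
using Optimisers

# Stand-ins for a generator and a discriminator (any Functors-compatible model works).
gen  = (W = randn(3, 3), b = zeros(3))
disc = (W = randn(3, 3), b = zeros(3))

# One optimiser state tree per sub-model, instead of one tree over the whole GAN.
st_gen  = Optimisers.setup(Adam(1e-3), gen)
st_disc = Optimisers.setup(Adam(1e-3), disc)

# On a generator step only the generator's tree and parameters are updated;
# the discriminator's momenta are left alone by construction, no `nothing` needed.
grad_gen = (W = ones(3, 3), b = ones(3))   # placeholder gradient
st_gen, gen = Optimisers.update(st_gen, gen, grad_gen)
```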

So in this issue I argue that we should treat `nothing` as semantically equivalent to a zero gradient, and define another type, e.g. `NoUpdate`, to signal that the `apply!` rule should not be called at all (so no momentum updates etc.).
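A rough sketch of the proposed semantics (`NoUpdate` and the `update_leaf` helper are hypothetical, not part of Optimisers.jl; the real change would live in the library's update machinery):

```julia
using Optimisers

struct NoUpdate end   # hypothetical sentinel: "do not touch this leaf at all"

# Proposed semantics: `nothing` behaves like a zero gradient, so `apply!` still
# runs and momenta, step counters, etc. are updated as if the gradient were zero.
function update_leaf(rule, state, x, dx)
    dx === nothing && (dx = zero(x))
    state, dx′ = Optimisers.apply!(rule, state, x, dx)
    return state, x .- dx′
end

# The new sentinel skips `apply!` entirely, leaving state and parameters untouched.
update_leaf(rule, state, x, ::NoUpdate) = (state, x)
```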