beacon-biosignals / LighthouseFlux.jl

An adapter package that implements Lighthouse's framework interface for Flux
MIT License

added `testmode!` wrappers #3

Closed: anoojpatel closed this 4 years ago

hannahilea commented 4 years ago

Cool---can you add some tests around this behavior, please?

anoojpatel commented 4 years ago

These tests look like good tests for this behavior, but testing fails on what looks like an unrelated test: `@test counted == sum(1:limit)`. Going to investigate why that's the case.

anoojpatel commented 4 years ago

OK, fixed it. It was because I was sampling from the seed, which threw off that test. @hannahilea Ready for review!

hannahilea commented 4 years ago

Looks good! Is it worth additionally adding

@test Flux.istraining(model)

and

@test !Flux.istraining(model)

in the appropriate places?

anoojpatel commented 4 years ago

`istraining()` is more like global state for the currently loaded Flux package. We want to avoid a single global answer to "is Flux currently doing gradient updates, applying dropout, and using batch statistics?" Rather, we want to test that we are putting specific model objects into test mode with `testmode!(model)`, or back into training mode with `testmode!(model, false)`.
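For example, a minimal sketch of the per-model toggling (the two-layer `Chain` here is just an illustration):

```julia
using Flux

model = Flux.Chain(Flux.Dense(4, 4), Flux.Dropout(0.5))

Flux.testmode!(model)         # force this specific model into test mode (dropout off)
Flux.testmode!(model, false)  # force it back into training mode (dropout on)
```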

hannahilea commented 4 years ago

`istraining()` is more like global state for the currently loaded Flux package

Yikes. It's not per-model?? That seems fairly surprising. What if you have a multistage model (e.g., your cascading model) and you want one stage to be training but another to not be?

anoojpatel commented 4 years ago

That's where https://github.com/FluxML/Flux.jl/issues/909 came about, and they added `testmode!` back :) to support toggling specific Flux objects

hannahilea commented 4 years ago

Oh, I see. From reading https://github.com/FluxML/Flux.jl/commit/069d22869313d2eb7dc04d64dc7c7a819643acf7 it looks like individual layers are either active or inactive, so checking this for a whole model would be a relatively straightforward matter of iterating through all the layers in `model.chain` and checking whether each one is active (sketched below). I wonder if it is worth doing this, mostly as a sanity check that Flux doesn't change this behavior?
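For what it's worth, a rough sketch of that sanity check (assuming stateful layers expose the `active` field from the commit above; `is_layer_inactive` is a hypothetical helper):

```julia
using Flux

# Hypothetical helper: a layer counts as inactive if it either has no
# `active` field (stateless layers like Dense) or has been explicitly
# switched off via `testmode!` (which sets `active = false`).
is_layer_inactive(layer) = !hasproperty(layer, :active) || layer.active === false

chain = Flux.Chain(Flux.Dense(4, 4), Flux.Dropout(0.5), Flux.BatchNorm(4))
Flux.testmode!(chain)

@assert all(is_layer_inactive, chain.layers)
```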

Really what we want to enforce is that this active/inactive behavior is correct for any new layer that someone writes, but I don't think that's something that can be enforced at this LighthouseFlux layer. Although perhaps it could be added as a condition of the FluxClassifier constructor. @jrevels, what do you think?

jrevels commented 4 years ago

Really what we want to enforce is that this active/inactive behavior is correct for any new layer that someone writes, but I don't think that's something that can be enforced at this LighthouseFlux layer. Although perhaps it could be added as a condition of the FluxClassifier constructor. @jrevels, what do you think?

I'm not sure there's anything reasonable LighthouseFlux itself can/should provide programmatically in this vein AFAICT...it's on the model author to make sure their models/custom layers are implemented to have the desired behavior (e.g. custom layers overloading testmode! "correctly" where "correct" == whatever the model author intends for that layer). Open to suggestions though.
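For reference, such an overload might look something like this (a rough sketch following the pattern from the Flux commit linked above; `NoiseLayer` is a made-up example layer):

```julia
using Flux

# Made-up custom layer that adds noise during training only.
mutable struct NoiseLayer
    scale::Float32
    active::Union{Bool, Nothing}  # nothing => defer to global istraining()
end
NoiseLayer(scale) = NoiseLayer(Float32(scale), nothing)

(l::NoiseLayer)(x) =
    something(l.active, Flux.istraining()) ? x .+ l.scale .* randn(Float32, size(x)) : x

# The layer author opts into per-model mode switching by overloading testmode!.
Flux.testmode!(l::NoiseLayer, mode = true) =
    (l.active = mode isa Symbol ? nothing : !mode; l)
```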

Should at least add relevant docstring notes that inform callers when/where/how LighthouseFlux uses Flux.testmode!.

Should also (probably?) add `Flux.testmode!(model)` to the `FluxClassifier`'s inner constructor.
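A sketch of what that might look like (the field names and constructor arguments here are assumed for illustration, not copied from the actual package):

```julia
using Flux

struct FluxClassifier{M,O}
    model::M
    optimiser::O
    classes::Vector{Symbol}
    function FluxClassifier(model, optimiser, classes)
        Flux.testmode!(model)  # start every classifier in test mode by default
        return new{typeof(model),typeof(optimiser)}(model, optimiser, classes)
    end
end
```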