SciML / DiffEqFlux.jl

Pre-built implicit layer architectures with O(1) backprop, GPUs, and stiff+non-stiff DE solvers, demonstrating scientific machine learning (SciML) and physics-informed machine learning methods
https://docs.sciml.ai/DiffEqFlux/stable
MIT License

DiffEqFlux Layers don't satisfy Lux API #727

Closed: avik-pal closed this issue 1 year ago

avik-pal commented 2 years ago

The DiffEqFlux layers need to satisfy https://lux.csail.mit.edu/dev/api/core/#Lux.AbstractExplicitLayer, otherwise the parameters/states returned from Lux.setup will be incorrect. As pointed out on Slack:

julia> ps, st = Lux.setup(rng, Chain(node,Dense(2=>3)))
((layer_1 = NamedTuple(), layer_2 = (weight = Float32[0.11987843 -0.1679378; 0.36991563 0.41324985; 0.73272866 0.7062624], bias = Float32[0.0; 0.0; 0.0;;])), (layer_1 = NamedTuple(), layer_2 = NamedTuple()))

ps.layer_1 should not be an empty NamedTuple

https://lux.csail.mit.edu/dev/manual/interface/ -- is the most recent manual for the interface
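
For context, a minimal sketch of the two per-layer calls that Lux.setup relies on, shown here with Dense:

using Lux, Random

rng = Random.default_rng()
d = Dense(2 => 3)

# A conforming layer must return its trainable parameters and its state
# from these two functions; Lux.setup collects them for every layer.
ps = Lux.initialparameters(rng, d)   # (weight = ..., bias = ...)
st = Lux.initialstates(rng, d)       # NamedTuple() -- Dense is stateless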

YichengDWu commented 2 years ago

Should be easy

initialparameters(rng::AbstractRNG, node::NeuralODE) = initialparameters(rng, node.model) 
initialstates(rng::AbstractRNG, node::NeuralODE) = initialstates(rng, node.model)
parameterlength(node::NeuralODE) = parameterlength(node.model)
statelength(node::NeuralODE) = statelength(node.model)

To make setup work not only for Chain but also directly on NeuralODE, we need to add

function setup(rng::AbstractRNG, node::NeuralODE)
    return (initialparameters(rng, node), initialstates(rng, node))
end
ChrisRackauckas commented 2 years ago

Are you supposed to overload setup? I assume that should just follow from the interface.

avik-pal commented 2 years ago

You just need to define

initialparameters(rng::AbstractRNG, node::NeuralODE) = initialparameters(rng, node.model) 
initialstates(rng::AbstractRNG, node::NeuralODE) = initialstates(rng, node.model)
ChrisRackauckas commented 2 years ago

We should put an abstract type on all of the AbstractNeuralDE types and then overload from there.
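
Roughly like this (a sketch; the abstract type name and the assumption that every neural-DE layer stores its network in a model field are hypothetical):

using Lux, Random

# Hypothetical common supertype for NeuralODE, NeuralDSDE, etc.
abstract type AbstractNeuralDELayer <: Lux.AbstractExplicitLayer end

# One pair of overloads then covers every subtype, delegating to the
# wrapped network assumed to live in a `model` field.
Lux.initialparameters(rng::Random.AbstractRNG, l::AbstractNeuralDELayer) =
    Lux.initialparameters(rng, l.model)
Lux.initialstates(rng::Random.AbstractRNG, l::AbstractNeuralDELayer) =
    Lux.initialstates(rng, l.model)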

YichengDWu commented 2 years ago

You just need to define

initialparameters(rng::AbstractRNG, node::NeuralODE) = initialparameters(rng, node.model) 
initialstates(rng::AbstractRNG, node::NeuralODE) = initialstates(rng, node.model)

For it to work, yes. But wouldn't it be nicer if the number of parameters could be printed automatically?

YichengDWu commented 2 years ago

Are you supposed to overload setup? I assume that should just follow from the interface.

I was assuming NeuralODE was not a subtype of AbstractExplicitLayer. Extending setup should be unnecessary if you are going to subtype it.

avik-pal commented 2 years ago

No. Even if you are not subtyping, initialparameters and initialstates are the only functions that must be implemented; parameterlength and statelength are optional. setup should never be extended.
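
For illustration, a toy wrapper layer (hypothetical, not DiffEqFlux code) that defines only those two functions and relies on the defaults for everything else:

using Lux, Random

struct Wrapped{M} <: Lux.AbstractExplicitLayer
    model::M
end

# Only these two definitions are required; parameterlength and statelength
# fall back to Lux's defaults, and setup itself is never extended.
Lux.initialparameters(rng::Random.AbstractRNG, w::Wrapped) = Lux.initialparameters(rng, w.model)
Lux.initialstates(rng::Random.AbstractRNG, w::Wrapped) = Lux.initialstates(rng, w.model)

ps, st = Lux.setup(Random.default_rng(), Wrapped(Dense(2 => 3)))  # ps holds the Dense's weight and bias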

YichengDWu commented 2 years ago

I would appreciate it if you could help me understand two questions:

  1. Is it still mandatory to implement initialstates if I just have one layer and just need to return NamedTuple()? I have implemented some layers without it. It looks like Lux just defines initialstates(::AbstractRNG, ::Any) = NamedTuple() as a fallback in the source code.
  2. What are the bad consequences of extending setup?
avik-pal commented 2 years ago

It is meant to satisfy an interface.

  1. You are right, the default for initialstates is NamedTuple(), but this is undocumented, so it can be changed without being considered breaking.
  2. Extending setup is not going to solve problems for most people and sets false expectations. For example, if you extend setup for a layer that is contained inside another layer, calling Lux.setup on the outer layer will return empty parameters and states for the internal custom layer (see the sketch below).
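
A toy sketch of that failure mode (hypothetical layer name, and assuming the fallback initialparameters returns an empty NamedTuple, as in the report at the top of this issue):

using Lux, Random

struct BadLayer <: Lux.AbstractExplicitLayer end

# Wrong approach: parameters are produced only by an extended setup,
# never by initialparameters.
Lux.setup(rng::Random.AbstractRNG, ::BadLayer) = ((w = randn(rng, Float32, 3),), NamedTuple())

rng = Random.default_rng()
ps, _ = Lux.setup(rng, Chain(BadLayer(), Dense(3 => 1)))
ps.layer_1   # NamedTuple() -- the Chain calls initialparameters, bypassing the extended setup
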
YichengDWu commented 2 years ago

Highly appreciate the clarification you made.

ChrisRackauckas commented 2 years ago

Flux doesn't care about the subtyping but Lux does, so we should subtype for Lux and then also make it a functor and we're 👍.

ChrisRackauckas commented 2 years ago

Copying over from https://github.com/SciML/DiffEqFlux.jl/pull/735. All should be an AbstractExplicitLayer, which means they should do things exactly like Dense. They should have one state, take in a state, and return a state. They should take in a neural network definition and give you back a state from setup. Basically, it should act exactly like Dense does, and be able to perfectly swap in without any other code changes, and if not it's wrong. The only thing that should be different is the constructor for the layer.
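
That is, any neural-DE layer should support exactly the same calling convention Dense has (sketched here with Dense itself standing in):

using Lux, Random

rng = Random.default_rng()
layer = Dense(2 => 3)                 # imagine a NeuralODE layer here instead
ps, st = Lux.setup(rng, layer)        # parameters and state come from setup
x = randn(Float32, 2, 8)
y, st_new = layer(x, ps, st)          # forward pass: takes a state, returns a state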

@Abhishek-1Bhatt let me know if you need me to do the first one.

avik-pal commented 2 years ago

Once it is built, http://lux.csail.mit.edu/previews/PR70/manual/interface should describe the recommended Lux interface. For DiffEqFlux, everything should really be a subtype of http://lux.csail.mit.edu/stable/api/core/#Lux.AbstractExplicitContainerLayer, and then there would be no need to define initialparameters and initialstates. (Just a small heads up: there will be a small breaking change for the container layers in v0.5, which is still far out.)
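
That would look roughly like this (a sketch with hypothetical type and field names; the tuple parameter tells Lux which fields hold the wrapped layers):

using Lux, Random

# Hypothetical neural-DE layer: subtyping AbstractExplicitContainerLayer{(:model,)}
# makes initialparameters/initialstates come from the `model` field automatically.
struct MyNeuralODE{M, S, T} <: Lux.AbstractExplicitContainerLayer{(:model,)}
    model::M
    solver::S
    tspan::T
end

node = MyNeuralODE(Dense(2 => 2), nothing, (0.0f0, 1.0f0))
ps, st = Lux.setup(Random.default_rng(), node)   # ps holds the inner Dense's weight and bias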

ba2tro commented 2 years ago

Ahh yes, I had thought about it some time ago (https://julialang.slack.com/archives/C7T968HRU/p1655536943724979?thread_ts=1655535510.205359&cid=C7T968HRU), but we didn't discuss it, so I ended up subtyping AbstractExplicitLayer.

ChrisRackauckas commented 1 year ago

Done in https://github.com/SciML/DiffEqFlux.jl/pull/750