lululxvi / deepxde

A library for scientific machine learning and physics-informed learning
https://deepxde.readthedocs.io
GNU Lesser General Public License v2.1
2.56k stars · 730 forks

Multi-fidelity neural network with uncertainty quantification #854

Open AmosJoseph opened 2 years ago

AmosJoseph commented 2 years ago

Hi, I wonder whether a multi-fidelity neural network with uncertainty quantification can be implemented as below:

```python
dropout_rate = 0.01
net = dde.nn.MfNN(
    [4] + [20] * 4 + [1],
    [20] * 3 + [1],
    activation,
    initializer,
    regularization=regularization,
    dropout_rate=dropout_rate,
)
```

```python
model = dde.Model(data, net)
uncertainty = dde.callbacks.DropoutUncertainty(period=1000)
model.compile("adam", lr=0.001, metrics=["l2 relative error"])
```

```
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_9572\1029792569.py in <module>
     33     initializer,
     34     regularization=regularization,
---> 35     dropout_rate=dropout_rate,
     36 )
     37

TypeError: __init__() got an unexpected keyword argument 'dropout_rate'
```

If I comment out `dropout_rate=dropout_rate`, the code runs, as shown below — but that seems wrong, because the example func_uncertainty.py (https://github.com/lululxvi/deepxde/blob/master/examples/function/func_uncertainty.py) does pass a dropout rate to the network for uncertainty quantification.

```python
dropout_rate = 0.01
net = dde.nn.MfNN(
    [4] + [20] * 4 + [1],
    [20] * 3 + [1],
    activation,
    initializer,
    regularization=regularization,
    # dropout_rate=dropout_rate,
)

model = dde.Model(data, net)
uncertainty = dde.callbacks.DropoutUncertainty(period=1000)
model.compile("adam", lr=0.001, metrics=["l2 relative error"])
```

AmosJoseph commented 2 years ago

dropout_rate is used to mitigate over-fitting.

What's the relationship between dropout_rate and DropoutUncertainty?

Must dropout_rate be used for uncertainty quantification?

lululxvi commented 2 years ago

dropout_rate enables dropout in the network. dde.callbacks.DropoutUncertainty performs MC dropout for uncertainty quantification (UQ), which requires a nonzero dropout_rate.
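The idea behind this answer can be sketched without DeepXDE: MC dropout keeps dropout active at prediction time and uses the spread of repeated stochastic forward passes as the uncertainty estimate, which is why the callback needs a nonzero dropout rate. A minimal NumPy sketch (the toy network, weights, and the 0.1 rate are illustrative assumptions, not DeepXDE internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer network with fixed ("pretrained") weights.
W1 = rng.normal(size=(1, 20))
b1 = rng.normal(size=20)
W2 = rng.normal(size=(20, 1))

def forward(x, dropout_rate):
    """One forward pass; dropout stays active at inference (the MC-dropout trick)."""
    h = np.tanh(x @ W1 + b1)
    if dropout_rate > 0:
        keep = rng.random(h.shape) > dropout_rate
        h = h * keep / (1.0 - dropout_rate)  # inverted-dropout scaling
    return h @ W2

x = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)

# MC dropout: many stochastic passes, then aggregate.
samples = np.stack([forward(x, dropout_rate=0.1) for _ in range(1000)])
mean = samples.mean(axis=0)  # predictive mean
std = samples.std(axis=0)    # per-point predictive uncertainty

# With dropout_rate=0 every pass is identical, so the spread collapses to zero:
det = np.stack([forward(x, dropout_rate=0.0) for _ in range(10)])
assert np.allclose(det.std(axis=0), 0.0) and (std > 0).all()
```

The final assertion is the point of the question: without a dropout rate the network is deterministic, repeated passes agree exactly, and MC-dropout "uncertainty" is identically zero.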