The source code for the paper L. Lu, P. Jin, G. Pang, Z. Zhang, & G. E. Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3, 218-229, 2021.
Most of the code is written in Python 3 and depends on the deep learning package DeepXDE. Some code is written in Matlab (version R2019a).
Note: In recent versions of DeepXDE, the class `OpNN` has been renamed to `DeepONet`, and `OpDataSet` to `Triple`, with other modifications. For DeepONet code using a more recent version of DeepXDE, please see https://github.com/lu-group/deeponet-fno.

The installation may take between 10 minutes and one hour.
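For readers new to the architecture, the following is a minimal, hypothetical NumPy sketch of a DeepONet forward pass (untrained random weights, illustration only — not the implementation in this repository or in DeepXDE): a branch net encodes the input function sampled at fixed sensors, a trunk net encodes the query location, and their dot product approximates the operator value.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    # Random (untrained) weights for a small fully connected network.
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Branch net: input function u sampled at m fixed sensor points.
# Trunk net: query location y.
# DeepONet output: dot product of the two p-dimensional embeddings
# (plus a bias), approximating the operator value G(u)(y).
m, p = 100, 40
branch = mlp_params([m, 40, p])
trunk = mlp_params([1, 40, p])
b0 = 0.0

def deeponet(u_sensors, y):
    return float(mlp(branch, u_sensors) @ mlp(trunk, np.atleast_1d(y)) + b0)

u = np.sin(np.linspace(0, 1, m))  # one input function, sampled at the sensors
print(deeponet(u, 0.5))           # scalar prediction for G(u)(0.5)
```

In the actual code, the branch and trunk weights are trained jointly on (u, y, G(u)(y)) triples; this sketch only shows how the two sub-networks are combined.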
Antiderivative

Choose the parameters/setup in the functions `main()` and `ode_system()` based on the comments, and then run the script. A standard output is:
```
Building operator neural network...
'build' took 0.104784 s
Generating operator data...
'gen_operator_data' took 20.495655 s
Generating operator data...
'gen_operator_data' took 168.944620 s
Compiling model...
'compile' took 0.265885 s
Initializing variables...
Training model...
Step      Train loss    Test loss     Test metric
0         [1.09e+00]    [1.11e+00]    [1.06e+00]
1000      [2.57e-04]    [2.87e-04]    [2.76e-04]
2000      [8.37e-05]    [9.99e-05]    [9.62e-05]
...
50000     [9.98e-07]    [1.39e-06]    [1.09e-06]
Best model at step 46000:
  train loss: 6.30e-07
  test loss: 9.79e-07
  test metric: [7.01e-07]
'train' took 324.343075 s
Saving loss history to loss.dat ...
Saving training data to train.dat ...
Saving test data to test.dat ...
Restoring model from model/model.ckpt-46000 ...
Predicting...
'predict' took 0.056257 s
Predicting...
'predict' took 0.012670 s
Test MSE: 9.269857471315847e-07
Test MSE w/o outliers: 6.972881784590493e-07
```
The training and test errors are reported at the end of the output. The run time ranges from several minutes to several hours, depending on the parameters you choose, e.g., the dataset size and the number of training iterations.
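To make the "Generating operator data" step above concrete, here is a hedged stand-in for the data generation in this demo (not the repository's code): the antiderivative operator maps u to G(u)(y) = ∫₀ʸ u(t) dt, so each training triple consists of u sampled at m sensors, a query location y, and the integral up to y. A random cubic polynomial stands in for the Gaussian random field used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_triple(m=100):
    # Sample a random input function u (a random cubic polynomial here,
    # standing in for the paper's Gaussian random field), record it at
    # m sensors on [0, 1], and compute the target value
    # G(u)(y) = int_0^y u(t) dt with the trapezoid rule.
    coeffs = rng.standard_normal(4)
    x = np.linspace(0, 1, m)
    u_sensors = np.polyval(coeffs, x)
    y = rng.uniform(0, 1)
    t = np.linspace(0, y, 1001)
    v = np.polyval(coeffs, t)
    gu_y = np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t))  # trapezoid rule
    return u_sensors, y, gu_y

u_sensors, y, gu_y = sample_triple()
print(u_sensors.shape, y, gu_y)
```

Repeating `sample_triple()` over many input functions and query locations yields the (branch input, trunk input, target) dataset that the network is trained on.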
Stochastic ODE/PDE

Choose the parameters/setup in the function `main()` of each script based on the comments, and run the data-generation and training scripts in turn.

1D Caputo fractional derivative

Use the code in the folder `fractional`.

2D fractional Laplacian

A standard output is:
```
Training...
0 Train loss: 0.21926558017730713 Test loss: 0.22550159692764282
1000 Train loss: 0.0022761737927794456 Test loss: 0.0024939212016761303
2000 Train loss: 0.0004760705924127251 Test loss: 0.0005566366016864777
...
49000 Train loss: 1.2885914202342974e-06 Test loss: 1.999963387788739e-06
50000 Train loss: 1.1382834372852813e-06 Test loss: 1.8525416862757993e-06
Done!
'run' took 747.5421471595764 s
Best model at iteration 50000:
Train loss: 1.1382834372852813e-06 Test loss: 1.8525416862757993e-06
```
The training and test errors are reported at the end of the output. The run time ranges from several minutes to several hours, depending on the parameters you choose, e.g., the dataset size and the number of training iterations.
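For the 1D Caputo fractional derivative demo, reference data must be computed numerically. As a hypothetical illustration (not the solver shipped in this repository), the widely used L1 finite-difference scheme approximates the Caputo derivative of order 0 < α < 1 on a uniform grid:

```python
import math
import numpy as np

def caputo_l1(f_vals, h, alpha):
    # L1 scheme: Caputo derivative of order alpha at the last point t_n
    # of a uniform grid with spacing h, given samples f_vals = f(t_0..t_n).
    #   D^alpha f(t_n) ~ h^(-alpha)/Gamma(2-alpha)
    #                    * sum_k b_k (f(t_{n-k}) - f(t_{n-k-1}))
    # with weights b_k = (k+1)^(1-alpha) - k^(1-alpha).
    n = len(f_vals) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)
    df = f_vals[1:][::-1] - f_vals[:-1][::-1]  # f(t_{n-k}) - f(t_{n-k-1})
    return (h ** -alpha / math.gamma(2 - alpha)) * np.sum(b * df)

# Sanity check: for f(t) = t, the exact Caputo derivative is
# t^(1-alpha) / Gamma(2-alpha), i.e. 1/Gamma(1.5) ~ 1.1284 at t = 1,
# and the L1 scheme is exact for linear f.
h, alpha = 1e-4, 0.5
t = np.arange(0, 1 + h / 2, h)
approx = caputo_l1(t, h, alpha)
exact = 1.0 / math.gamma(2 - alpha)
print(approx, exact)  # both approximately 1.1284
```

A scheme of this kind (or a spectral method, as in the repository's Matlab code) generates the (u, y, D^α u(y)) triples the network is trained on.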
The instructions for running each case are as follows. Each case follows the instructions of "Antiderivative" in Demo:

- Antiderivative: You need to modify the function `main()` in deeponet_pde.py.
- Nonlinear ODE: You need to modify the functions `main()` and `ode_system()` in deeponet_pde.py.
- Gravity pendulum: You need to modify the functions `main()` and `ode_system()` in deeponet_pde.py.
- Diffusion-reaction: You need to modify the function `main()` in deeponet_pde.py.
- Advection: You need to modify the functions `main()` in deeponet_pde.py, `run()` in deeponet_pde.py, `CVCSystem()` in system.py, and `solve_CVC()` in CVC_solver.py to run each case.
- Advection-diffusion: You need to modify the function `main()` in deeponet_pde.py.

If you use this code for academic research, you are encouraged to cite the following paper:
```bibtex
@article{lu2021learning,
  title   = {Learning nonlinear operators via {DeepONet} based on the universal approximation theorem of operators},
  author  = {Lu, Lu and Jin, Pengzhan and Pang, Guofei and Zhang, Zhongqiang and Karniadakis, George Em},
  journal = {Nature Machine Intelligence},
  volume  = {3},
  number  = {3},
  pages   = {218--229},
  year    = {2021}
}
```
To get help on how to use the data or code, simply open an issue in the GitHub "Issues" section.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.