lululxvi / deepxde

A library for scientific machine learning and physics-informed learning
https://deepxde.readthedocs.io

some questions; info for reading #307

Closed · ian204 closed 2 years ago

ian204 commented 3 years ago

Hello Lu, let me first express my thanks to you for bringing us this package, which simplifies the application of neural networks to PDEs. My interest is in solving the time-dependent Navier-Stokes (NS) equations with various BCs and ICs. I have read your guide and the FAQ list, but I need some extra reading to understand the DeepXDE package better. If you, or anyone else in this group, can answer or comment on the following questions, I would appreciate it very much. Thank you. Ian

Q1: Where can I find a proper description of the syntax for using `lambda x`, `lambda x, y`, or `lambda _`? This would help me better understand the implementation of soft/hard BCs/ICs (see the sketch after this list).
Q2: How can I use my own optimization function to minimize the loss function?
Q3: How can I access the gradient of the loss function with respect to all variables (weights and biases), or with respect to a subset of them?
Q4: How can I access the Hessian of the loss function with respect to a subset of the variables (weights and biases)?
Q5: Does DeepXDE have an option to move the geometry?
Q6: Is there an explicit example for the time-dependent NS equations (2D/3D)?
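For reference on Q1, here is a minimal sketch of how these lambdas typically appear in DeepXDE boundary/initial conditions; the geometry, prescribed values, and network sizes below are illustrative, not taken from this thread:

```python
import numpy as np
import deepxde as dde

# Illustrative space-time geometry: unit square x [0, 1] in time.
geom = dde.geometry.Rectangle([0, 0], [1, 1])
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

# Soft BC: the first lambda returns the prescribed value at the points x,
# the second selects which points the condition applies to.
bc = dde.DirichletBC(
    geomtime,
    lambda x: np.zeros((len(x), 1)),     # u = 0 on the boundary
    lambda x, on_boundary: on_boundary,  # apply on the whole boundary
)

# Soft IC: `lambda _, on_initial: on_initial` ignores x and selects t = 0 points.
ic = dde.IC(
    geomtime,
    lambda x: np.sin(np.pi * x[:, 0:1]),  # u(x, y, 0) = sin(pi * x)
    lambda _, on_initial: on_initial,
)

# Hard IC: `lambda x, y` transforms the network output y at inputs x, here
# multiplying by t (column 2) so that u = 0 at t = 0 by construction.
net = dde.maps.FNN([3] + [20] * 3 + [1], "tanh", "Glorot normal")  # dde.nn.FNN in newer releases
net.apply_output_transform(lambda x, y: x[:, 2:3] * y)
```

DeepXDE calls these functions for you with the sampled points as the first argument, so the parameter names themselves are free.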

lululxvi commented 3 years ago
ian204 commented 3 years ago

Thank you for the comments, Lu. I have added some further explanation, as it seems I didn't describe my questions properly in my original message.

Ian

For Q2: Definitely, I will write my own optimization function; that is not the object of my question. The question is how to call this new function instead of the ones (Adam, ...) that TensorFlow uses.
For Q5: I mean, how could one modify the domain? Think of the NS equations in a moving domain ...
For Q6: Actually, it seems that the code with the Beltrami flow is an NS code, just with specific BCs/ICs.

lululxvi commented 3 years ago

Here are some details of the optimizer: https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/Optimizer#apply_gradients. You can use compute_gradients to get the gradients, and I think you need to implement your own apply_gradients. So basically you need to write a subclass of tf.compat.v1.train.Optimizer.
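A minimal, self-contained TF1-style sketch of this compute/apply split (a toy quadratic stands in for the PINN loss; none of these names come from DeepXDE itself). It also shows one way to get the gradients (Q3) and a Hessian block (Q4):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy stand-in for the PINN loss: a scalar function of two variable blocks.
w = tf.Variable([1.0, 2.0], name="w")
b = tf.Variable(0.5, name="b")
loss = tf.reduce_sum(w ** 2) + b ** 2

params = tf.trainable_variables()   # all weights and biases
grads = tf.gradients(loss, params)  # Q3: gradients wrt every variable
hess_w = tf.hessians(loss, w)       # Q4: Hessian wrt the subset w only

# Q2: the compute/apply split; a custom update rule would either modify
# grads_and_vars here or live in a subclass of tf.train.Optimizer.
opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
grads_and_vars = opt.compute_gradients(loss, var_list=params)
train_op = opt.apply_gradients(grads_and_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(grads))   # [array([2., 4.]), 1.0]
    print(sess.run(hess_w))  # [array([[2., 0.], [0., 2.]])]
    sess.run(train_op)       # one gradient-descent step
```

For a genuinely new update rule, subclassing tf.compat.v1.train.Optimizer and overriding _apply_dense (and its sparse/resource counterparts) is the standard TF1 route.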

For a moving domain, you can use a sufficiently large domain that covers your true domain. Then, instead of asking DeepXDE to sample the training points, you can sample the points X by yourself and define PDE(..., anchors=X). Or you can implement a subclass of Geometry.
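A minimal sketch of the anchors route under assumed settings (the bounding box, the moving-disk indicator inside_true_domain, and the trivial placeholder PDE below are all made up for illustration):

```python
import numpy as np
import deepxde as dde

# Bounding box in space-time that covers the true domain at every instant.
geom = dde.geometry.Rectangle([0, 0], [2, 1])
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

def inside_true_domain(x):
    # Hypothetical moving geometry: the fluid region is the box minus a
    # disk whose center drifts with time t = x[:, 2].
    cx = 0.5 + x[:, 2]
    return (x[:, 0] - cx) ** 2 + (x[:, 1] - 0.5) ** 2 > 0.1 ** 2

def pde(x, u):
    # Placeholder residual (du/dt = 0); replace with the NS residuals.
    return dde.grad.jacobian(u, x, i=0, j=2)

# Sample candidates in the bounding box yourself, keep only the true domain,
# and hand them to DeepXDE as anchors instead of letting it sample.
X = geomtime.random_points(100000)
X = X[inside_true_domain(X)]
data = dde.data.TimePDE(geomtime, pde, [], num_domain=0, anchors=X)
```

The Geometry subclass alternative amounts to implementing the same membership test inside methods such as inside, on_boundary, and the point-sampling routines, which then lets DeepXDE resample the domain itself.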

ian204 commented 3 years ago

Thank you, Lu. It gives me some ideas to work on.