Closed ktangsali closed 8 months ago
/blossom-ci
@ktangsali in my opinion this example should demonstrate how the training workload for each subdomain can be distributed across different GPUs, which is currently missing from your example. Also, if for any reason you are interested in single-GPU training with multiple domains, consider the FBPINNs approach, which essentially lets you get rid of the interface constraints. Your example can be easily extended to FBPINNs by removing the interface constraints and switching from a Heaviside function to a smooth function (e.g., hyperbolic tangent) that overlaps with the neighboring domain.
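The Heaviside-to-tanh switch described above can be sketched as follows. This is an illustrative 1-D sketch, not code from the example; the split point `x0` and sharpness `beta` are hypothetical parameters:

```python
# Sketch: replacing a hard Heaviside split with smooth, overlapping
# tanh windows, as suggested for an FBPINN-style decomposition.
# x0 (subdomain boundary) and beta (transition sharpness) are
# illustrative choices, not values from the example itself.
import numpy as np

def heaviside_windows(x, x0=0.5):
    """Hard partition: each point belongs to exactly one subdomain."""
    w_left = (x < x0).astype(float)
    w_right = 1.0 - w_left
    return w_left, w_right

def tanh_windows(x, x0=0.5, beta=10.0):
    """Smooth partition: windows overlap near x0 and sum to 1 everywhere."""
    w_right = 0.5 * (1.0 + np.tanh(beta * (x - x0)))
    w_left = 1.0 - w_right
    return w_left, w_right

x = np.linspace(0.0, 1.0, 101)
wl, wr = tanh_windows(x)
# partition of unity: the overlap removes the need for interface constraints
assert np.allclose(wl + wr, 1.0)
```

Because the smooth windows sum to one everywhere, the subdomain networks blend continuously in the overlap region instead of meeting at a hard interface.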
Because the distributed training component is missing here, should we call this example something other than XPINNs? Maybe domain decomposition?
Thanks @mnabian, I have addressed your comments and also renamed the example. I think these two examples are a great demonstration of how to use different neural networks in different subdomains of the problem and couple them via interface constraints/basis functions. The distributed part is a nice-to-have, and we can add that in a separate PR.
/blossom-ci
Modulus Pull Request
Description
A simple example showing how to perform domain decomposition in the X-PINN and FBPINN styles in Modulus. The concept is demonstrated on a lid-driven cavity case.
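For the X-PINN-style coupling mentioned above, each subdomain gets its own model, and an extra loss term penalizes the mismatch of their predictions on shared interface points. A minimal sketch, where the models are stand-in callables rather than the example's actual networks:

```python
# Sketch of an X-PINN-style interface constraint: penalize the
# disagreement of two subdomain solutions on their shared interface.
import numpy as np

def interface_loss(model_a, model_b, interface_pts):
    """Mean-squared mismatch of the two subdomain solutions on the interface."""
    ua = model_a(interface_pts)
    ub = model_b(interface_pts)
    return np.mean((ua - ub) ** 2)

# Stand-in "networks" for two subdomains (hypothetical, for illustration)
model_a = lambda x: np.sin(x)
model_b = lambda x: np.sin(x) + 0.01  # constant offset of 0.01

pts = np.linspace(0.4, 0.6, 50)
loss = interface_loss(model_a, model_b, pts)  # = 0.01**2 = 1e-4
```

In training, this term would be added to each subdomain's PDE residual loss so the networks agree where their domains meet.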
The domain is divided as shown below:
For FBPINN, the basis functions shown below are used:
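How an FBPINN-style global solution can be assembled from per-subdomain networks and basis/window functions can be sketched as follows. This is a 1-D illustration with hypothetical window parameters, not the example's actual code:

```python
# Sketch: FBPINN-style assembly. Each subdomain model's output is
# scaled by its window function and summed, so predictions blend
# smoothly in the overlap region.
import numpy as np

def fbpinn_solution(x, models, windows):
    """Window-weighted sum of subdomain model outputs."""
    total = np.zeros_like(x)
    for model, window in zip(models, windows):
        total += window(x) * model(x)
    return total

# Hypothetical smooth windows forming a partition of unity
beta, x0 = 10.0, 0.5
w_right = lambda x: 0.5 * (1.0 + np.tanh(beta * (x - x0)))
w_left = lambda x: 1.0 - w_right(x)

models = [np.cos, np.cos]  # stand-in subdomain "networks"
x = np.linspace(0.0, 1.0, 101)
u = fbpinn_solution(x, models, [w_left, w_right])
# identical stand-in models, so the blended output equals either model
assert np.allclose(u, np.cos(x))
```

Because the windows sum to one, no interface constraint is needed: continuity of the global solution follows from the smoothness of the windows themselves.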
Results:
Checklist
Dependencies