tum-pbs / PhiFlow

A differentiable PDE solving framework for machine learning
MIT License

Scaling PhiFlow across multiple GPUs #107

Open joyjitkundu032 opened 1 year ago

joyjitkundu032 commented 1 year ago

Is there any way to scale PhiFlow across multiple GPUs?

holl- commented 1 year ago

Multi-GPU execution is not officially supported yet. Here is what you can do:

You can list all available GPUs using `backend.default_backend().list_devices('GPU')`. Then you can set one as the default device using `backend.default_backend().set_default_device()`. All tensor initializers will then allocate on that GPU.

You can use one of the native backend functions, such as JAX's `pmap`, to parallelize your function across devices. This currently requires you to pass only native tensors (not PhiFlow fields) to the function.
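The `pmap` route can be sketched as follows. The `step` function here is a hypothetical stand-in for a simulation step; in practice you would first convert PhiFlow fields to native JAX arrays (and back afterwards):

```python
# Sketch: shard a batch across devices with JAX's pmap.
import jax
import jax.numpy as jnp

n = jax.local_device_count()  # number of GPUs (1 on a CPU-only machine)

def step(x):
    # Hypothetical per-device computation standing in for a PDE step.
    return x * 2.0 + 1.0

parallel_step = jax.pmap(step)

# The leading axis must match the device count; each slice runs on one device.
x = jnp.arange(n * 4, dtype=jnp.float32).reshape(n, 4)
y = parallel_step(x)
print(y.shape)
```

Each device executes `step` on its own slice of the batch, so this gives data parallelism over independent simulations rather than splitting a single large domain across GPUs.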

Multi-GPU support may be added in the future but it's not a priority for us right now. Contributions are welcome!