PhasicFlow is a parallel C++ code for performing DEM (discrete element method) simulations. It runs on shared-memory, multi-core hardware such as multi-core CPUs and GPUs (currently CUDA-enabled GPUs). The parallelization method relies mainly on loop-level parallelism on a shared-memory computational unit. You can build and run PhasicFlow in serial mode on a regular PC, in parallel mode on a multi-core CPU, or build it for a GPU device to offload computations to the GPU. In its current state, it can simulate millions of particles (tested with up to 80 million particles) on a single desktop computer. You can see the performance tests of PhasicFlow in the wiki page.
MPI parallelization with dynamic load balancing is under development. With this level of parallelization, PhasicFlow will be able to leverage the computational power of multi-GPU workstations and clusters of distributed-memory CPUs. In summary, PhasicFlow will then offer six execution modes: serial on a single CPU core, parallel on a multi-core CPU, and parallel on a GPU, each with or without MPI across distributed-memory nodes.
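For readers unfamiliar with loop-level parallelism on shared memory, the sketch below illustrates the general idea with a plain OpenMP pragma in C++. It is not PhasicFlow's actual API; all names in it (numParticles, vx, fx, mass, dt) are hypothetical and chosen only to show how a per-particle loop can be distributed over the cores of a shared-memory machine.

```cpp
// Minimal, generic sketch of loop-level parallelism over particles.
// NOT PhasicFlow code; for illustration of the concept only.
#include <cstdint>
#include <vector>

int main()
{
    const std::int64_t numParticles = 1'000'000;   // hypothetical particle count
    std::vector<double> vx(numParticles, 0.0);     // x-velocity of each particle
    std::vector<double> fx(numParticles, 1.0);     // x-force on each particle
    const double mass = 0.001;                     // hypothetical particle mass
    const double dt   = 1e-5;                      // hypothetical time step

    // Each iteration is independent of the others, so the loop can be split
    // across CPU threads here (or mapped to GPU threads by a CUDA backend).
    #pragma omp parallel for
    for (std::int64_t i = 0; i < numParticles; ++i)
    {
        vx[i] += (fx[i] / mass) * dt;              // explicit velocity update
    }

    return 0;
}
```

Compiled with OpenMP enabled (e.g. `g++ -fopenmp`), the loop runs in parallel; without it, the pragma is ignored and the same code runs serially, which mirrors in spirit the serial/parallel build options mentioned above.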
You can build PhasicFlow for both CPU and GPU execution. The latest release of PhasicFlow is v-0.1. Here is a complete step-by-step procedure for building phasicFlow-v-0.1.
You can find the full documentation of the code, its features, and other related materials in the online documentation of the code.
You can navigate to the tutorials folder in the phasicFlow folder to see some simulation case setups. For a more detailed description, visit our wiki page on tutorials.
PhasicFlowPlus is an extension to PhasicFlow for simulating particle-fluid systems using resolved and unresolved CFD-DEM. See the repository of this package.
If you use PhasicFlow in your research or industrial work, please cite the following article:
@article{NOROUZI2023108821,
  title    = {PhasicFlow: A parallel, multi-architecture open-source code for DEM simulations},
  journal  = {Computer Physics Communications},
  volume   = {291},
  pages    = {108821},
  year     = {2023},
  issn     = {0010-4655},
  doi      = {10.1016/j.cpc.2023.108821},
  url      = {https://www.sciencedirect.com/science/article/pii/S0010465523001662},
  author   = {H.R. Norouzi},
  keywords = {Discrete element method, Parallel computing, CUDA, GPU, OpenMP, Granular flow}
}