Because window creation in an MPI RMA operation is a collective operation, every rank must be in the same state, which means we have to enter this state at the very beginning of the I/O. (I didn't consider load balancing, but if we really need it, we only have to make sure every rank goes into the same function that enters the same state.) To do this:
- In the `libyt` `yt` frontend, gather all the non-local grids from each rank's perspective. See cindytsai/yt, branch libyt-NonLocal.
- `MPI_Win_fence` and `MPI_Win_create_dynamic` (a minimal sketch of the collective window pattern follows this list).
- Use `MPI_Gatherv` and `MPI_Bcast` to deal with big send counts in `yt_commit_grids.cpp` (see the gather-then-broadcast sketch below).
- `yt_rma` class: `gather_all_prepare_data`.
- Rename the `yt_rma` class to `yt_rma_field`, and construct `yt_rma_particle`.
- Use `sorted` when getting `fname_list` and the `ptf_c` dictionary, so that the fields to get are unique.
- Update the `libyt` `yt` frontend if using `yt_rma_field` or `yt_rma_particle`.
- Error handling in `yt_rma` if anything bad really happens.
- Test `yt` functionality (these functions did not support parallelism before we supported getting non-local grids):
  - cell-centered
  - face-centered
  - `derived_func`
- `field_list` and `particle_list`: they should exist till the very end.
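
Since every rank must make the window calls together, here is a minimal, hypothetical sketch (not libyt's actual code) of the collective pattern the checklist refers to: `MPI_Win_create` and `MPI_Win_fence` are collective, so even a rank with nothing to expose must participate.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int my_rank, num_ranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);

    // Window creation is collective: every rank must call it, even a rank
    // that exposes no data (it would pass size 0).
    std::vector<double> local_data(100, static_cast<double>(my_rank));
    MPI_Win win;
    MPI_Win_create(local_data.data(), local_data.size() * sizeof(double),
                   sizeof(double), MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    // Fences open and close an access epoch on all ranks simultaneously,
    // which is why every rank has to reach this state at the same point.
    MPI_Win_fence(0, win);
    double remote[10];
    int target = (my_rank + 1) % num_ranks;
    MPI_Get(remote, 10, MPI_DOUBLE, target, /* target_disp = */ 0,
            10, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);  // remote[] is valid only after this fence

    MPI_Win_free(&win);     // collective as well
    MPI_Finalize();
    return 0;
}
```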
`field_list` and `particle_list` should exist till the end of the inline-analysis.

@cindytsai I like this approach. We can easily distinguish local and non-local grids when collecting the complete AMR structure.
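
For the `yt_commit_grids.cpp` item, a sketch of the gather-then-broadcast pattern, under the assumption that each rank owns a different number of grid records; `GridInfo` and `gather_all_grids` are illustrative names, not libyt's actual types, and counts beyond `INT_MAX` would additionally need chunked transfers, which this sketch omits.

```cpp
#include <mpi.h>
#include <vector>

// Illustrative record; not libyt's actual hierarchy struct.
struct GridInfo { long id; int parent_id; int mpi_rank; };

// Gather every rank's local grid records on rank 0 with MPI_Gatherv, then
// MPI_Bcast the combined list so all ranks hold the complete AMR structure.
std::vector<GridInfo> gather_all_grids(const std::vector<GridInfo> &local,
                                       MPI_Comm comm) {
    int my_rank, num_ranks;
    MPI_Comm_rank(comm, &my_rank);
    MPI_Comm_size(comm, &num_ranks);

    // Per-rank byte counts differ, so exchange them before MPI_Gatherv.
    int send_count = static_cast<int>(local.size() * sizeof(GridInfo));
    std::vector<int> recv_counts(num_ranks), displs(num_ranks, 0);
    MPI_Gather(&send_count, 1, MPI_INT, recv_counts.data(), 1, MPI_INT, 0, comm);

    int total = 0;
    if (my_rank == 0)
        for (int r = 0; r < num_ranks; r++) {
            displs[r] = total;
            total += recv_counts[r];
        }
    MPI_Bcast(&total, 1, MPI_INT, 0, comm);

    std::vector<GridInfo> all(total / sizeof(GridInfo));
    MPI_Gatherv(local.data(), send_count, MPI_BYTE,
                all.data(), recv_counts.data(), displs.data(), MPI_BYTE, 0, comm);
    MPI_Bcast(all.data(), total, MPI_BYTE, 0, comm);  // everyone gets the full list
    return all;
}
```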
Inline script (saving on every rank):

```python
import yt

yt.enable_parallelism()

def yt_inline():
    ds = yt.frontends.libyt.libytDataset()
    sc = yt.create_scene(ds, lens_type="perspective")
    source = sc[0]
    source.tfh.set_log(True)
    source.tfh.grey_opacity = False
    source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))
    sc.save("rendering.png", sigma_clip=4.0)
```
Saving only on the root rank:

```python
import yt

yt.enable_parallelism()

def yt_inline():
    ds = yt.frontends.libyt.libytDataset()
    sc = yt.create_scene(ds, lens_type="perspective")
    source = sc[0]
    source.tfh.set_log(True)
    source.tfh.grey_opacity = False
    source.tfh.plot("transfer_function.png", profile_field=("gas", "density"))
    if yt.is_root():
        sc.save("rendering.png", sigma_clip=4.0)
```
(Only the root rank reaches `_read_fluid_selection`, because there is an `if` clause `yt.is_root()` when saving the figure; this shows up in the `info`-level log.)
```python
import yt

yt.enable_parallelism()
yt.set_log_level("info")

def yt_inline():
    ds = yt.frontends.libyt.libytDataset()
    L = [1, 1, 0]
    north_vector = [-1, 1, 0]
    prj = yt.OffAxisProjectionPlot(ds, L, ("gas", "density"), north_vector=north_vector)
    if yt.is_root():
        prj.save()
```
```python
import yt

yt.enable_parallelism()

def yt_inline():
    ds = yt.frontends.libyt.libytDataset()
    L = [1, 1, 0]
    north_vector = [-1, 1, 0]
    cut = yt.SlicePlot(ds, L, ("gas", "density"), north_vector=north_vector, center=[0.5, 0.5, 0.5])
    if yt.is_root():
        cut.save()
```
Tested with the `gamer` Plummer test problem. With `yt.is_root()`:

```python
import yt

yt.enable_parallelism()

def yt_inline():
    ds = yt.frontends.libyt.libytDataset()
    par = yt.ParticlePlot(ds, "particle_position_x", "particle_position_y",
                          ("io", "particle_mass"), center="c")
    if yt.is_root():
        par.save()
```

Without `yt.is_root()`:

```python
import yt

yt.enable_parallelism()

def yt_inline():
    ds = yt.frontends.libyt.libytDataset()
    par = yt.ParticlePlot(ds, "particle_position_x", "particle_position_y",
                          ("io", "particle_mass"), center="c")
    par.save()
```
[Output vs. expected output comparison images]
Tested with the `gamer` Plummer test problem. With `yt.is_root()`:

```python
import yt

yt.enable_parallelism()

def yt_inline():
    ds = yt.frontends.libyt.libytDataset()
    par_prj = yt.ParticleProjectionPlot(ds, "z")
    if yt.is_root():
        par_prj.save()
```

Without `yt.is_root()`:

```python
import yt

yt.enable_parallelism()

def yt_inline():
    ds = yt.frontends.libyt.libytDataset()
    par_prj = yt.ParticleProjectionPlot(ds, "z")
    par_prj.save()
```
[MPI = 2 output vs. MPI = 1 (expected) output images]
Ask Non-Local Grid From Other MPI Rank

Make `libyt` get non-local grids from other ranks and pass them back to `yt`, just like how derived fields are handled.
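
As a rough illustration of that flow, here is a hedged sketch of the prepare/fetch/cleanup steps using a dynamic window, loosely mirroring what a `yt_rma_field`-like class would do; the names and the address exchange via `MPI_Allgather` are assumptions, not libyt's actual API.

```cpp
#include <mpi.h>
#include <vector>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    int my_rank, num_ranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &num_ranks);

    // 1. Prepare: every rank attaches its local grid data to a dynamic
    //    window (creation is collective, so all ranks enter this state).
    MPI_Win win;
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    std::vector<double> grid_data(64, static_cast<double>(my_rank));
    MPI_Win_attach(win, grid_data.data(), grid_data.size() * sizeof(double));

    // 2. Publish: with dynamic windows, readers need the attached address.
    MPI_Aint my_addr;
    MPI_Get_address(grid_data.data(), &my_addr);
    std::vector<MPI_Aint> addrs(num_ranks);
    MPI_Allgather(&my_addr, 1, MPI_AINT, addrs.data(), 1, MPI_AINT,
                  MPI_COMM_WORLD);

    // 3. Fetch: read a non-local grid from another rank, then hand the
    //    buffer back to the caller (in libyt's case, back to yt).
    int target = (my_rank + 1) % num_ranks;
    std::vector<double> fetched(64);
    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Get(fetched.data(), 64, MPI_DOUBLE, target, addrs[target],
            64, MPI_DOUBLE, win);
    MPI_Win_unlock(target, win);

    // 4. Clean up once the data has been consumed.
    MPI_Win_detach(win, grid_data.data());
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```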