underworldcode / underworld2

underworld2: A parallel, particle-in-cell, finite element code for Geodynamics.
http://www.underworldcode.org/

Particle swarm not working in parallel #635

Closed coopergeodynamics closed 1 year ago

coopergeodynamics commented 1 year ago

Hi All,

I'm trying to incorporate a couple of sampling tracers within my models that will advect with a continent and take temperature-with-depth samples at specified times (see below). I got it to work (yay!), but only when it runs on one process (not yay..). When I run it in parallel, I get index errors, which relate to an array somehow getting set to size 0 (specifically, "IndexError: index 0 is out of bounds for axis 0 with size 0").

Now I think I understand "why" this is happening - if the specified particles in the swarm are outside of the local domain, then they don't "exist" - but I'm completely stumped on how to resolve it. I've tried forcing that portion to run only on the head node (no go). I've tried telling it to run only if we're within the location space of the particle swarm (also no go). Yes, we could just run on a single processor, but it'd be great to figure this out for if/when we move to higher-res models, for which serial would be painful/not doable. Any help would be amazing!
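To illustrate what I mean, here's the failure mode as I understand it (just a sketch):

### tracerParticles.data only holds the particles inside this proc's subdomain,
### so on procs that own no tracers the array is empty and this line raises
### "IndexError: index 0 is out of bounds for axis 0 with size 0"
xpos1_temp = tracerParticles.data[0,0]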

Thanks,

Katie

Bit of code relevant to the particle swarm (happy to share entire py file if useful):

tracerParticles = uw.swarm.Swarm(mesh)
advector_tracer = uw.systems.SwarmAdvector( swarm=tracerParticles, velocityField=velocityField, order=2 )

Set up location of tracers

tracerCoords = np.array([[(cont1X1+(cont1X2-cont1X1)/2.0), 1.0],[(cont1X1+(cont1X2-cont1X1)/4.0), 1.0]])
tracerParticles.add_particles_with_coordinates(tracerCoords)
tracerLocation = tracerParticles.add_variable( dataType="int", count=1 )
tracerLocation.data[:,0] = np.arange(tracerParticles.particleLocalCount)

Set up sampling call

def sampleTracers():
    xpos1_temp = tracerParticles.data[0,0]
    xpos2_temp = tracerParticles.data[1,0]
    arrY_temp = np.linspace(0.,1.,n)
    arrT1_temp = np.zeros(n)
    arrT2_temp = np.zeros(n)
    for i in range(n):
        arrT1_temp[i] = temperatureField.evaluate_global((xpos1_temp,arrY_temp[i]))
        arrT2_temp[i] = temperatureField.evaluate_global((xpos2_temp,arrY_temp[i]))
    return xpos1_temp, arrT1_temp, xpos2_temp, arrT2_temp, arrY_temp

xpos1, arrT1, xpos2, arrT2, arrY = sampleTracers()

bknight1 commented 1 year ago

Hi @drcoopa

Are you using UW2 or UWGeodynamics? There are slightly different ways to construct passive tracers between the two. From how you add the particle swarm I'll assume UW2 for now.

You could try:

### this is the x and y coord of each particle
tracerCoords = np.array([[(cont1X1+(cont1X2-cont1X1)/2.0), 1.0],[(cont1X1+(cont1X2-cont1X1)/4.0), 1.0]])

### this sets up the swarm object
tracerParticles = uw.swarm.Swarm(mesh=mesh, particleEscape=False) ### set to True if particles will be advected out of the domain
### this creates a particle index when adding the particles with the coordinates
particle_index = tracerParticles.add_particles_with_coordinates(tracerCoords)

### you can then add this to the particles if you want as a particleID
tracerLocation = tracerParticles.add_variable( dataType="int", count=1 )
tracerLocation.data[:,0] = particle_index

### add a field to save the interpolated temperature field
tracerTemp = tracerParticles.add_variable( dataType="double", count=1 ) ### "double" so it can hold temperature values

### To sample, the evaluate function can take the global x and y coords and return the values, which you can put straight into the tracer variable
tracerTemp.data[:] = temperatureField.evaluate(tracerParticles)

You can then save the tracer coordinates and fields in the same way as a normal swarm. Let me know if this works or if you need any other help.
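A minimal sketch of that save step (the filenames are placeholders, and the save calls are collective, so make them on all procs):

### save the swarm and its variables, same pattern as any other swarm
tracerParticles.save("tracerSwarm.h5")
tracerLocation.save("tracerLocation.h5")
tracerTemp.save("tracerTemp.h5")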

coopergeodynamics commented 1 year ago

I'm using UW2. Could you explain to me why the above approach would work in parallel as opposed to my original code? I'm asking as that will help me (and perhaps others who might be reading) to learn how to approach using passive tracers within UW2.

bknight1 commented 1 year ago

That's fine! The UW functions hide the complexity introduced by the domain decomposition in parallel, where each CPU is essentially executing every line of code independently. The code above uses UW functions to distribute the passive tracers across multiple CPUs, and the particleID and temperature data (acquired with the UW evaluate function) are both stored on the swarm as variables. The UW functions deal with the different data structures and sizes across CPUs 'under the hood'.
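To see the decomposition at work, something like this (a quick sketch, assuming the usual swarm count properties) prints each proc's share when run under mpirun:

### each proc only holds the tracers that fall inside its subdomain,
### so the local count varies by rank while the global count is fixed
print(uw.mpi.rank, tracerParticles.particleLocalCount, tracerParticles.particleGlobalCount)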

I wasn't able to run your code as some parameters were missing (the parameters required for the coords, and n in the sampling function), but I assume the issue was arising from this line: tracerLocation.data[:,0] = np.arange(tracerParticles.particleLocalCount). It would produce the index error when trying to update the tracerLocation values on procs where no passive tracers exist.

Happy to explain further if needed.

Cheers

coopergeodynamics commented 1 year ago

Thanks, Ben! That is really helpful. I’ll give this a go and let you know.


bknight1 commented 1 year ago

All good @drcoopa, let me know how you get on! Happy to help out further if needed

bknight1 commented 1 year ago

Hey @drcoopa, how did you get on? I'll close this issue for now but feel free to re-open it if you're still having issues!

Thanks

coopergeodynamics commented 1 year ago

Hi Ben! Sorry to just now get back to this issue, but so it goes (ran into end-of-year duties).

But bad news... I followed your approach above, but ran into a similar issue of it not working in parallel.

Here's the error:

tracerLocation.data[:,0] = particle_index

ValueError: could not broadcast input array from shape (2) into shape (0)

So it seems like the same issue of the passive particles not being found... Any other ideas?

bknight1 commented 1 year ago

Sorry @drcoopa

Try: tracerLocation.data[:,0] = particle_index[particle_index > -1]

This should work, as particle_index returns -1 for particles that aren't on that processor, so it only updates the index values for the particles that are on that processor. Let me know if it works.
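As an illustration (the second variable and its values here are hypothetical, just to show the pattern), the same mask keeps any globally-defined per-tracer data consistent with the local rows:

mask = particle_index > -1                    ### True only for tracers on this proc
tracerLocation.data[:,0] = particle_index[mask]
### hypothetical per-tracer values, defined globally (one per coordinate passed in)
initial_vals = np.array([10.0, 20.0])
tracerTemp.data[:,0] = initial_vals[mask]     ### rows now match the local tracer count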

Cheers

coopergeodynamics commented 1 year ago

Well, that fixed that error, but I'm still running into similar issues when I try to sample the swarm later. Assigning temperature values to the particle swarm directly doesn't quite get me what I need, as I really want to sample the temperature field with depth at a particular x-coordinate that advects with time. Regardless, I tried it that way as well, and I still get empty-array issues when I try to assign data from the swarm.

bknight1 commented 1 year ago

Sorry about that, we'll get there in the end! I've gone back to the original code to see if we could tweak it, and came up with the following:

### add this function in the solver loop
import h5py
def sampleTracers(outfile0, outfile1, timestep, n, variable=temperatureField):
    '''
    outfile0 = name of file for first profile  (also include the path if it differs from the script running location)
    outfile1 = name of file for second profile (also include the path if it differs from the script running location)
    timestep = timestep of model (is converted to string when specifying the filename to save the data)
    n        = number of particles to sample
    variable = name of variable to sample

    output:
        saves a np file and h5 file of the output data (can comment out/remove one)
    '''
    if len(tracerParticles.data[:,0]) > 0:
        # xpos1_temp = tracerParticles.data[0,0]
        # xpos2_temp = tracerParticles.data[1,0]

        ### create the x coordinate array, repeats the x coord n times
        xpos1_temp = np.repeat(tracerParticles.data[0,0], n)
        xpos2_temp = np.repeat(tracerParticles.data[1,0], n)

        ### create y coordinate array
        arrY_temp = np.linspace(0.,1.,n)

        ### evaluate global (which reduces value to root proc)

        ### sample first x coords
        arrT1_temp = variable.evaluate_global(np.vstack([xpos1_temp, arrY_temp]).T)
        ### sample second x coords
        arrT2_temp = variable.evaluate_global(np.vstack([xpos2_temp, arrY_temp]).T)

        ### save the data on root proc
        if uw.mpi.rank == 0:
            ### numpy version
            np.save(outfile0 + str(timestep), np.vstack([xpos1_temp, arrY_temp, arrT1_temp]).T) ### save the first temp profile coords + temperature data

            np.save(outfile1 + str(timestep), np.vstack([xpos2_temp, arrY_temp, arrT2_temp]).T) ### save the second temp profile coords + temperature data

            ### h5 version
            with h5py.File(outfile0+str(timestep)+'.h5', 'w') as hf:
                hf.create_dataset("data",  data=np.vstack([xpos1_temp, arrY_temp, arrT1_temp]).T) ### save the first temp profile coords + temperature data

            with h5py.File(outfile1+str(timestep)+'.h5', 'w') as hf:
                hf.create_dataset("data",  data=np.vstack([xpos2_temp, arrY_temp, arrT2_temp]).T) ### save the second temp profile coords + temperature data

coopergeodynamics commented 1 year ago

Hi again! Okay, so I've tried many variations of the above to get it to work, but to no avail. While limiting the call that evaluates the tracer swarm to when the data has values (the greater-than-zero condition) does eliminate the errors about empty arrays, nothing is ever written, because the second condition of rank==0 is never met (it's typically another processor where the tracer data has values). That's why I've tried various permutations of the above to see if I can find a magic sweet spot where values get shared globally (so the head processor can pick them up), but no go. Any other suggestions?
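To spell out the pattern that seems to be failing, as far as I understand it (a sketch of the relevant lines):

if len(tracerParticles.data[:,0]) > 0:   ### only True on the proc that owns the tracers
    ### evaluate_global is collective and reduces values to rank 0...
    arrT1_temp = variable.evaluate_global(np.vstack([xpos1_temp, arrY_temp]).T)
    if uw.mpi.rank == 0:                 ### ...but rank 0 usually doesn't own the tracers
        np.save(outfile0 + str(timestep), np.vstack([xpos1_temp, arrY_temp, arrT1_temp]).T)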

bknight1 commented 1 year ago

Sorry @drcoopa, I think the .evaluate_global needs to be done on all procs. Try this; I've added a few print statements so we can also check where the function hangs in future.

import h5py
### add this function in the solver loop
def sampleTracers(outfile0, outfile1, timestep, n, variable=temperatureField,):
    '''
    outfile0 = name of file for first profile  (also include the path if it differs from the script running location)
    outfile1 = name of file for second profile (also include the path if it differs from the script running location)
    timestep = timestep of model (is converted to string when specifying the filename to save the data)
    n        = number of particles to sample
    variable = name of variable to sample

    output:
        saves a np file and h5 file of the output data (can comment out/remove one)
    '''
    ### create variables on all procs
    x0 = None
    x1 = None

    try:
        if len([tracerParticles.data[0,0]]) > 0:
            ### get the proc
            proc = uw.mpi.rank
            ### get the x coord
            x0 = tracerParticles.data[0,0]
            ### bcast xcoord to all procs
            x0 = uw.mpi.comm.bcast(x0, root=proc)
    except IndexError:
        #### index error on procs that don't contain the coord
        pass

    print('finish getting x0 value')

    try:
        if len([tracerParticles.data[1,0]]) > 0:
            ### get the proc
            proc = uw.mpi.rank
            ### get the x coord
            x1 = tracerParticles.data[1,0]
            ### bcast to all procs
            x1 = uw.mpi.comm.bcast(x1, root=proc)

    except IndexError:
        #### index error on procs that don't contain the coord
        pass

    print('finish getting x1 value')

    ### create the x coordinate array, repeats the x coord n times
    xpos1_temp = np.repeat(x0, n)
    xpos2_temp = np.repeat(x1, n)

    ### create y coordinate array
    arrY_temp = np.linspace(0.,1.,n)

    ### evaluate global (which reduces value to root proc)

    ### sample first x coords
    arrT1_temp = variable.evaluate_global(np.vstack([xpos1_temp, arrY_temp]).T)
    print('finish evaluating arrT1_temp')
    ### sample second x coords
    arrT2_temp = variable.evaluate_global(np.vstack([xpos2_temp, arrY_temp]).T)
    print('finish evaluating arrT2_temp')

    ### save the data on root proc
    if uw.mpi.rank == 0:
        ### numpy version
        np.save(outfile0 + str(timestep), np.vstack([xpos1_temp, arrY_temp, arrT1_temp]).T) ### save the first temp profile coords + temperature data

        np.save(outfile1 + str(timestep), np.vstack([xpos2_temp, arrY_temp, arrT2_temp]).T) ### save the second temp profile coords + temperature data

        print('finish save to numpy file')

        ### h5 version
        with h5py.File(outfile0+str(timestep)+'.h5', 'w') as hf:
            hf.create_dataset("data",  data=np.vstack([xpos1_temp, arrY_temp, arrT1_temp]).T) ### save the first temp profile coords + temperature data

        with h5py.File(outfile1+str(timestep)+'.h5', 'w') as hf:
            hf.create_dataset("data",  data=np.vstack([xpos2_temp, arrY_temp, arrT2_temp]).T) ### save the second temp profile coords + temperature data

        print('finish save to h5 file')

I had to edit the code as I realised an IndexError would occur. This should now work.

coopergeodynamics commented 1 year ago

Unfortunately, that didn't work either. I had high hopes for the broadcast method, but it doesn't seem to actually broadcast the values, because x0 maintains a "None" value (I added that to the first print command), which then mucks up the evaluate.


coopergeodynamics commented 1 year ago

Or it could be that the conditional is never true, so the value is never broadcast to begin with...
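Maybe something like this would make the broadcast collective? (Just a sketch for a single tracer; the owner variable is my invention and I haven't tested it. Every proc has to call bcast with the same root, which the try/except version above can't guarantee.)

import mpi4py.MPI

### agree on which proc owns the tracer, then bcast from that proc,
### so every proc calls bcast with the same root
owner = uw.mpi.rank if tracerParticles.particleLocalCount > 0 else -1
owner = uw.mpi.comm.allreduce(owner, op=mpi4py.MPI.MAX)

x0 = tracerParticles.data[0,0] if uw.mpi.rank == owner else None
x0 = uw.mpi.comm.bcast(x0, root=owner)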


bknight1 commented 1 year ago

It's because of how the passive tracer data is stored on each proc. This looks long-winded, but it does work (I've attached a script so you can test it on multiple procs too). Essentially we need to get both of the swarm coordinates onto all processors (the globalPassiveSwarmCoords() function) and then do the evaluate_global call.

### this gets the coords from the passive swarm onto all procs
def globalPassiveSwarmCoords(swarm, bcast=True, rootProc=0):
    '''
    Distribute passive swarm coordinate data to all CPUs (bcast = True) or the rootProc, (bcast = False)

    Used for the analysis of coordinates of the swarm that may move between processors

    '''

    comm = uw.mpi.comm
    rank = uw.mpi.rank
    size = uw.mpi.size

    if len(swarm.data) > 0:
        x_local = np.ascontiguousarray(swarm.data[:,0].copy())
        y_local = np.ascontiguousarray(swarm.data[:,1].copy())
        if swarm.data.shape[1] == 3:
            z_local = np.ascontiguousarray(swarm.data[:,2].copy())
        else:
            z_local = np.zeros_like(swarm.data[:,0])*np.nan

    else:
        x_local = np.array([np.nan], dtype='float64')
        y_local = np.array([np.nan], dtype='float64')
        z_local = np.array([np.nan], dtype='float64')

    ### Collect local array sizes using the high-level mpi4py gather
    sendcounts = np.array(comm.gather(len(x_local), root=rootProc))

    if rank == rootProc:
        x_global = np.zeros((sum(sendcounts)), dtype='float64')
        y_global = np.zeros((sum(sendcounts)), dtype='float64')
        z_global = np.zeros((sum(sendcounts)), dtype='float64')
    else:
        x_global = None
        y_global = None
        z_global = None

    comm.barrier()

    ## gather x values, can't do them together
    comm.Gatherv(sendbuf=x_local, recvbuf=(x_global, sendcounts), root=rootProc)
    ## gather y values
    comm.Gatherv(sendbuf=y_local, recvbuf=(y_global, sendcounts), root=rootProc)

    ## gather z values
    comm.Gatherv(sendbuf=z_local, recvbuf=(z_global, sendcounts), root=rootProc)

    comm.barrier()

    def sortCoords():
        ## Put back into combined array
        Coords = np.zeros(shape=(len(x_global),3))*np.nan
        Coords[:,0] = x_global
        Coords[:,1] = y_global
        Coords[:,2] = z_global

        comm.barrier()

        ### remove rows with NaN
        Coords = Coords[~np.isnan(Coords[:,0])]
        ### remove cols with NaN
        Coords = Coords[:, ~np.isnan(Coords).all(axis=0)]

        comm.barrier()

        return Coords

    if bcast == True:
        #### make swarm coords available on all processors
        x_global = comm.bcast(x_global, root=rootProc)
        y_global = comm.bcast(y_global, root=rootProc)
        z_global = comm.bcast(z_global, root=rootProc)

        comm.barrier()

        Coords = sortCoords()

        comm.barrier()

    else:
        ### swarm coords only available on root processor
        if rank == rootProc:
            Coords = sortCoords()
        else:
            Coords = None ### avoid a NameError on the other procs

        comm.barrier()

    return Coords

### add this function in the solver loop
import h5py
def sampleTracers(outfile0, outfile1, timestep, n, variable=temperatureField,):
    '''
    outfile0 = name of file for first profile  (also include the path if it differs from the script running location)
    outfile1 = name of file for second profile (also include the path if it differs from the script running location)
    timestep = timestep of model (is converted to string when specifying the filename to save the data)
    n        = number of particles to sample
    variable = name of variable to sample

    output:
        saves a np file and h5 file of the output data (can comment out/remove one)
    '''
    # if len(tracerParticles.data[:,0]) > 0:
    # xpos1_temp = tracerParticles.data[0,0]
    # xpos2_temp = tracerParticles.data[1,0]
    coords = globalPassiveSwarmCoords(tracerParticles)

    xpos1_temp = np.repeat(coords[0,0], n)
    xpos2_temp = np.repeat(coords[1,0], n)

    ### create y coordinate array
    arrY_temp = np.linspace(0.,1.,n)

    ### evaluate global (which reduces value to root proc)

    ### sample first x coords
    arrT1_temp = variable.evaluate_global(np.vstack([xpos1_temp, arrY_temp]).T)
    ### sample second x coords
    arrT2_temp = variable.evaluate_global(np.vstack([xpos2_temp, arrY_temp]).T)

    ### save the data on root proc
    if uw.mpi.rank == 0:
        ### numpy version
        np.save(outfile0 + str(timestep), np.vstack([xpos1_temp, arrY_temp, arrT1_temp[:,0]]).T) ### save the first temp profile coords + temperature data

        np.save(outfile1 + str(timestep), np.vstack([xpos2_temp, arrY_temp, arrT2_temp[:,0]]).T) ### save the second temp profile coords + temperature data

        ### h5 version
        with h5py.File(outfile0+str(timestep)+'.h5', 'w') as hf:
            hf.create_dataset("data",  data=np.vstack([xpos1_temp, arrY_temp, arrT1_temp[:,0]]).T) ### save the first temp profile coords + temperature data

        with h5py.File(outfile1+str(timestep)+'.h5', 'w') as hf:
            hf.create_dataset("data",  data=np.vstack([xpos2_temp, arrY_temp, arrT2_temp[:,0]]).T) ### save the second temp profile coords + temperature data

sampleExample.ipynb.txt

bknight1 commented 1 year ago

The last way was a little complicated; I've simplified it in the notebook attached.

sampleExample-v2.ipynb.txt

coopergeodynamics commented 1 year ago

Thanks, Ben - I'll give it a try this week and report back.


coopergeodynamics commented 1 year ago

Guess what????? IT WORKS! The "uw.mpi.comm.allreduce(x_coord_tracer, op=mpi4py.MPI.SUM)" bit worked like a charm! I'm so excited. Thank you, Ben, for all of your help and your persistence. I really appreciate it!
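For anyone who hits this later, the trick looks roughly like this (my sketch around the line quoted above; the guard is my own): each proc contributes the tracer's x-coordinate if it owns the particle and 0.0 otherwise, and the SUM allreduce leaves the true value on every proc.

import mpi4py.MPI

### each proc contributes the x coord if it owns the tracer, else 0.0
x_coord_tracer = 0.0
if tracerParticles.particleLocalCount > 0:
    x_coord_tracer = tracerParticles.data[0,0]

### exactly one proc owns the particle, so summing across procs leaves the
### true x coordinate on every proc, ready for the collective evaluate_global
x_coord_tracer = uw.mpi.comm.allreduce(x_coord_tracer, op=mpi4py.MPI.SUM)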


coopergeodynamics commented 1 year ago

Also... really clever and simple approach to fix it. Kudos there.
