computationalmodelling / fidimag

Finite DIfference microMAGnetic code, based on Python, Cython and C
http://computationalmodelling.github.io/fidimag/

set the number of cores used in the simulation. #157

Closed ghost closed 3 years ago

ghost commented 3 years ago

Hi,

How can I set the number of cores that Fidimag uses to run the simulation?

Thanks

davidcortesortuno commented 3 years ago

Hi, in the terminal where you run the simulations, or before starting a Jupyter notebook, you can set the OpenMP environment variable that controls the number of threads by running

export OMP_NUM_THREADS=5

for example.
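If you would rather do this from inside a script, the same environment variable can be set from Python; a minimal sketch (the key assumption is that OpenMP reads the variable when the threaded library is first loaded, so it must be set before that import):

```python
import os

# Set before importing any module that initialises OpenMP (e.g. fidimag);
# otherwise the runtime may already have chosen its thread count.
os.environ["OMP_NUM_THREADS"] = "5"

# import fidimag  # imports that use OpenMP go after the assignment
```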

ghost commented 3 years ago

Thank you. Is it normal that creating the geometry takes Fidimag about 1 hour for a system of around 1e6 cells? I have tried loading the geometry through the Ms_profile function and through an external file. In both cases, it takes a similar time.

rpep commented 3 years ago

It will depend somewhat on your machine, but generally no. Can you paste the code? I should say that setting Ms etc. is not done in parallel - only the simulation execution is. If you are adding the demagnetising field then this does add some time to the setup, but 1 hour sounds far too high.

davidcortesortuno commented 3 years ago

Maybe you can pickle the Mesh object and save it (using the dill library, which works better than the standard pickle library):

import dill
from fidimag.common import CuboidMesh

mesh = CuboidMesh(dx=1, dy=1, dz=1, nx=20, ny=20, nz=20)

with open('my_mesh.pkl', 'wb') as F:
    dill.dump(mesh, F)
...

Then you should be able to load this mesh (might take some time):

with open('my_mesh.pkl', 'rb') as F:
    mesh = dill.load(F)

sim = Sim(mesh)
...

But, as Ryan said, your mesh seems huge, so the demag (if you are using it) might take a long time to build the matrices.

rpep commented 3 years ago

@davidcortesortuno I don't think it's normal - 10e6 is equivalent to 1000 x 1000 x 10, which is certainly quite large, but I'm surprised it's over 1 hour.

ghost commented 3 years ago

sorry, it was 1e6 cells. I think it takes a long time on this instruction:

self.mesh.neighbours[self.mesh.neighbours == i] = -1

I have tried to understand what it does by looking at the sim.py and cuboid_mesh.py files; I think it replaces every element of self.mesh.neighbours that is equal to i with -1.

rpep commented 3 years ago

What exactly are you trying to do? Simulate only a disk?

From what I can see, something like this:

import math

from fidimag.common import CuboidMesh

Ms = 3.78e5
r0 = 30    # tube diameter (nm)
R = 80     # major radius of the ring (nm) - set this to your value

mesh = CuboidMesh(nx=200, ny=200, nz=40,
                  dx=1, dy=1, dz=1,
                  x0=-r0/2, y0=-r0/2, z0=-r0/2,
                  unit_length=1e-9)

def Ms_function(pos):
    x, y, z = pos
    # squared distance from the centreline of the torus
    r2 = (R - math.sqrt(x**2 + y**2))**2 + z*z
    if r2 < r0*r0/4:   # inside a tube of radius r0/2
        return Ms
    return 0

should be sufficient.

The neighbours argument is not really meant for use on the micromagnetic side. It is used only to make or stop certain cells from interacting with each other, and is primarily for atomistic simulations, where you can have higher-order exchange with non-neighbouring cells. We share CuboidMesh between the micromagnetic and atomistic codes, which is why you can still see this argument option, but I wouldn't advise using it here - just set Ms to zero, as it won't decrease the calculation time if demag is used anyway.
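If calling a Python function once per cell is the slow part of the setup, one alternative is to build the whole Ms array with NumPy and assign it in one go via sim.Ms = array (as mentioned later in the thread). This is only a sketch: the geometry values (R, r0, mesh size) are illustrative, and it assumes cells are flattened in the order id = k*Nx*Ny + j*Nx + i with coordinates centred on the origin.

```python
import numpy as np

# Illustrative values - replace with your own geometry parameters.
nx, ny, nz = 200, 200, 40      # number of cells
dx = dy = dz = 1.0             # cell size (nm)
R, r0 = 80.0, 30.0             # major radius and tube diameter (assumed)
Ms = 3.78e5

# Cell-centre coordinates, centred on the origin.
x = (np.arange(nx) + 0.5) * dx - nx * dx / 2
y = (np.arange(ny) + 0.5) * dy - ny * dy / 2
z = (np.arange(nz) + 0.5) * dz - nz * dz / 2

# 'ij' indexing on (z, y, x) so C-order flattening gives id = k*nx*ny + j*nx + i
Z, Y, X = np.meshgrid(z, y, x, indexing='ij')

# Squared distance from the torus centreline; keep cells inside the tube.
r2 = (R - np.sqrt(X**2 + Y**2))**2 + Z**2
Ms_array = np.where(r2 < (r0 / 2)**2, Ms, 0.0).ravel()

# sim.Ms = Ms_array   # one assignment instead of one Python call per cell
```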

ghost commented 3 years ago

Hi, I am trying to build a semi-torus (with a specific thickness). In the definition that you showed, the angular and inner-radius constraints are missing. When redefining the Ms_function, Fidimag still takes the same time to load the simulation data.

Perhaps it would be good for Fidimag to disable the neighbours argument when you want to do a micromagnetic simulation, since the loops are written in Python and not in C. I have redefined that line and now Fidimag takes less than 1 minute to load the simulation. I think everything works fine.
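For reference, the kind of single-pass replacement described here can be sketched with NumPy's isin: instead of masking the table once per cell index i (which is O(N^2) over 1e6 cells), all neighbour entries pointing at excluded cells are replaced at once. The array names here are toy stand-ins, not Fidimag's API.

```python
import numpy as np

# Toy stand-ins: a (n_cells, 6) neighbour table and the ids of the
# cells that were given Ms = 0 (illustrative, not Fidimag's API).
neighbours = np.array([[1, 2, -1, 3, 4, 5],
                       [0, 2, 3, -1, 4, 5]])
zero_cells = np.array([2, 4])

# One vectorised pass instead of one masking operation per cell id.
neighbours[np.isin(neighbours, zero_cells)] = -1
```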

Thanks so much

rpep commented 3 years ago

In general, this is designed for usability rather than performance, since most simulations take a long time relative to the initialisation cost (which is usually very low - I have never seen this issue before). It is possible, though, to set Ms (and other parameters) by passing an array of the appropriate length, like:

sim.Ms = array

You can also write a faster setter function using Cython similar to the example here: https://github.com/computationalmodelling/fidimag/tree/master/fidimag/user

Alternatively you can use Numba to do the same sort of thing from within Python.

In terms of the way we order the array, if your mesh has Nx Ny Nz cells, you can get the array index for a particular cell like:

# for a scalar parameter like Ms, A
id = k*Nx*Ny + j*Nx + i

# for vector parameters like H, Ku
id_x = 3 * id
id_y = 3 * id + 1
id_z = 3 * id + 2
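As a concrete illustration of that ordering, with a small made-up mesh (the sizes and indices here are arbitrary examples):

```python
import numpy as np

Nx, Ny, Nz = 4, 3, 2               # small illustrative mesh
i, j, k = 1, 2, 1                  # cell at x-index 1, y-index 2, z-index 1

# Scalar parameters (Ms, A): one entry per cell.
cell_id = k * Nx * Ny + j * Nx + i     # 1*12 + 2*4 + 1 = 21

# Vector parameters (H, Ku): three entries per cell.
id_x = 3 * cell_id
id_y = 3 * cell_id + 1
id_z = 3 * cell_id + 2

# Cross-check against NumPy's C-order flattening of a (Nz, Ny, Nx) grid.
grid = np.arange(Nx * Ny * Nz).reshape(Nz, Ny, Nx)
assert grid[k, j, i] == cell_id
```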

There is a refactoring going on which moves more code into C/C++, but it's still a big work in progress.