ansys / pymotorcad

https://motorcad.docs.pyansys.com

Parallel instances of MotorCAD and runtime reduction #234

Open AndyYoon opened 11 months ago

AndyYoon commented 11 months ago

🐞 Description of the bug

Background: I am attempting to generate hundreds of random designs in Motor-CAD and evaluate each one. I'm using Python's 'multiprocessing' package to vary the number of worker processes.

Issue: I do see roughly a 2x reduction in evaluation time with num_processes = 2, but increasing num_processes further gives no added benefit (see the timing-sweep sketch after the script below). I was hoping to use all, or most, of the cores on my machine (e.g. 16) to cut the evaluation time by a factor of up to 16. Are the parallel instances sharing resources and essentially throttling each other?

📝 Steps to reproduce

RunMcad is the method that contains the PyMotorCAD run script.

```python
import time
from multiprocessing import Pool

import numpy as np

# BPM_toothed_surro, generate_latin_hypercube, scale_parameters and Folder
# come from my own setup (not shown here).

def loop_item(par_input):
    motor = BPM_toothed_surro(
        Jcu=par_input[0],
        Motor_Length=par_input[1],
        Ratio_Bore=par_input[2],
        Ratio_SlotDepth_ParallelTooth=par_input[3],
        Ratio_SlotOpening_ParallelTooth=par_input[4],
        Magnet_Thickness=par_input[5],
    )
    motor.RunMcad(
        i=par_input[6],
        workingFolder=Folder,
        filename="trial" + str(int(par_input[6])).zfill(4) + ".mot",
    )


if __name__ == "__main__":
    start_time = time.time()

    # Define the number of samples and dimensions
    num_samples = 160
    num_dimensions = 6

    # Define parameter ranges for scaling
    parameter_ranges = [(5, 20), (40, 100), (0.6, 0.75), (0.65, 0.85), (0.35, 0.55), (1, 3.5)]

    # Generate a Latin hypercube sample and scale the parameters
    latin_hypercube = generate_latin_hypercube(num_samples, num_dimensions)
    scaled_parameters = scale_parameters(latin_hypercube, parameter_ranges)

    # Append an index column so each model can be tracked
    index = np.linspace(1, num_samples, num_samples, dtype=int)
    par_input = np.append(scaled_parameters.T, [index], axis=0)
    par_input = par_input.T

    # Evaluate the designs across a pool of worker processes
    num_processes = 8
    with Pool(num_processes) as pool:
        pool.map(loop_item, par_input)

    runtime = time.time() - start_time
    print(runtime)
```
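
For reference, here is a sketch of the kind of sweep used to compare different worker counts. It reuses loop_item and par_input from the script above and would replace the Pool block inside the __main__ guard; the process counts are just examples.

```python
import time
from multiprocessing import Pool

# Drop-in replacement for the Pool block inside the __main__ guard above:
# time the same workload at several pool sizes to see where scaling flattens out.
# loop_item and par_input come from the script above; the counts are illustrative.
for num_processes in (1, 2, 4, 8, 16):
    start = time.time()
    with Pool(num_processes) as pool:
        pool.map(loop_item, par_input)
    print(f"{num_processes} processes: {time.time() - start:.1f} s")
```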

💻 Which operating system are you using?

Windows

📀 Which ANSYS version are you using?

MotorCAD 2023_2_2

🐍 Which Python version are you using?

3.10

📦 Installed packages

ansys-motorcad-core==0.3.0
jgsdavies commented 10 months ago

Hi @AndyYoon. Parallel Motor-CAD instances will have diminishing returns at some point, but two seems low. Just a few questions:

AndyYoon commented 10 months ago

Hi @jgsdavies!

  • I'm only building an EM/Thermal model, then calling "do_magnetic_thermal_calculation()".
  • I do create a new file for every run, since I haven't figured out a way to keep the Motor-CAD file open between loop iterations. Even without calling "quit()" at the end of an iteration, Motor-CAD seems to close once each iteration completes.
  • Motor type is BPM.

jgsdavies commented 9 months ago


Creating a new file for every run is best practice to avoid conflicts.

Restarting Motor-CAD between runs is also currently best practice. If you want to try reusing the same instances, here's some info:

The Motor-CAD instance belongs to the Python object, i.e. when the Python object goes out of scope and is garbage-collected, the Motor-CAD instance closes. You can get around this by making the Python object global, or by using an experimental option that grabs any free Motor-CAD instance and launches a new one if none can be found:

```python
import ansys.motorcad.core as pymotorcad

mc = pymotorcad.MotorCAD(reuse_parallel_instances=True)

# Free the instance so another thread can reuse it
mc.set_free()
```
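
For completeness, here's a minimal sketch of how this could slot into the worker function from the script above. build_and_solve is a hypothetical placeholder for loading the file, applying the parameters and running the calculation; reuse_parallel_instances and set_free() are the experimental options shown above.

```python
import ansys.motorcad.core as pymotorcad

def loop_item(par_input):
    # Connect to any free Motor-CAD instance, or launch a new one if none is found
    mc = pymotorcad.MotorCAD(reuse_parallel_instances=True)
    try:
        # build_and_solve is a hypothetical placeholder for loading the .mot file,
        # applying the parameters in par_input and running
        # do_magnetic_thermal_calculation()
        build_and_solve(mc, par_input)
    finally:
        # Release the instance so another worker process can pick it up
        mc.set_free()
```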

Motor-CAD is not optimised for parallel computing; improving this is something we are currently putting a lot of development effort into. Two instances does seem low, but it's tricky to diagnose this without the specific machine, file, and script; I can't see anything obviously wrong with your method.

Definitely give this a go with 24R1 when it's released, as it includes some performance improvements. We're also hoping to provide a cloud-based solution in a future release.