libAtoms / workflow

python workflow toolkit
GNU General Public License v2.0

autoparallelization for GPU job. #278

Open jungsdao opened 9 months ago

jungsdao commented 9 months ago

I have a question regarding autoparallelization with GPUs. I wanted to run two minima hopping jobs in parallel, each using its own GPU. Is that possible with the autoparallelize function in wfl? I'm not sure it's using the GPUs properly, and it doesn't seem to be as fast as I expected from a GPU job.

bernstei commented 9 months ago

Note that I'm assuming you're talking about parallelization on a single node with python subprocesses. If that's not true, you should clarify.

wfl autoparallelization currently doesn't know anything about GPUs. Single-node parallelization just uses Python's multiprocessing pool to run separate python subprocesses and divides the work among them. I agree that dealing nicely with multiple GPUs sounds useful, but I'm not sure exactly how to do it. If you were to run multiple python processes on a multi-GPU node manually, how would you make sure they're each using a different GPU?