CederGroupHub / chgnet

Pretrained universal neural network potential for charge-informed atomistic modeling https://chgnet.lbl.gov
https://doi.org/10.1038/s42256-023-00716-3

Change default `CHGNet.load(check_cuda_mem: bool)` to `False` #164

Closed · janosh closed this 1 month ago

janosh commented 1 month ago

There's a problem with `cuda_devices_sorted_by_free_mem` on Slurm clusters:

https://github.com/CederGroupHub/chgnet/blob/81439f2731e5077dcca8942f955e3795f1344b2c/chgnet/utils/common_utils.py#L36-L50

It returns whichever GPU on the node has the most free memory, so the model tries to use that device even if the job was allocated a different GPU. This results in a cryptic CUDA error:

    return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
RuntimeError: CUDA error: invalid device ordinal

Process finished with exit code 1
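
To illustrate the failure mode, here is a minimal hypothetical sketch (not the actual `chgnet` utility): node-wide tools like `nvidia-smi`/pynvml report free memory for every physical GPU, but a Slurm job only sees the devices listed in `CUDA_VISIBLE_DEVICES`, so the "best" physical index may simply not exist inside the process.

```python
# Hypothetical sketch of the failure mode, not chgnet's actual code.

def gpu_with_most_free_mem(free_mem_mb: list[int]) -> int:
    """Pick the physical GPU index with the most free memory (node-wide view)."""
    return max(range(len(free_mem_mb)), key=free_mem_mb.__getitem__)

node_free_mem = [2_000, 1_000, 500, 16_000]   # 4 physical GPUs on the node
best = gpu_with_most_free_mem(node_free_mem)  # -> 3 (emptiest GPU node-wide)

visible = "1"                        # Slurm set CUDA_VISIBLE_DEVICES="1" for this job
n_visible = len(visible.split(","))  # torch.cuda.device_count() == 1 in this process

# Moving a tensor with .to(f"cuda:{best}") now fails, because only cuda:0
# exists inside the job -> "RuntimeError: CUDA error: invalid device ordinal"
assert best >= n_visible
```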

Given that CHGNet is often expected to run on queued HPC infrastructure, where this error can occur and is hard to debug from the message alone, @BowenD-UCB and I agreed to change the default for `check_cuda_mem` from `True` to `False`.
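
For users, the effect would look roughly like this (a sketch assuming the `check_cuda_mem` keyword described in this issue; the GPU-probing behavior becomes opt-in rather than the default):

```python
from chgnet.model import CHGNet

# With the proposed default, loading no longer probes node-wide free GPU memory,
# so Slurm-allocated devices are respected.
model = CHGNet.load()

# Opt back into picking the emptiest GPU, e.g. on a shared single workstation:
# model = CHGNet.load(check_cuda_mem=True)
```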