lab-cosmo / librascal

A scalable and versatile library to generate representations for atomic-scale learning
https://lab-cosmo.github.io/librascal/
GNU Lesser General Public License v2.1

`pip install` fails with Python versions 3.11 and 3.12 #431

Closed: tsitsvero closed this issue 10 months ago

tsitsvero commented 10 months ago

pip fails to build the wheel in environments running the latest Python versions, 3.11 and 3.12.

Command to build/install: `pip install git+https://github.com/lab-cosmo/librascal`

| Python | Build |
| ------ | ----- |
| 3.8    | OK    |
| 3.9    | OK    |
| 3.10   | OK    |
| 3.11   | fails |
| 3.12   | fails |

error.log

Environments were created by miniforge3.

Luthaf commented 10 months ago

Hello! This is a known issue due to the code using an old version of pybind11.

Most of the development effort in the lab has now moved to a different approach, and librascal is in minimal maintenance mode. If you really need to use it, I would suggest you stay on Python 3.10.

What are you using librascal for?

Depending on your workflow, we might also be able to help you write something equivalent using librascal's replacement (we now have a more modular approach, with the code split between https://github.com/lab-cosmo/metatensor, https://github.com/Luthaf/rascaline, and https://github.com/lab-cosmo/torch_spex). The main feature still missing is an end-to-end GAP model training loop; most of the other capabilities of librascal should be available.

tsitsvero commented 10 months ago

Hello!

> I would suggest you stay on Python 3.10.

Okay, thanks!

> What are you using librascal for? Depending on your workflow, we might also be able to help you write something equivalent using librascal's replacement (we now have a more modular approach, with the code split between https://github.com/lab-cosmo/metatensor, https://github.com/Luthaf/rascaline, and https://github.com/lab-cosmo/torch_spex). The main feature still missing is an end-to-end GAP model training loop; most of the other capabilities of librascal should be available.

Thanks, very interesting!

I'm actually working on the training loop for GPs on GPUs: https://github.com/chem-gp/fande

It's in an early stage of development; the project relies on GPyTorch (for fast GP training + NN integration) + Pyro (to make fancy priors for GPs) + Lightning (for scaling to GPU clusters) + wandb (for tracking).
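
To give an idea, the GP part is basically standard GPyTorch exact-GP code pushed to the GPU. Here is a minimal sketch (not fande's actual model; the kernels, priors, and Lightning wrapping there are more involved):

```python
import torch
import gpytorch

class SimpleGP(gpytorch.models.ExactGP):
    # minimal exact GP; fande wraps something like this in Lightning for multi-GPU training
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

device = "cuda" if torch.cuda.is_available() else "cpu"
train_x = torch.rand(100, 64, device=device)   # e.g. SOAP features
train_y = torch.rand(100, device=device)       # e.g. energies

likelihood = gpytorch.likelihoods.GaussianLikelihood().to(device)
model = SimpleGP(train_x, train_y, likelihood).to(device)

mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)

model.train()
likelihood.train()
for _ in range(50):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()
```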

As an application, for now I have written an ASE-based interface to i-PI, and some time later I plan to write a module for chemoton.

My current issue with rascal was that it calculates the invariants serially. For a molecular crystal with a few thousand atoms, the SOAP calculation can take several seconds, which is slow, and this is the bottleneck for MD... :( Have you addressed this issue?

Ideally it would be great to have the invariants computed directly on the GPU (or even on several GPUs in parallel), since right now I have to move tensors CPU -> GPU -> CPU.
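
Schematically, every MD step currently ends up doing something like this (a simplified sketch; `compute_soap` is just a placeholder for the serial librascal call, and `model` for the GP living on the GPU):

```python
import torch

def compute_soap(frame):
    """Placeholder for the serial, CPU-only librascal SOAP calculation."""
    ...

def predict_forces(model, frame, device="cuda"):
    # 1) invariants are computed serially on the CPU -- the bottleneck for large crystals
    features_cpu = compute_soap(frame)
    # 2) move them to the GPU for the GP model
    features = torch.as_tensor(features_cpu, device=device)
    # 3) predict, then move the result back to the CPU for the MD integrator
    with torch.no_grad():
        forces = model(features)
    return forces.cpu().numpy()
```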

I wonder if this is already implemented in https://github.com/lab-cosmo/torch_spex?

Any other suggestions are greatly appreciated!

Luthaf commented 10 months ago

> I'm actually working on the training loop for GPs on GPUs: https://github.com/chem-gp/fande

That looks pretty cool! FYI, we want to bring back GP models with the new code as well, using our custom sparse format (metatensor) to be able to train on forces while using a minimal amount of memory. This is still very early, since we are initially more focused on neural network models.

> My current issue with rascal was that it calculates the invariants serially. For a molecular crystal with a few thousand atoms, the SOAP calculation can take several seconds, which is slow, and this is the bottleneck for MD... :( Have you addressed this issue?

> Ideally it would be great to have the invariants computed directly on the GPU (or even on several GPUs in parallel), since right now I have to move tensors CPU -> GPU -> CPU.

> I wonder if this is already implemented in https://github.com/lab-cosmo/torch_spex?

So rascaline implements the SOAP invariants (SOAP power spectrum) calculation in parallel on CPU. Last time I ran some benchmarks, the scaling was pretty good, especially during training: if you are computing SOAP for multiple structures, we can parallelize over the structures.
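
For reference, computing the power spectrum for a set of structures looks roughly like this (the hyperparameter values below are only placeholders, pick whatever matches your librascal setup):

```python
import ase.io
from rascaline import SoapPowerSpectrum

# illustrative hyperparameters, not a recommendation
hypers = {
    "cutoff": 5.0,
    "max_radial": 8,
    "max_angular": 6,
    "atomic_gaussian_width": 0.3,
    "center_atom_weight": 1.0,
    "radial_basis": {"Gto": {}},
    "cutoff_function": {"ShiftedCosine": {"width": 0.5}},
}

calculator = SoapPowerSpectrum(**hypers)
frames = ase.io.read("structures.xyz", ":")  # any list of ASE Atoms works

# runs multi-threaded on CPU, parallelizing over structures
descriptor = calculator.compute(frames)
```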

For pure GPU calculation, torch_spex would be your current best bet. It does implement the first step of the SOAP invariants calculation (the spherical expansion), but I'm not sure whether an implementation of the invariants is already provided. No idea either about calculations on multiple GPUs. @frostedoyster knows more about this!

> As an application, for now I have written an ASE-based interface to i-PI, and some time later I plan to write a module for chemoton.

We are currently working on a common API to interface simulation engines with torch-based ML models. The idea is that model developers export their models using this interface and automatically get access to all the corresponding MD engines, while MD engine developers implement it once and get access to all the ML models.

It is part of metatensor, documented here: https://lab-cosmo.github.io/metatensor/latest/atomistic/index.html. There are also a couple of tutorials being worked on here: https://github.com/lab-cosmo/metatensor/pull/431. We already have an ASE calculator based on this API and a prototype LAMMPS integration, and we plan to add support for more MD engines: i-PI, OpenMM, GROMACS, … I'd be happy to have an interface to chemoton as well!
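
On the user side, running a simulation through ASE with a model exported via this interface looks roughly like this (a sketch only; the exact import path may still change while this is being developed, see the docs linked above):

```python
import ase.io
from metatensor.torch.atomistic.ase_calculator import MetatensorCalculator

# "exported-model.pt" is a hypothetical model saved through the metatensor atomistic API
atoms = ase.io.read("structure.xyz")
atoms.calc = MetatensorCalculator("exported-model.pt")

energy = atoms.get_potential_energy()
forces = atoms.get_forces()
```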

If any of this is interesting for you, please send me an email or open an issue for further discussion on metatensor's repository!

tsitsvero commented 10 months ago

Thanks! It makes sense for me then to slowly start migrating to rascaline for my next projects (it looks mature enough, and I tried some tests), to take a look at the new torch_spex, and to check whether a tighter integration with metatensor makes sense.

> For pure GPU calculation, torch_spex would be your current best bet. It does implement the first step of the SOAP invariants calculation (the spherical expansion), but I'm not sure whether an implementation of the invariants is already provided. No idea either about calculations on multiple GPUs. @frostedoyster knows more about this!

Okay, got it! I found these examples for now: https://github.com/lab-cosmo/torch_spex/blob/master/examples/power_spectrum.py and https://github.com/lab-cosmo/torch_spex/blob/master/examples/ps_model.py, so there's a place to start digging.

> We are currently working on a common API to interface simulation engines with torch-based ML models. The idea is that model developers export their models using this interface and automatically get access to all the corresponding MD engines, while MD engine developers implement it once and get access to all the ML models.

> It is part of metatensor, documented here: https://lab-cosmo.github.io/metatensor/latest/atomistic/index.html. There are also a couple of tutorials being worked on here: https://github.com/lab-cosmo/metatensor/pull/431. We already have an ASE calculator based on this API and a prototype LAMMPS integration, and we plan to add support for more MD engines: i-PI, OpenMM, GROMACS, … I'd be happy to have an interface to chemoton as well!

> If any of this is interesting for you, please send me an email or open an issue for further discussion on metatensor's repository!

Sure! If I move more towards chemical reactions, I'll be more than glad to contribute; I'll open an issue or email you.

I'll close this issue here and will post questions in the relevant repositories.

Again, thanks for the very relevant directions!