michaelmacisaac / Questions


Kliff descriptors #1

Open michaelmacisaac opened 2 years ago

michaelmacisaac commented 2 years ago

@ipcamit Good morning, I am reaching out because I have been using the Kim Kliff package, and when I asked whether additional descriptors were going to be added, they mentioned you were working on adding a SOAP descriptor. I am wondering if you are still working on adding increased descriptor functionality?

ipcamit commented 2 years ago

Yes. We have just finished the first version of the KIM driver for ML models, which will take any general TorchScript model (with certain inputs, outputs, and structure) and run it directly in LAMMPS, ASE, etc. The next to-do item is to add differentiable descriptors. A proof of concept is already there: https://github.com/ipcamit/colabfit-descriptor-library, but it needs a lot of refactoring and features. The current timeline is that this should be ready by the end of October (fingers crossed!). Currently planned descriptors are 1) symmetry functions (already there), 2) bispectrum, 3) SOAP, 4) ACE.

michaelmacisaac commented 2 years ago

Great! Thanks for the update! Do you view this as an alternative to Kim Kliff or as added functionality? When you mention "certain inputs and outputs", do the inputs have to be related to a descriptor within your package? When developing models, would one need to use the Kim Kliff architecture, or could they build an NN using more standard PyTorch practices? Final question: when I have developed Keras models in the past using the SOAP descriptor, my model training is much faster, whereas with Kim Kliff my model training is fairly slow. With Keras my epoch train time is less than a second, whereas with Kim Kliff the train time is upwards of 7-8 seconds. Have you encountered this?

ipcamit commented 2 years ago

Do you view this as an alternative to Kim Kliff or as added functionality?

What do you mean by KIM Kliff? Currently KLIFF works as follows: 1) you can load KIM-based models and retrain their coefficients; 2) you can use simple PyTorch models, which are ported to KIM using Eigen. In the new version of KLIFF (not sure when it will be integrated; at present it is a KLIFF fork under heavy development), mode 2) above will become a more full-fledged PyTorch model interface that can take any general TorchScript model. This helps ensure more flexibility. Does that answer your question?

When you mention "certain inputs and outputs" do the inputs have to be related to a descriptor within your package?

Yes, and more. Although KLIFF will impose no restrictions on the models to be trained, if you want to run the trained models using the KIM API directly, then the model needs to accept inputs and produce outputs in a certain format (e.g., the first argument being the number of atoms, the second being coordinates, etc.). This ensures compatibility with the upcoming model driver. Otherwise, for use in LAMMPS etc., you would need to write your own model driver from scratch.
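For illustration only, here is a minimal sketch of what such a fixed signature could look like in practice. Everything here is hypothetical (class name, argument order, and the placeholder featurization); the real interface is defined by the model driver itself:

import torch
import torch.nn as nn

# Hypothetical sketch of a model with a driver-style fixed signature.
# The actual argument list is defined by the KIM model driver, not by this example.
class KIMStyleModel(nn.Module):
    def __init__(self, net: nn.Module):
        super().__init__()
        self.net = net

    def forward(self, n_atoms: int, coords: torch.Tensor) -> torch.Tensor:
        # First argument: number of atoms; second: coordinates (n_atoms x 3).
        # A real model would compute descriptors here; we just reshape as a placeholder.
        features = coords.reshape(n_atoms, 3)
        return self.net(features).sum()  # total energy of the configuration

# Script and save, so a C++ driver can load the model without a Python runtime.
model = torch.jit.script(KIMStyleModel(nn.Linear(3, 1)))
model.save("kim_style_model.pt")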

When developing models, would one need to use the Kim Kliff architecture, or could they build an NN using more standard PyTorch practices?

You would use more standard PyTorch practices. (But in the long run I feel it would be possible to use both, as KLIFF supports KIM models out of the box, and eventually every TorchScript potential will be a KIM model.) The only issue would be if you want KIM-compatible models; then your model needs certain modifications, for which KLIFF will provide tools. E.g., most graph NNs use the minimum image convention for periodic boundaries, while for KIM you need the cutoff distance / influence distance framework, so your graph convolution needs slight modifications.

Final question: when I have developed Keras models in the past using the SOAP descriptor, my model training is much faster, whereas with Kim Kliff my model training is fairly slow. With Keras my epoch train time is less than a second, whereas with Kim Kliff the train time is upwards of 7-8 seconds. Have you encountered this?

Using Dscribe? Or have you written your own SOAP code in Keras/TF? Our experience is more mixed: when we implemented the Stillinger-Weber potential in PyTorch and TensorFlow, it was slower in TF. So it is more of an "it depends" situation. Benchmark

But I am not sure why it would be an order of magnitude slower for KLIFF. Usually for us, both TF and PT come quite close, with TF having an advantage in linear algebra tasks due to XLA, I think. You can try the PyTorch benchmarking tools to see what's causing the delay.
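As a starting point, a minimal sketch using torch.utils.benchmark to time just the forward pass, isolated from descriptor generation and data loading (the stand-in network and sizes are arbitrary placeholders):

import torch
import torch.nn as nn
import torch.utils.benchmark as benchmark

# Stand-ins for your fingerprints and network; shapes are arbitrary.
x = torch.randn(1024, 51)
net = nn.Sequential(nn.Linear(51, 30), nn.Tanh(), nn.Linear(30, 1))

# Time only the forward pass; if this is fast, the bottleneck is likely
# elsewhere (e.g., fingerprint generation or data loading).
timer = benchmark.Timer(
    stmt="net(x)",
    globals={"net": net, "x": x},
    label="NN forward pass",
)
print(timer.timeit(100))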

michaelmacisaac commented 2 years ago

What do you mean by KIM Kliff? Currently KLIFF works as follows: 1) you can load KIM-based models and retrain their coefficients; 2) you can use simple PyTorch models, which are ported to KIM using Eigen. In the new version of KLIFF (not sure when it will be integrated; at present it is a KLIFF fork under heavy development), mode 2) above will become a more full-fledged PyTorch model interface that can take any general TorchScript model. This helps ensure more flexibility. Does that answer your question?

My apologies, I meant Kliff. Is there documentation on porting to KIM using Eigen? I'm not familiar with this.

Yes, and more. Although KLIFF will impose no restrictions on the models to be trained, if you want to run the trained models using the KIM API directly, then the model needs to accept inputs and produce outputs in a certain format (e.g., the first argument being the number of atoms, the second being coordinates, etc.). This ensures compatibility with the upcoming model driver. Otherwise, for use in LAMMPS etc., you would need to write your own model driver from scratch.

Thanks for this clarification.

Using Dscribe? Or have you written your own SOAP code in Keras/TF? Our experience is more mixed: when we implemented the Stillinger-Weber potential in PyTorch and TensorFlow, it was slower in TF. So it is more of an "it depends" situation.

I used Dscribe and developed an NN model using Keras. I feel like I am not using the Kliff package correctly and/or my descriptor is serving as a bottleneck. When I compare training times for shallow networks (1 hidden layer) to deep networks (3 hidden layers), the training times are roughly the same, making me think that it is a descriptor issue. Do you have any thoughts on this?

On a different note, you work with the group that is developing a class/function which will produce a single potential for multi-element systems, correct?

ipcamit commented 2 years ago

My apologies, I meant Kliff. Is there documentation on porting to KIM using Eigen? I'm not familiar with this.

See one such implementation here (scroll to bottom for source files): https://openkim.org/id/MD_292677547454_000

Regarding the Keras-Dscribe thing, I am not sure. Maybe the Dscribe people can explain it better, so you could raise an issue there?

On a different note, you work with the group that is developing a class/function which will produce a single potential for multi-element systems, correct?

Yes, I am the one developing it. And in principle it can work now. I need to add support for saving as a KIM model to KLIFF, and descriptors; otherwise it more or less works.

michaelmacisaac commented 2 years ago

Thanks for all the help!

michaelmacisaac commented 2 years ago

Hi @ipcamit, I am wondering if you know of any resources on how to use multiple KIM potentials in a LAMMPS sim. I want to run a sim with a silicon potential and a carbon potential, and I have not found any examples of using two models in one sim online. Additionally, what is the advantage of your ML model driver?

ipcamit commented 2 years ago

I am not sure mixing two potentials like this would be a good idea, irrespective of whether it is possible. Second, I am not sure it is possible, but I can ask the developers. As far as simulating SiC is concerned, you can use potentials for it directly from KIM. Head over to https://openkim.org/browse/models/by-species and put Si and C in the "Narrow by species" box; you will get all the models that support SiC. Once you have selected the model best for you (if you click on the model name, you will see all the lattice parameters etc. it predicts, to guide your selection), install that model on your system with kim-api-collections-management install user <model name>, and that's it. Now in your LAMMPS file you can run the potential with something like

# Initialize KIM Model
kim init  MEAM_LAMMPS_Wagner_2007_SiC__MO_430846853065_001 metal

# Load data and define atom type
read_data test_si.data
kim interactions Si C
mass 1 28.0855
mass 2 12.000

# Create random velocities and fix the thermostat
velocity all create 300.0 4928459 rot yes dist gaussian
fix 1 all nvt temp 300.0 300.0 $(100.0*dt)

timestep 0.001
thermo 10
run          10000

where MEAM_LAMMPS_Wagner_2007_SiC__MO_430846853065_001 is the potential you selected.

The advantage of the new ML driver is that you can run almost any TorchScript model with minimal effort. And with it, KIM will also start cataloging ML models.

michaelmacisaac commented 2 years ago

Ah, I see, so I cannot run a LAMMPS sim using multiple potentials. Currently KLIFF cannot produce a single potential for multi-element systems; it must train separate potentials for each element. Each of these potentials can be written to a KIM model. To clarify though, ultimately the separate models I train for Si and C cannot be used together for a LAMMPS sim? This may seem redundant, but I am just confused about whether the current implementation of training a model for multiple species is meant for actual MD sims of a multi-element system, or if it is still being developed and we need to just wait until it can produce a single potential.

Back to your driver: do the current KIM drivers not support torch models? If not, is there one I should use for an ML NN model?

Thanks for your continued help

ipcamit commented 2 years ago

Ah, I see, so I cannot run a LAMMPS sim using multiple potentials.

I am not the authority on this, but I don't think so; I would confirm with more experienced users before giving a definitive answer. But I can imagine it not being possible, simply because there would not be any cross-interaction parameters.

Currently KLIFF cannot produce a single potential for multi-element systems; it must train separate potentials for each element. Each of these potentials can be written to a KIM model. To clarify though, ultimately the separate models I train for Si and C cannot be used together for a LAMMPS sim?

That is because KLIFF translates simple linear models from torch to Eigen; hence it cannot use more complicated models currently. Once it supports torch models fully, this limitation will go away. But yes, currently you cannot do that.

This may seem redundant, but I am just confused about whether the current implementation of training a model for multiple species is meant for actual MD sims of a multi-element system, or if it is still being developed and we need to just wait until it can produce a single potential.

You can train a model currently; you just can't use it with KIM. Yet.

Back to your driver: do the current KIM drivers not support torch models? If not, is there one I should use for an ML NN model?

No. KIM drivers are currently pure C++ analytic functions. If your model is simple enough, you can use the default KLIFF ML interface with the appropriate model drivers, like https://openkim.org/id/DUNN__MD_292677547454_000 or https://openkim.org/id/MD_435082866799_000.

That said, the new model driver is nearly ready for a first beta release. It fully supports graph neural networks, but lacks proper descriptor support at the moment (most likely it will get SOAP by the end of next week). So if you are willing to put in some extra work, you can give it a spin!

michaelmacisaac commented 2 years ago

I am not the authority on this, but I don't think so; I would confirm with more experienced users before giving a definitive answer. But I can imagine it not being possible, simply because there would not be any cross-interaction parameters.

Understood. I was originally hesitant about this for similar reasons, but at some point I misunderstood it as being possible.

That is because KLIFF translates simple linear models from torch to Eigen; hence it cannot use more complicated models currently. Once it supports torch models fully, this limitation will go away. But yes, currently you cannot do that.

You can train a model currently; you just can't use it with KIM. Yet.

So we can currently train multiple-species models, where a single model can be trained for multiple elements, but we cannot use them in an MD sim with KIM yet?

Additionally, my current code for training a two-species model is this:

# Snippet from a larger script: descriptor, N1, N2, N3, drop, Model, fold,
# fingerprintroot, xtrain, xtest, train/val indices, etc. are defined earlier.
from pathlib import Path

import torch.nn as nn
from kliff.models import NeuralNetwork
from kliff.calculators import CalculatorTorchSeparateSpecies

# Silicon model
modelsi = NeuralNetwork(descriptor)
modelsi.add_layers(
    # first hidden layer
    nn.Linear(descriptor.get_size(), N1),
    nn.Tanh(),
    nn.Dropout(drop),
    # second hidden layer
    nn.Linear(N1, N2),
    nn.Tanh(),
    nn.Dropout(drop),
    # third hidden layer
    nn.Linear(N2, N3),
    nn.Tanh(),
    nn.Dropout(drop),
    # output layer
    nn.Linear(N3, 1),
)
modelsi.set_save_metadata(
    prefix=Path("./kliff_saved_models", f"{Model}", f"kliff_saved_modelsi_{fold}"),
    start=0,
    frequency=1,
)

# Carbon model
modelc = NeuralNetwork(descriptor)
modelc.add_layers(
    # first hidden layer
    nn.Linear(descriptor.get_size(), N1),
    nn.Tanh(),
    nn.Dropout(drop),
    # second hidden layer
    nn.Linear(N1, N2),
    nn.Tanh(),
    nn.Dropout(drop),
    # third hidden layer
    nn.Linear(N2, N3),
    nn.Tanh(),
    nn.Dropout(drop),
    # output layer
    nn.Linear(N3, 1),
)
modelc.set_save_metadata(
    prefix=Path("./kliff_saved_models", f"{Model}", f"kliff_saved_modelc_{fold}"),
    start=0,
    frequency=1,
)

# Calculators
calctrain = CalculatorTorchSeparateSpecies({"Si": modelsi, "C": modelc}, gpu=0)
calcval = CalculatorTorchSeparateSpecies({"Si": modelsi, "C": modelc}, gpu=0)
calctest = CalculatorTorchSeparateSpecies({"Si": modelsi, "C": modelc}, gpu=True)

calctrain.create(
    xtrain[train],
    reuse=True,
    fingerprints_filename=Path(f"{fingerprintroot}", f"fingerprints_train_{fold}.pkl"),
    fingerprints_mean_stdev_filename=Path(f"{fingerprintroot}", "fingerprints_mean_stdev.pkl"),
    use_forces=False,
    use_stress=False,
)
calcval.create(
    xtrain[val],
    reuse=True,
    fingerprints_filename=Path(f"{fingerprintroot}", f"fingerprints_val_{fold}.pkl"),
    fingerprints_mean_stdev_filename=Path(f"{fingerprintroot}", "fingerprints_mean_stdev.pkl"),
    use_forces=False,
    use_stress=False,
)
calctest.create(
    xtest,
    reuse=True,
    fingerprints_filename=Path(f"{fingerprintroot}", "fingerprints_test.pkl"),
    fingerprints_mean_stdev_filename=Path(f"{fingerprintroot}", "fingerprints_mean_stdev.pkl"),
    use_forces=False,
    use_stress=False,
)

# loss
loss = LossNeuralNetworkModel(calctrain, calcval)
result = loss.minimize(
    method="Adam",
    num_epochs=epochs,
    batch_size=batchsize,
    lr=learningrate,
    weight_decay=regcoeff,
)

Based on the SiC NN example in the examples folder of the KLIFF repo, I am using two neural networks, one for carbon and one for silicon. With the recent updates, can I train a multi-element model (e.g., one for SiC) using only one of the networks? (Assuming the answer to my earlier question, "So we can currently train multiple-species models, but we cannot use them in an MD sim with KIM yet?", is yes.)

And then just wait to run it in LAMMPS upon the release of the new ML driver?

ipcamit commented 2 years ago

So we can currently train multiple-species models, where a single model can be trained for multiple elements, but we cannot use them in an MD sim with KIM yet?

This should be possible, I believe. A single model should be possible as long as it does not have some complicated control flow (like if statements, etc.).

With the recent updates, can I train a multi-element model (e.g., one for SiC) using only one of the networks?

I can't think of a reason why it would not be possible, even with the current KLIFF.

And then just wait to run it in LAMMPS upon the release of the new ML driver?

Yes. In the new ML driver you need a single valid TorchScript model, and you can combine multiple models using the torch.nn.ModuleList module. Which descriptor are you using? If it is symmetry functions, you can try it even now.

michaelmacisaac commented 2 years ago

I can't think of a reason why it would not be possible, even with the current KLIFF.

Understood. I was under the impression we needed a network for each element. Is the current example (and my code) unnecessarily complex in using two networks as compared to one?

Yes. In the new ML driver you need a single valid TorchScript model, and you can combine multiple models using the torch.nn.ModuleList module. Which descriptor are you using? If it is symmetry functions, you can try it even now.

Hmm, when/why would we want to combine models? I thought having separate models couldn't capture cross interactions.

I'm using symmetry functions! Is there a specific repo I should use for this torch.nn.ModuleList module?

ipcamit commented 2 years ago

Understood. I was under the impression we needed a network for each element. Is the current example (and my code) unnecessarily complex in using two networks as compared to one?

I would need to look into it more to tell for sure, but in theory you can use a single network to predict for both elements. It is just that, in that case, you are compromising the accuracy for both Si and C, and might need longer training. By splitting your network in two, one for Si and one for C, you can learn two sets of parameters somewhat independently of each other, and hence get more accurate estimates, faster. So it is not about the complexity of the model, but a trade-off between one bigger model with longer training and two smaller models.

Hmm, when/why would we want to combine models? I thought having separate models couldn't capture cross interactions.

You are still using two independent models here; it is just that for KIM you bundle them into one function. E.g., instead of giving KIM two functions, energy_nn_Si and energy_nn_C, you give one function like:

def energy(element, x):
    if element == "Si":
        return energy_nn_Si(x)
    if element == "C":
        return energy_nn_C(x)

This way you have a uniform interface to the energy function for KIM: during a simulation, KIM only sees configurations and energies/forces, and passes them to a single energy function.

I'm using symmetry functions! Is there a specific repo I should use for this torch.nn.ModuleList module?

No, it is part of torch. Maybe I will set up a minimal working example this weekend, and you can have a look.
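In the meantime, here is a rough sketch of the bundling idea described above, assuming two already-trained per-species networks. All names, layer sizes, and the integer species encoding are illustrative placeholders, not the driver's actual interface:

import torch
import torch.nn as nn

class CombinedEnergyModel(nn.Module):
    """Sketch: bundle per-species networks behind one TorchScript-friendly interface."""

    def __init__(self, si_net: nn.Module, c_net: nn.Module):
        super().__init__()
        # ModuleList registers both sub-networks, so one save() captures all parameters.
        self.nets = nn.ModuleList([si_net, c_net])

    def forward(self, species: torch.Tensor, descriptors: torch.Tensor) -> torch.Tensor:
        # species: one integer code per atom (0 = Si, 1 = C);
        # descriptors: one fingerprint row per atom.
        si_mask = species == 0
        c_mask = species == 1
        energy = torch.zeros(1)
        if bool(si_mask.any()):
            energy = energy + self.nets[0](descriptors[si_mask]).sum()
        if bool(c_mask.any()):
            energy = energy + self.nets[1](descriptors[c_mask]).sum()
        return energy

# Placeholder per-species networks (in practice, your trained Si and C models).
si_net = nn.Sequential(nn.Linear(51, 30), nn.Tanh(), nn.Linear(30, 1))
c_net = nn.Sequential(nn.Linear(51, 30), nn.Tanh(), nn.Linear(30, 1))
model = torch.jit.script(CombinedEnergyModel(si_net, c_net))
model.save("sic_combined.pt")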

michaelmacisaac commented 2 years ago

Understood, thank you for the detailed response. So my current understanding is that my approach (using two networks and developing two models) may be better than one, but I should combine these models for use in KIM, and to combine them I need to use the torch.nn.ModuleList() module. Furthermore, I cannot use this combined model until the release of the ML model driver. Within the shown combined function, should the individual functions (i.e., the ones for Si and C) already be written to individual KIM models and then the combined model be written to a KIM model as well, or should the individual functions be left as .pkl and only the combined function written to a KIM model? I feel like it is the former, but I am unsure.

A small working example would be greatly appreciated if you have the time! I currently have some .pkl models and KIM models for Si and C, and would love to combine them and evaluate their performance in LAMMPS!