deepmodeling / deepmd-kit

A deep learning package for many-body potential energy representation and molecular dynamics
https://docs.deepmodeling.com/projects/deepmd/
GNU Lesser General Public License v3.0

[Feature Request] Loading DPs as TF models and performing external backprop #2594

Open · siddarthachar opened 1 year ago

siddarthachar commented 1 year ago

Summary

I wanted to ask about a technical aspect of DeePMD. Is there a way to load DPs as TF models and perform backprop externally, for example in a Jupyter notebook? I am able to use DPs in Jupyter notebooks for other things, but not for this. I have a project in mind that makes use of loss-function gradients w.r.t. atomic coordinates, and for that I would need to backpropagate through the DP. Using DPs for these tasks would be best for us, since we have had great success with them so far.

Detailed Description

This is some toy code I wrote (not the actual project):

import numpy as np
import tensorflow as tf
from deepmd import DeepPotential

dp = DeepPotential('nh3.pb')
coord = np.array([[1,0,0], [0,0,1.5], [1,0,3]]).reshape([1, -1])
cell = np.diag(10 * np.ones(3)).reshape([1, -1])
atype = [1,0,1]
e, f, v = dp.eval(coord, cell, atype)

tf.config.run_functions_eagerly(True)
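# (note: run_functions_eagerly only affects tf.function-decorated code;
#  it does not re-enable eager execution once deepmd has disabled it)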

# Initialize input and output tensors
inputs = tf.constant([[0.2, 0.3, 0.4]])
coord = np.array([[1,0,0], [0,0,1.5], [1,0,3]]).reshape([1, -1])
cell = np.diag(10 * np.ones(3)).reshape([1, -1])
atype = [1,0,1]
targets = tf.constant([[19.0]])

# Initialize weights and biases as tensors
weights = tf.Variable(tf.random.normal([3, 1]))
biases = tf.Variable(tf.zeros([1]))

# Define the forward pass
def forward_pass(inputs):
    return tf.matmul(inputs, weights) + biases

# Define the loss function
def loss_function(predictions, targets):
    e, f, v = dp.eval(coord, cell, atype)
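    # dp.eval runs the model graph internally and returns NumPy arrays,
    # so e is a constant to the tape and no gradient flows into the DP model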
#     print(e)
    return tf.reduce_mean(tf.square(e + predictions - targets))

# Define the optimizer
optimizer = tf.optimizers.SGD(learning_rate=0.1)

# Perform backpropagation
# inputs = [coord, cell, atype]
for epoch in range(10):
    with tf.GradientTape() as tape:
        # Forward pass
        predictions = forward_pass(inputs)
#         print(predictions, targets)

        # Compute the loss
        loss = loss_function(predictions, targets)
    # Compute the gradients
    gradients = tape.gradient(loss, [weights, biases])
    # Update the weights and biases
    optimizer.apply_gradients(zip(gradients, [weights, biases]))

    print(f"Epoch {epoch+1}: Loss = {loss}")

# Make predictions on new data
new_inputs = tf.constant([[0.1, 0.2, 0.3]])

new_predictions = forward_pass(new_inputs)
print("Predictions:")
print(new_predictions)

This is something that I wrote, but the backprop does not happen. I am assuming that the dp object does not allow gradient calculations via TF. Do you know what should be done? I am trying to write an application that uses DPs to perform adversarial-attack-based active learning. It would be great if there were a simple way to backprop through the model.

Any help with this will be great!

Thanks!

Further Information, Files, and Links

No response

siddarthachar commented 1 year ago

Another follow-up issue: the task above requires me to load TensorFlow (within deepmd's conda environment) together with tools from deepmd directly. However, there seems to be a conflict between TensorFlow behaviors when both are imported. For a sample script, consider this example:

### Import packages
# from deepmd.calculator import DP
import tensorflow as tf

### Initialize input and output tensors
inputs = tf.constant([[0.2, 0.3, 0.4]])
targets = tf.constant([[0.5]])

### Initialize weights and biases as tensors
weights = tf.Variable(tf.random.normal([3, 1]))
biases = tf.Variable(tf.zeros([1]))

### Define the forward pass
def forward_pass(inputs):
    return tf.matmul(inputs, weights) + biases

### Define the loss function
def loss_function(predictions, targets):
    return tf.reduce_mean(tf.square(predictions - targets))

### Define the optimizer
optimizer = tf.optimizers.SGD(learning_rate=0.1)

### Perform backpropagation
for epoch in range(10):
    with tf.GradientTape() as tape:
        # Forward pass
        predictions = forward_pass(inputs)
        # Compute the loss
        loss = loss_function(predictions, targets)
    # Compute the gradients
    gradients = tape.gradient(loss, [weights, biases])
    # Update the weights and biases
    optimizer.apply_gradients(zip(gradients, [weights, biases]))

    print(f"Epoch {epoch+1}: Loss = {loss}")

### Make predictions on new data
new_inputs = tf.constant([[0.1, 0.2, 0.3]])
new_predictions = forward_pass(new_inputs)
print("Predictions:")
print(new_predictions)

If I run the code as is, with from deepmd.calculator import DP commented out, then I get the output one expects:

Epoch 1: Loss = 0.05436952784657478
Epoch 2: Loss = 0.029933901503682137
Epoch 3: Loss = 0.01648052968084812
Epoch 4: Loss = 0.009073586203157902
Epoch 5: Loss = 0.0049955896101891994
Epoch 6: Loss = 0.002750389976426959
Epoch 7: Loss = 0.0015142642660066485
Epoch 8: Loss = 0.0008336998289451003
Epoch 9: Loss = 0.00045900672557763755
Epoch 10: Loss = 0.0002527119650039822
Predictions:
tf.Tensor([[0.39016175]], shape=(1, 1), dtype=float32)

However, once I uncomment the from deepmd.calculator import DP line, this is what I get:

Epoch 1: Loss = Tensor("Mean:0", shape=(), dtype=float32)
Epoch 2: Loss = Tensor("Mean_1:0", shape=(), dtype=float32)
Epoch 3: Loss = Tensor("Mean_2:0", shape=(), dtype=float32)
Epoch 4: Loss = Tensor("Mean_3:0", shape=(), dtype=float32)
Epoch 5: Loss = Tensor("Mean_4:0", shape=(), dtype=float32)
Epoch 6: Loss = Tensor("Mean_5:0", shape=(), dtype=float32)
Epoch 7: Loss = Tensor("Mean_6:0", shape=(), dtype=float32)
Epoch 8: Loss = Tensor("Mean_7:0", shape=(), dtype=float32)
Epoch 9: Loss = Tensor("Mean_8:0", shape=(), dtype=float32)
Epoch 10: Loss = Tensor("Mean_9:0", shape=(), dtype=float32)
Predictions:
Tensor("add_10:0", shape=(1, 1), dtype=float32)

I have tried several different things to make my code print the values of my tensors, and nothing seemed to work, e.g. tf.config.run_functions_eagerly(True). I feel this is important because I need to be able to load DPs in order to backprop through the trained models.
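For reference, a quick way to confirm which mode TF ends up in after the import (a diagnostic only, not a fix):

import tensorflow as tf
from deepmd.calculator import DP

# importing deepmd switches TF into graph (v1) mode,
# so this should print False instead of the TF v2 default True
print(tf.executing_eagerly())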

Do you know if there is a fix to this? Is there a workaround?

Thanks again!

njzjz commented 1 year ago

DeePMD-kit is based on the TensorFlow v1 API and relies on TF v1 behaviors; TF v2 behaviors are disabled. You are trying to mix TF v1 and TF v2 code, which is not by design. I don't know whether it will work.

https://github.com/deepmodeling/deepmd-kit/blob/a3c8980b4d0ad4560605d2c36f77681864b52d2b/deepmd/env.py#L72
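For context, the effect on user code is roughly the following (an illustration of the mechanism, not the exact contents of env.py):

import tensorflow as tf

# disable TF v2 behaviors, including eager execution
tf.compat.v1.disable_v2_behavior()

# from here on, tensors are symbolic graph nodes: printing one shows
# Tensor("...", shape=..., dtype=...) rather than its value, which is
# exactly the output seen above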

In DeepPotential, eval only returns NumPy arrays. The energy tensor is dp.t_energy.
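A minimal illustration of where that tensor lives, assuming the TF backend, where the loaded graph's input and output tensors are exposed as attributes of the DeepPotential object (attribute names may differ between versions):

from deepmd import DeepPotential

dp = DeepPotential('nh3.pb')

# dp.eval(...) returns plain NumPy arrays, but the frozen graph's
# symbolic tensors are available as attributes, e.g.:
print(dp.t_energy)  # energy output tensor, e.g. 'load/o_energy:0'
print(dp.t_force)   # force output tensor
print(dp.t_coord)   # coordinate input tensor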

siddarthachar commented 1 year ago

Thanks Jinzhe! I managed to fix the printing issue. All I had to do was import the TF v1 API and use it throughout.

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.reset_default_graph()

# Initialize input and output tensors
inputs = tf.constant([[0.2, 0.3, 0.4]])
targets = tf.constant([[0.5]])

# Initialize weights and biases as tensors
weights = tf.Variable(tf.random.normal([3, 1]))
biases = tf.Variable(tf.zeros([1]))

# Define the forward pass
def forward_pass(inputs):
    return tf.matmul(inputs, weights) + biases

# Define the loss function
def loss_function(predictions, targets):
    return tf.reduce_mean(tf.square(predictions - targets))

# Build the graph once: forward pass, loss, gradients, and update op.
# tf.gradients is the graph-mode (v1) counterpart of GradientTape.
predictions = forward_pass(inputs)
loss = loss_function(predictions, targets)
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.1)
gradients = tf.gradients(loss, [weights, biases])
train_op = optimizer.apply_gradients(zip(gradients, [weights, biases]))

# Predictions on new data reuse the same weights
new_inputs = tf.constant([[0.1, 0.2, 0.3]])
new_predictions = forward_pass(new_inputs)

# Run everything in a single session so the variable updates persist
# across epochs (re-initializing per epoch would reset the weights)
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for epoch in range(10):
        _, loss_val = sess.run([train_op, loss])
        print(f"Epoch {epoch+1}: Loss = {loss_val}")

    print("Predictions:")
    print(sess.run(new_predictions))

And this seems to be working!

Regarding dp.t_energy: I can't figure out how to use it to get energy tensors for further calculations. Say I want to define a loss function loss = energy**2 + sum(forces) and use it for backprop, with energy and forces computed by the DeepPotential (dp) object. I could not find a way to get energy and forces in tensor form without calling dp.eval. How can I use dp.t_energy and dp.t_force to achieve this? dp.t_energy is of type <tf.Tensor 'load/o_energy:0' shape=(?,) dtype=float64>, which I think is just the symbolic representation of the energy. Ideally, something like loss = dp.t_energy**2 + sum(dp.t_force) is what I want. I also want to compute d(loss)/dx, where x is the atomic coordinates, which means x needs to be a tensor in the computational graph as well. Do you know what I can do here?

Thanks for the help!

njzjz commented 1 year ago

dp.t_energy is the energy tensor. I suggest you read the source code for the detailed implementation.

The gradient with respect to the coordinates might not be implemented. If you hit that error, you can report it here.
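A minimal sketch of the graph-level loss discussed above, assuming the TF backend, where the DeepPotential object exposes its session as dp.sess and its input/output tensors as attributes (names and the exact feed layout are version-dependent; mirror what DeepPot.eval does internally):

import tensorflow as tf
from deepmd import DeepPotential

dp = DeepPotential('nh3.pb')

# custom scalar loss built from the graph's symbolic outputs
loss = tf.reduce_sum(dp.t_energy ** 2) + tf.reduce_sum(dp.t_force)

# d(loss)/dx as symbolic TF v1 gradients w.r.t. the coordinate input;
# tf.gradients returns [None] if some op on the path has no registered
# gradient, which is the situation described in the comment above
dloss_dx = tf.gradients(loss, dp.t_coord)

# evaluating dloss_dx requires feeding the same placeholders that
# dp.eval feeds internally (coordinates, atom types, box, natoms, mesh);
# see DeepPot.eval in the deepmd source for the exact keys and shapes:
# grad_val = dp.sess.run(dloss_dx, feed_dict={...})

If tf.gradients comes back as [None] for t_coord, that would be the missing-gradient case mentioned above and worth reporting.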