sarwart / mapping_SC_FC


Low correlation between pFC and eFC #1

Closed · LuRoe7 closed this 1 year ago

LuRoe7 commented 1 year ago

I plan to use the deep learning model you published recently to predict the subject-specific functional connectome from the corresponding structural connectome. I got the code running on my computer. However, based on the example data you provided, I only achieve very small within-subject correlations between the predicted FC matrices and the real FC matrices (r < 0.05). Now I am wondering whether the example data is just random or whether there is an error in my code. The only thing I changed is the initializer, since the "contrib" module no longer exists in TensorFlow 2.x: it is now tf.keras.initializers.GlorotNormal() instead of tf.contrib.layers.xavier_initializer(), but that should be equivalent.
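For reference, the one-line swap (one caveat: `tf.contrib.layers.xavier_initializer()` defaulted to the *uniform* Glorot variant, so `tf.keras.initializers.GlorotUniform()` would be the strict drop-in, though the normal variant should behave very similarly):

```python
# TF 1.x (the contrib module was removed in TF 2.x):
#   initializer = tf.contrib.layers.xavier_initializer()  # uniform Glorot by default
# TF 2.x replacement used below (normal-distribution Glorot variant):
initializer = tf.keras.initializers.GlorotNormal()
```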

```python
import tensorflow as tf
import numpy as np
from scipy.stats import pearsonr
import scipy.io

mat = scipy.io.loadmat("/PATH/TO/example_data.mat")

# number of connections at input and output
conn_dim = 2278  # upper triangle of the connectivity matrix
layer_dim = 1024

# Xavier initializer
initializer = tf.keras.initializers.GlorotNormal()
EPS = 1e-12

# network parameters, freshly Glorot-initialized
weights = {
    'hidden1': tf.Variable(initializer([conn_dim, layer_dim])),
    'hidden2': tf.Variable(initializer([layer_dim, layer_dim])),
    'hidden3': tf.Variable(initializer([layer_dim, layer_dim])),
    'hidden4': tf.Variable(initializer([layer_dim, layer_dim])),
    'hidden5': tf.Variable(initializer([layer_dim, layer_dim])),
    'hidden6': tf.Variable(initializer([layer_dim, layer_dim])),
    'hidden7': tf.Variable(initializer([layer_dim, layer_dim])),
    'pred_out': tf.Variable(initializer([layer_dim, conn_dim])),
}
biases = {
    'hidden1': tf.Variable(initializer([layer_dim])),
    'hidden2': tf.Variable(initializer([layer_dim])),
    'hidden3': tf.Variable(initializer([layer_dim])),
    'hidden4': tf.Variable(initializer([layer_dim])),
    'hidden5': tf.Variable(initializer([layer_dim])),
    'hidden6': tf.Variable(initializer([layer_dim])),
    'hidden7': tf.Variable(initializer([layer_dim])),
    'pred_out': tf.Variable(initializer([conn_dim])),
}

def predictor(x, a):
    # a is the dropout rate; note that tf.nn.dropout's second argument is
    # the *drop* rate in TF 2.x (it was keep_prob in TF 1.x)
    hidden_layer1 = tf.matmul(x, weights['hidden1'])
    hidden_layer1 = tf.add(hidden_layer1, biases['hidden1'])
    hidden_layer1 = tf.nn.dropout(hidden_layer1, a)
    hidden_layer1 = tf.nn.leaky_relu(hidden_layer1, 0.2)

    hidden_layer2 = tf.matmul(hidden_layer1, weights['hidden2'])
    hidden_layer2 = tf.add(hidden_layer2, biases['hidden2'])
    hidden_layer2 = tf.nn.dropout(hidden_layer2, a)
    hidden_layer2 = tf.nn.tanh(hidden_layer2)

    hidden_layer3 = tf.matmul(hidden_layer2, weights['hidden3'])
    hidden_layer3 = tf.add(hidden_layer3, biases['hidden3'])
    hidden_layer3 = tf.nn.dropout(hidden_layer3, a)
    hidden_layer3 = tf.nn.leaky_relu(hidden_layer3, 0.2)

    hidden_layer4 = tf.matmul(hidden_layer3, weights['hidden4'])
    hidden_layer4 = tf.add(hidden_layer4, biases['hidden4'])
    hidden_layer4 = tf.nn.dropout(hidden_layer4, a)
    hidden_layer4 = tf.nn.tanh(hidden_layer4)

    hidden_layer5 = tf.matmul(hidden_layer4, weights['hidden5'])
    hidden_layer5 = tf.add(hidden_layer5, biases['hidden5'])
    hidden_layer5 = tf.nn.dropout(hidden_layer5, a)
    hidden_layer5 = tf.nn.leaky_relu(hidden_layer5, 0.2)

    hidden_layer6 = tf.matmul(hidden_layer5, weights['hidden6'])
    hidden_layer6 = tf.add(hidden_layer6, biases['hidden6'])
    hidden_layer6 = tf.nn.dropout(hidden_layer6, a)
    hidden_layer6 = tf.nn.tanh(hidden_layer6)

    hidden_layer7 = tf.matmul(hidden_layer6, weights['hidden7'])
    hidden_layer7 = tf.add(hidden_layer7, biases['hidden7'])
    hidden_layer7 = tf.nn.dropout(hidden_layer7, a)
    hidden_layer7 = tf.nn.leaky_relu(hidden_layer7, 0.2)

    out_layer = tf.matmul(hidden_layer7, weights['pred_out'])
    out_layer = tf.add(out_layer, biases['pred_out'])
    out_layer = tf.nn.tanh(out_layer)

    return out_layer

# load subject-specific SC matrices
mat_sc = mat["sc"].astype('float32')

# predict subject-specific FC
mat_fc_pred = predictor(x=mat_sc, a=0.5)

# convert predicted FC data to an array
mat_fc_pred = np.array(mat_fc_pred).astype('float64')

# extract empirical FC data
mat_fc = mat["fc"].astype('float64')

# compute the within-subject Pearson correlation between predicted and empirical FC
for fc_p, fc_e in zip(mat_fc_pred, mat_fc):
    r, p = pearsonr(fc_p, fc_e)
    print(r)
```
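One side note on the snippet above: with `a=0.5`, dropout is still active during prediction, which injects noise into the predicted FC. For a deterministic forward pass it can simply be switched off:

```python
# deterministic prediction: a drop rate of 0 disables dropout at inference
mat_fc_pred = predictor(x=mat_sc, a=0.0)
```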

sarwart commented 1 year ago

Hi LuRoe7,

The provided data is dummy data (not genuine subjects), which is why you won't be able to replicate the results reported in the paper. The HCP dataset was used in the paper, so you can access the subjects directly from the HCP website (https://www.humanconnectome.org/) to generate the connectivity matrices.

Tabinda

LuRoe7 commented 1 year ago

Hi Tabinda,

Okay, that explains everything! :)

I have another question: how exactly did you implement the regularization term (comparable inter-subject correlation between pFC and eFC) in your code? I'm currently trying to train a neural network to predict the SC matrix from different types of FC matrices (different FC metrics), and I'd like to build on your very innovative work.

Best,
Lukas

sarwart commented 1 year ago

Hi Lukas,

The loss function can be found in train.py:

```python
reg = compute_corr_loss(fc_gen, batch_size)  # regularization term
loss = tf.losses.mean_squared_error(fc_output, fc_generated) + reg_constant * tf.abs(reg - reg_param)
```
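In sketch form, the regularization term measures the inter-subject correlation among the predicted FC matrices in a batch, and the loss penalizes its deviation from the target value `reg_param`. The snippet below is a simplified illustration of such a term, not the exact `compute_corr_loss` from train.py (see the repository for the definitive version):

```python
import tensorflow as tf

def compute_corr_loss_sketch(fc_gen, batch_size):
    """Mean pairwise Pearson correlation across the predicted FC vectors
    of a batch (shape [batch_size, conn_dim]); an illustrative stand-in
    for compute_corr_loss in train.py."""
    # center each subject's FC vector and scale it to unit norm
    centered = fc_gen - tf.reduce_mean(fc_gen, axis=1, keepdims=True)
    normed = centered / (tf.norm(centered, axis=1, keepdims=True) + 1e-12)
    # pairwise Pearson correlations between subjects
    corr = tf.matmul(normed, normed, transpose_b=True)  # [batch, batch]
    # average over the off-diagonal entries (all distinct subject pairs)
    off_diag_sum = tf.reduce_sum(corr) - tf.reduce_sum(tf.linalg.diag_part(corr))
    return off_diag_sum / (batch_size * (batch_size - 1))
```

Penalizing `tf.abs(reg - reg_param)` then keeps the inter-subject similarity of the predicted FCs comparable to that of the empirical FCs, rather than letting the network collapse onto a single group-average prediction.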

Tabinda