trypag / NonAdjLoss


how to add NonAdjLoss #5

Open tanjia123456 opened 3 years ago

tanjia123456 commented 3 years ago

Hello, computing the NonAdjLoss has to happen after the prediction. But I am using TensorFlow, which builds the graph with placeholders:

```python
import tensorflow as tf

# batch_size, nb_nodes, ft_size, nb_classes, hid_units, n_heads, residual,
# nonlinearity, and model are defined elsewhere in the training script.
with tf.Graph().as_default():
    with tf.name_scope('input'):
        ftr_in = tf.placeholder(dtype=tf.float32, shape=(batch_size, nb_nodes, ft_size))
        bias_in = tf.sparse_placeholder(dtype=tf.float32)
        lbl_in = tf.placeholder(dtype=tf.int32, shape=(batch_size, nb_nodes, nb_classes))
        msk_in = tf.placeholder(dtype=tf.int32, shape=(batch_size, nb_nodes))
        attn_drop = tf.placeholder(dtype=tf.float32, shape=())
        ffd_drop = tf.placeholder(dtype=tf.float32, shape=())
        is_train = tf.placeholder(dtype=tf.bool, shape=())

    logits = model.inference(ftr_in, nb_classes, nb_nodes, is_train,
                             attn_drop, ffd_drop,
                             bias_mat=bias_in,
                             hid_units=hid_units, n_heads=n_heads,
                             residual=residual, activation=nonlinearity)
    log_resh = tf.reshape(logits, [-1, nb_classes])
    lab_resh = tf.reshape(lbl_in, [-1, nb_classes])
    msk_resh = tf.reshape(msk_in, [-1])
    loss = model.masked_softmax_cross_entropy(log_resh, lab_resh, msk_resh)
```

To compute the NonAdjLoss I need the values of log_resh, but log_resh is only a symbolic tensor built on the placeholders, so I cannot instantiate it directly. Do you know how to solve this? I also tried adding a NonAdjLoss term, but the loss value keeps increasing with each epoch.
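To make the question concrete, the sketch below is roughly what I have in mind: the NonAdjLoss term is written as extra graph ops on the symbolic log_resh instead of trying to instantiate it, and it only gets a value when sess.run is called with a feed_dict. The dense adjacency placeholder adj_dense, the forbidden-pair placeholder forbidden_pairs, and the weight lambda_nonadj are my own assumptions (they come neither from this repo nor from the original script), and I assume batch_size = 1 so that log_resh has shape [nb_nodes, nb_classes]. It is only a graph analogue of the image-segmentation loss, not the exact formulation from this repository.

```python
# (continues inside the same `with tf.Graph().as_default():` block as above)

# Hypothetical extra inputs, not part of the original script or this repo:
#   adj_dense        - dense [nb_nodes, nb_nodes] adjacency of the input graph
#   forbidden_pairs  - [nb_classes, nb_classes] 0/1 matrix, 1 where two classes
#                      never appear adjacent in the training labels
adj_dense = tf.placeholder(dtype=tf.float32, shape=(nb_nodes, nb_nodes))
forbidden_pairs = tf.placeholder(dtype=tf.float32, shape=(nb_classes, nb_classes))

# Soft class-to-class adjacency of the prediction:
# class_adj[i, j] = sum over edges (u, v) of p_u[i] * p_v[j]
probs = tf.nn.softmax(log_resh)                                    # [nb_nodes, nb_classes]
class_adj = tf.matmul(tf.matmul(probs, adj_dense, transpose_a=True), probs)
class_adj = class_adj / (tf.reduce_sum(adj_dense) + 1e-8)          # normalize by edge count

# Penalize only the class pairs that should never be adjacent
nonadj_loss = tf.reduce_sum(class_adj * forbidden_pairs)

lambda_nonadj = 0.1   # assumed weight; probably needs to be small or ramped up
total_loss = loss + lambda_nonadj * nonadj_loss
```

At training time, adj_dense and forbidden_pairs would be fed through feed_dict together with the other placeholders, and the optimizer would minimize total_loss instead of loss. My guess is that if the combined loss keeps increasing, lambda_nonadj is too large or should be increased gradually from zero, but I am not sure this is the intended way to use NonAdjLoss here.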