sidhomj / DeepTCR

Deep Learning Methods for Parsing T-Cell Receptor Sequencing (TCRSeq) Data
https://sidhomj.github.io/DeepTCR/
MIT License

Tensor shape mismatch while running "2 - Supervised Repertoire Classification" Tutorial #29

Closed. marrojwala closed this issue 4 years ago.

marrojwala commented 4 years ago

I am running the tutorial as is. When training the model on the data, I get a tensor shape mismatch.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-685fbc9e79fc> in <module>()
----> 1 DTCR_WF.Train()

/home/ubuntu/.local/lib/python3.6/site-packages/DeepTCR/DeepTCR.py in Train(self, kernel, num_concepts, trainable_embedding, embedding_dim_aa, embedding_dim_genes, embedding_dim_hla, num_fc_layers, units_fc, weight_by_class, class_weights, use_only_seq, use_only_gene, use_only_hla, size_of_net, graph_seed, qualitative_agg, quantitative_agg, num_agg_layers, units_agg, drop_out_rate, multisample_dropout, multisample_dropout_rate, multisample_dropout_num_masks, batch_size, batch_size_update, epochs_min, stop_criterion, stop_criterion_window, accuracy_min, train_loss_min, hinge_loss_t, convergence, learning_rate, suppress_output, loss_criteria, batch_seed)
   5026               accuracy_min,train_loss_min,hinge_loss_t,convergence,learning_rate, suppress_output,
   5027                     loss_criteria)
-> 5028         self._train(write=True,batch_seed=batch_seed,iteration=0)
   5029 
   5030     def Monte_Carlo_CrossVal(self,folds=5,test_size=0.25,LOO=None,combine_train_valid=False,random_perm=False,seeds=None,

/home/ubuntu/.local/lib/python3.6/site-packages/DeepTCR/DeepTCR.py in _train(self, write, batch_seed, iteration)
   4747                 train_loss, train_accuracy, train_predicted,train_auc = \
   4748                     Run_Graph_WF(self.train,sess,self,GO,batch_size,batch_size_update,random=True,train=True,
-> 4749                                  drop_out_rate=drop_out_rate,multisample_dropout_rate=multisample_dropout_rate)
   4750 
   4751                 train_accuracy_total.append(train_accuracy)

/home/ubuntu/.local/lib/python3.6/site-packages/DeepTCR/functions/utils_s.py in Run_Graph_WF(set, sess, self, GO, batch_size, batch_size_update, random, train, drop_out_rate, multisample_dropout_rate)
    719         elif train:
    720             loss_i, accuracy_i, _, predicted_i = sess.run([GO.loss, GO.accuracy, GO.opt, GO.predicted],
--> 721                                                           feed_dict=feed_dict)
    722         else:
    723             loss_i, accuracy_i, predicted_i = sess.run([GO.loss, GO.accuracy, GO.predicted],

/home/ubuntu/anaconda3/envs/deeptcr/lib/python3.6/site-packages/tensorflow_gpu-1.15.2-py3.6-linux-x86_64.egg/tensorflow_core/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    954     try:
    955       result = self._run(None, fetches, feed_dict, options_ptr,
--> 956                          run_metadata_ptr)
    957       if run_metadata:
    958         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/ubuntu/anaconda3/envs/deeptcr/lib/python3.6/site-packages/tensorflow_gpu-1.15.2-py3.6-linux-x86_64.egg/tensorflow_core/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1154                 'Cannot feed value of shape %r for Tensor %r, '
   1155                 'which has shape %r' %
-> 1156                 (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
   1157           if not self.graph.is_feedable(subfeed_t):
   1158             raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (16, 1) for Tensor 'Placeholder_2:0', which has shape '(?, 4)'

I am running it on CentOS with an NVIDIA GPU. All the other tutorials seem to work fine.
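
For reference, a minimal sketch of the kind of feed mismatch the error describes, assuming the TF 1.x API that matches the tensorflow_gpu-1.15.2 path in the traceback. The placeholder shape (?, 4) and the (16, 1) feed mirror the error message, but the snippet is only illustrative and is not DeepTCR's actual graph code:

    import numpy as np
    import tensorflow as tf  # TF 1.x, matching tensorflow_gpu-1.15.2 from the traceback

    # The graph expects one-hot labels over 4 classes (shape (?, 4)),
    # while the feed supplies a single column of class indices (shape (16, 1)).
    y = tf.placeholder(tf.float32, shape=[None, 4], name='labels')
    batch_labels = np.random.randint(0, 4, size=(16, 1)).astype(np.float32)

    with tf.Session() as sess:
        # Feeding the raw (16, 1) column reproduces the ValueError:
        # sess.run(tf.reduce_sum(y), feed_dict={y: batch_labels})

        # One-hot encoding the indices to width 4 makes the shapes agree.
        one_hot = np.eye(4, dtype=np.float32)[batch_labels.astype(int).ravel()]  # (16, 4)
        print(sess.run(tf.reduce_sum(y), feed_dict={y: one_hot}))

In the tutorial itself the feed_dict is assembled inside DeepTCR's _train / Run_Graph_WF path, so the sketch above only illustrates the shape contract the placeholder expects, not where the encoding happens in the package.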

sidhomj commented 4 years ago

Working on fixing this now. I'm pretty sure I know the issue.

sidhomj commented 4 years ago

This should be fixed now, and the tutorial should run fine. Thanks for the heads up!