mafeiyang / ACTINN


TypeError: unsupported operand type(s) for -: 'NoneType' and 'int' #3

Open · danshu opened this issue 5 years ago

danshu commented 5 years ago

Hi,

I want to use ACTINN to annotate my mouse 10x data, but the run failed with the following error:

Cell Types in training set: {'Granulocyte': 0, 'Cardiac Muscle': 1, 'Hepatocyte': 2, 'Stromal cell': 3, 'Epidermis': 4, 'B cell': 5, 'Epithelial cell': 6, 'Endothelial cell': 7, 'Erythrocyte': 8, 'T cell': 9, 'Monocyte': 10, 'NK cell': 11}

Trainng cells: 56112

Cost after epoch 5: 0.170630
Cost after epoch 10: 0.093435
Cost after epoch 15: 0.077716
Cost after epoch 20: 0.072095
Cost after epoch 25: 0.068932
Cost after epoch 30: 0.066996
Cost after epoch 35: 0.065552
Cost after epoch 40: 0.064285
Cost after epoch 45: 0.063467
Cost after epoch 50: 0.062842
Parameters have been trained!
Train Accuracy: 0.9999822
Traceback (most recent call last):
  File "~/ACTINN/actinn_predict.py", line 328, in <module>
    test_predict = pd.DataFrame(predict_probability(test_set, parameters))
  File "~/ACTINN/actinn_predict.py", line 240, in predict_probability
    p = tf.nn.softmax(z4, axis=0)
  File "~/anaconda3/envs/py3.6/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "~/anaconda3/envs/py3.6/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 2903, in softmax
    return _softmax(logits, gen_nn_ops.softmax, axis, name)
  File "~/anaconda3/envs/py3.6/lib/python3.6/site-packages/tensorflow/python/ops/nn_ops.py", line 2833, in _softmax
    is_last_dim = (dim == -1) or (dim == shape.ndims - 1)
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
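[Editor's note] For context, a minimal sketch of how this failure mode can arise, assuming TensorFlow 1.x and that the logits tensor reaching tf.nn.softmax has an unknown static rank (for example, because an upstream placeholder was created without a shape). The names below are illustrative, not taken from actinn_predict.py:

# Minimal sketch; assumes TensorFlow 1.x and logits with unknown static rank.
import tensorflow as tf

# A placeholder created with shape=None has an unknown rank, so shape.ndims is None.
logits = tf.placeholder(tf.float32, shape=None)

# Inside tf.nn.softmax, the axis check evaluates (dim == -1) or (dim == shape.ndims - 1);
# with axis=0 and ndims == None, the subtraction raises:
# TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
p = tf.nn.softmax(logits, axis=0)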

Best, Danshu

eoindosullivan commented 5 years ago

Hi Danshu/Mafeiyang, I'm in the exact same position with the same errors. Did you work out how to resolve this?

EDIT: The softmax call on line 240 seemed to be causing the error. Based on https://www.tensorflow.org/api_docs/python/tf/nn/softmax, this function performs the equivalent of softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis).

So I replaced line 240 with p = tf.exp(z4) / tf.reduce_sum(tf.exp(z4), 0). With that change the script completes, and I am just checking the results now. Hope this helps.
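[Editor's note] For readers applying this workaround, here is a sketch of the manual softmax, assuming TensorFlow 1.x and that z4 holds logits with classes along axis 0 (as in the call above). The helper name manual_softmax is illustrative, and the max-subtraction is an added numerical-stability step that is not part of the original one-liner; it does not change the result:

import tensorflow as tf

def manual_softmax(z4):
    # Column-wise softmax over the class dimension (axis 0); this avoids the
    # rank check inside tf.nn.softmax that fails when z4's static rank is unknown.
    # Subtracting the per-column max before exponentiating prevents overflow
    # and leaves the softmax output unchanged.
    z4_shifted = z4 - tf.reduce_max(z4, axis=0, keepdims=True)
    return tf.exp(z4_shifted) / tf.reduce_sum(tf.exp(z4_shifted), axis=0)

# Equivalent to the in-place fix for line 240 of actinn_predict.py:
# p = tf.exp(z4) / tf.reduce_sum(tf.exp(z4), 0)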

danshu commented 5 years ago

@eoindosullivan Thanks very much for the information. I will try your solution now!

danshu commented 5 years ago

@eoindosullivan This fix works perfectly! Thanks for your help!