Output names are local tensor names (after removing the index and the scope), so the friendly output name of `x/y/foo:0` is `foo`. This avoids requiring alias names as a new concept that TensorFlow doesn't intrinsically have. So you should be able to produce the same dictionary by naming tensors what you might have used as alias names in the older samples.
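For illustration, a minimal sketch of that stripping rule (`friendly_name` is a hypothetical helper made up for this comment, not code from the repo):

```python
# Hypothetical helper showing the rule described above: drop the
# ':<index>' suffix, then keep only the last scope component.
def friendly_name(tensor_name):
    local_name = tensor_name.split(':')[0]  # 'x/y/foo:0' -> 'x/y/foo'
    return local_name.split('/')[-1]        # 'x/y/foo'   -> 'foo'

print(friendly_name('x/y/foo:0'))  # prints: foo
```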
`FeedForwardClassification` does need more docs on the signature of the model it produces, i.e. its inputs and outputs.
It would make sense to call the outputs `label`, `score`, `label_N`, `score_N` for classification, and we should be able to make that change in Datalab.
ah, I understand your friendly name trick.
Goal: have the following output tensor names for classification (one way to get them is sketched below):
- `predicted`: the predicted class
- `score`: probability of the predicted class
- `predicted_2`: the 2nd most likely class
- `score_2`: probability of the 2nd class
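A hedged sketch of one way to produce those names, assuming `probabilities` is the model's softmax output (the variable names here are my own illustration, not the sample's code):

```python
import tensorflow as tf

# Wrap each top-k result in tf.identity so the graph contains tensors
# named 'predicted:0', 'score:0', 'predicted_2:0', and 'score_2:0'.
probabilities = tf.placeholder(tf.float32, shape=[None, 10])
values, indices = tf.nn.top_k(probabilities, k=2)

predicted = tf.identity(indices[:, 0], name='predicted')
score = tf.identity(values[:, 0], name='score')
predicted_2 = tf.identity(indices[:, 1], name='predicted_2')
score_2 = tf.identity(values[:, 1], name='score_2')
```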
I thought I could just change the keys in the output dict of `build_output()` in `_ff.py` (a sketch of that dict is below).
1) The dict is actually required to have a 'label' key for the eval metrics. This makes sense, but we should use a better mechanism, or document it in `training/_model.py:build_output()`.
2) I don't think that dict's 'score' key is consumed; for fun I changed it to 'score_xx' and got no errors.
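For context, a hypothetical sketch of the shape of that dict (the key names come from this thread; the tensor expressions are placeholders of mine, not the repo's code):

```python
import tensorflow as tf

logits = tf.placeholder(tf.float32, shape=[None, 10])

# 1) 'label' is looked up by the eval metrics, so that key must stay.
# 2) 'score' does not appear to be consumed, so renaming it goes unnoticed.
outputs = {
    'label': tf.argmax(logits, 1),
    'score': tf.reduce_max(tf.nn.softmax(logits), axis=1),
}
```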
Notes to self:
So where do friendly output tensor names come from? tl;dr: there are no friendly names. Fun story:
This works well when only outputting two simple tensors (score and label), but does not work for the following:

`output_alias_map {u'scores_1': u'output/scores_1:0', u'label_1': u'output/label_1:0', u'TopKV2': u'output/TopKV2:1', u'label': u'output/label:0'}`
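A small sketch of what I assume causes that map (not verified against the exporter code): both outputs of `tf.nn.top_k` come from a single `TopKV2` op, so they differ only by output index, and stripping the scope and index maps both to the same friendly name.

```python
import tensorflow as tf

# tf.nn.top_k creates one TopKV2 op; its two outputs share the op name
# and differ only in the output index.
with tf.name_scope('output'):
    probs = tf.placeholder(tf.float32, shape=[None, 10])
    values, indices = tf.nn.top_k(probs, k=2)

print(values.name)   # e.g. 'output/TopKV2:0'
print(indices.name)  # e.g. 'output/TopKV2:1'
```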