Open Huixxi opened 5 years ago
Generally, the logits are simply unnormalized log probabilities.
Mathematically, you would want to work with probability distributions in most cases, but numerically it often makes more sense to work directly with the logits: not only do you save a few ops, it can also be more efficient under the hood (e.g. sampling from a generic categorical distribution is usually implemented in terms of log-probabilities anyway).
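To illustrate why staying in log space is numerically safer, here is a small NumPy sketch (my own example, not from the code under discussion): a naive softmax overflows for large logits, while the standard max-shifted log-softmax stays stable.

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-probabilities from logits.

    Subtracting the max before exponentiating avoids overflow:
    exp(1000) is inf in float64, but the shifted values are small.
    """
    shifted = logits - np.max(logits)
    return shifted - np.log(np.sum(np.exp(shifted)))

logits = np.array([1000.0, 1001.0, 1002.0])

# Naive softmax overflows: exp(1000) -> inf, and inf/inf -> nan.
naive = np.exp(logits) / np.sum(np.exp(logits))

# The log-space version recovers the correct distribution.
stable = np.exp(log_softmax(logits))
print(naive)   # [nan nan nan]
print(stable)  # ~[0.0900 0.2447 0.6652]
```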
Specifically, `tf.random.categorical` does indeed work directly with logits. You can read the Python source code here, though it eventually traces back to internal bindings.
Yes, thanks for your reply. I tried to track the source code from the same link you provided, but I got stuck at line 392, so I still can't see how they deal with the raw logits. For now I just assume that `tf.random.categorical` converts the logits to probabilities internally.
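For what it's worth, categorical samplers don't necessarily need an explicit logits-to-probability conversion. One common approach is the Gumbel-max trick: adding Gumbel noise to the logits and taking the argmax yields an exact sample from `Categorical(softmax(logits))`. Whether TensorFlow's kernel does exactly this is an internal detail I'm not asserting; this is just a sketch of the idea in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_logits(logits, rng):
    """Gumbel-max trick: argmax(logits + Gumbel noise) is an exact
    sample from Categorical(softmax(logits)) -- no softmax needed."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return np.argmax(logits + gumbel)

logits = np.array([0.0, 1.0, 2.0])
probs = np.exp(logits) / np.sum(np.exp(logits))  # ~[0.09, 0.24, 0.67]

# Empirical frequencies over many draws match softmax(logits).
counts = np.bincount(
    [sample_from_logits(logits, rng) for _ in range(100_000)],
    minlength=3,
)
print(counts / 100_000)  # close to probs
```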
Another question: I think there is no need for `x = tf.convert_to_tensor(inputs, dtype=tf.float32)`. I've checked that the input is already a tensor (though I don't know why), so I commented that line out and the code still works fine.
Thanks for your great work; I've been reading your amazing blog recently. Maybe a stupid issue, but I don't really understand what `logits` means in your code. I only know that it is the raw output of the last `Dense` layer. How can it be passed into `tf.random.categorical(logits, 1)` directly without any preprocessing? I mean, it should at least go through a `softmax` layer to be converted into a probability distribution, right? Or is `softmax` an inner operation of `tf.random.categorical(logits, 1)`, so that we can pass the `logits` directly into that function to pick an action based on its probability? I tried to track its source code but failed. Another question: what do the `logits` mean, the `q-values` of each `action` or just their `probability`? As far as I know that layer should be a `policy-based` operation, so I think it has nothing to do with `q-values` or the `value-function`.
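The relationship asked about above can be sketched in plain NumPy (a conceptual mock-up of my own, not TensorFlow's actual implementation): the logits are the unnormalized log-probabilities of the policy over actions, softmax turns them into a proper distribution, and an action is drawn from that distribution. `tf.random.categorical` is documented to accept unnormalized log-probabilities, so this normalization happens internally and no explicit `softmax` layer is required.

```python
import numpy as np

rng = np.random.default_rng(42)

# Raw outputs of the last Dense layer: unnormalized log-probabilities
# of the policy over actions (not q-values).
logits = np.array([0.5, 2.0, -1.0])

# softmax turns them into a proper probability distribution...
probs = np.exp(logits - np.max(logits))
probs /= probs.sum()
print(probs)  # sums to 1, largest mass on the largest logit

# ...and the action is drawn according to those probabilities,
# which is what tf.random.categorical(logits, 1) does conceptually.
action = rng.choice(len(logits), p=probs)
print(action)
```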