haarnoja / softqlearning

Reinforcement Learning with Deep Energy-Based Policies
https://arxiv.org/abs/1702.08165

action distribution for estimating V #8


immars commented 6 years ago

Hi! First of all, thanks for this inspiring work!

In https://github.com/haarnoja/softqlearning/blob/59c0bbb7d665616f796ab101de65227c89ffd318/softqlearning/algorithms/sql.py#L164

it seems to me that the action is sampled from a uniform distribution when estimating V_{soft}.

In Sec. 3.2 of your original paper, it is stated that:

"For q_a we have more options. A convenient choice is a uniform distribution. However, this choice can scale poorly to high dimensions. A better choice is to use the current policy, which produces an unbiased estimate of the soft value, as can be confirmed by substitution."
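For concreteness, the uniform-proposal estimator this passage refers to would look roughly like the following (a minimal NumPy sketch, not the repo's code; `q_fn`, the action bounds, and `alpha` are assumed names):

```python
import numpy as np

def soft_value_uniform(q_fn, state, n=100, alpha=1.0, a_low=-1.0, a_high=1.0, dim=2):
    # V_soft(s) = alpha * log E_{a~q}[exp(Q(s,a)/alpha) / q(a)], uniform proposal q.
    actions = np.random.uniform(a_low, a_high, size=(n, dim))
    x = q_fn(state, actions) / alpha          # q_fn assumed to return (n,) Q-values
    log_vol = dim * np.log(a_high - a_low)    # -log q(a) for the uniform proposal
    m = x.max()                               # log-mean-exp, for numerical stability
    return alpha * (m + np.log(np.mean(np.exp(x - m))) + log_vol)
```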

Have you experimented with sampling from the current policy to estimate V? Or, how well does the uniform distribution do in practice, especially in higher-dimensional cases?

thanks,

haarnoja commented 6 years ago

Thanks for your question. We use uniform sampling because there is no direct way to evaluate the log-probabilities of actions under SVGD policies, which would be needed for the importance weights. Using some other tractable policy representation could fix this issue.

You're right that uniform samples do not necessarily scale well to higher dimensions. I haven't really studied how accurate the uniform value estimator is, but in my experience, using more samples to estimate the value improves performance only marginally.
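For contrast, the policy-as-proposal variant would need exactly those log-probabilities (a sketch assuming a hypothetical `policy.sample_with_log_prob` interface, which an SVGD-based sampler does not provide):

```python
import numpy as np

def soft_value_policy(q_fn, policy, state, n=100, alpha=1.0):
    # Same estimator with the current policy as proposal:
    # V_soft(s) = alpha * log E_{a~pi}[exp(Q(s,a)/alpha) / pi(a|s)].
    actions, log_probs = policy.sample_with_log_prob(state, n)  # assumed interface
    x = q_fn(state, actions) / alpha - log_probs                # log importance weights
    m = x.max()
    return alpha * (m + np.log(np.mean(np.exp(x - m))))
```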

immars commented 6 years ago

OK, I see. Thanks for the reply!

ghost commented 6 years ago

I could be totally misunderstanding, but doesn't Appendix C.2 discuss how one can use the sampling network for q_{a'} and derive the corresponding densities, so long as the Jacobian of a' with respect to epsilon' is non-singular?

haarnoja commented 6 years ago

I see, that's indeed confusing. You are right that we could compute the log-probs if the sampling network were invertible. My feeling is that, in our case, the network does not remain invertible, and that the log-probs we would obtain that way are wrong. We initially experimented with this trick (which is why we discuss it in the appendix), but in the end, uniform samples worked better. We'll fix this in the next version of the paper, thanks for pointing it out!
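For anyone following along, the Appendix C.2 computation is the standard change of variables, shown here on a toy affine "sampling network" (the actual network is a deep net, and as noted above it need not stay invertible):

```python
import numpy as np

def log_prob_via_change_of_variables(a, W, b):
    # Density of a' = W @ eps' + b with eps' ~ N(0, I); valid only while W
    # (the Jacobian da'/deps') stays non-singular.
    eps = np.linalg.solve(W, a - b)                 # invert the sampler
    d = len(a)
    log_p_eps = -0.5 * (eps @ eps + d * np.log(2 * np.pi))
    _, logdet = np.linalg.slogdet(W)                # log |det(da'/deps')|
    return log_p_eps - logdet
```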

ghost commented 6 years ago

My pleasure! Glad I was sort of on the right track. That's very interesting, especially since singular weight matrices or the choice of activation function are the only things off the top of my head that might make a feedforward net non-invertible. I might play around with that.

SJTUGuofei commented 6 years ago

Also, in "softqlearning/softqlearning/algorithms/sql.py":

ys = tf.stop_gradient(self._reward_scale * self._rewards_pl + (1 - self._terminals_pl) * self._discount * next_value)

I just wonder whether a single sample is sufficient for computing the expectation in $\hat{Q}$. Thanks a lot!

haarnoja commented 6 years ago

Do you mean the expectation over states and actions in Eq. (11)? It is OK, since the corresponding gradient estimator is unbiased, though it can have high variance.
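A quick numerical check of that claim, with a toy scalar Q and made-up numbers (the target enters the semi-gradient linearly thanks to the stop_gradient, so a single next-value sample averages out):

```python
import numpy as np

rng = np.random.default_rng(0)
q_theta, grad_q = 1.0, 1.0                 # toy scalar Q(s,a) and dQ/dtheta
r, gamma, v_mean = 0.5, 0.99, 2.0
next_values = rng.normal(v_mean, 1.0, size=100_000)  # noisy one-sample V(s') draws

# Semi-gradient of 0.5 * (ys - Q)^2, with ys held constant (the stop_gradient):
one_sample_grads = -(r + gamma * next_values - q_theta) * grad_q
exact_grad = -(r + gamma * v_mean - q_theta) * grad_q

print(one_sample_grads.mean(), exact_grad)  # means agree: the estimator is unbiased
print(one_sample_grads.std())               # ...but each single-sample gradient is noisy
```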

SJTUGuofei commented 6 years ago

I see. Thank you so much!