google-deepmind / acme

A library of reinforcement learning components and agents

Issue with ACME's Distributional RL Implementation #308

Open FernandoRangel666 opened 1 year ago

FernandoRangel666 commented 1 year ago

Hi everyone,

I've been using ACME for some of my projects and ran into an issue with its distributional reinforcement learning implementation that I thought was worth bringing up.

The Issue

While using the losses.categorical function, I noticed it expects both q_tm1 and q_t to be instances of DiscreteValuedDistribution:

def categorical(q_tm1: networks.DiscreteValuedDistribution,
                r_t: tf.Tensor,
                d_t: tf.Tensor,
                q_t: networks.DiscreteValuedDistribution) -> tf.Tensor:

However, in learning.py, these variables are generated as tensors:

q_tm1 = self._critic_network(o_tm1, transitions.action)
q_t = self._target_critic_network(o_t, self._target_policy_network(o_t))
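For contrast, my understanding is that the loss assumes a critic whose final layer is a distributional head, so that these same calls would return a DiscreteValuedDistribution rather than a tensor. Here is a rough sketch of such a critic; I'm assuming acme.tf.networks.DiscreteValuedHead here, and the layer sizes and vmin/vmax/num_atoms are just placeholder values:

import sonnet as snt
from acme.tf import networks

# Placeholder support for the value distribution (illustrative numbers only).
vmin, vmax, num_atoms = -150.0, 150.0, 51

critic_network = snt.Sequential([
    # Concatenates observation and action into a single critic input.
    networks.CriticMultiplexer(),
    networks.LayerNormMLP([512, 512, 256], activate_final=True),
    # Final layer emits a DiscreteValuedDistribution with .values and .logits.
    networks.DiscreteValuedHead(vmin, vmax, num_atoms),
])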

With plain tensors, the problem arises when the function tries to access the values and logits attributes of q_tm1 and q_t:

z_t = tf.reshape(r_t, (-1, 1)) + tf.reshape(d_t, (-1, 1)) * q_t.values
p_t = tf.nn.softmax(q_t.logits)

Since q_t and q_tm1 are tensors, not instances of DiscreteValuedDistribution, this leads to an AttributeError.
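Here's a hypothetical minimal reproduction (not taken from the library) of what I'm seeing; the shapes and values are placeholders:

import tensorflow as tf
from acme.tf import losses

batch_size, num_atoms = 4, 51

# Plain tensors standing in for the critic outputs; they have no
# .values or .logits attributes.
q_tm1 = tf.zeros([batch_size, num_atoms])
q_t = tf.zeros([batch_size, num_atoms])
r_t = tf.zeros([batch_size])
d_t = tf.ones([batch_size])

# Expected to fail with something like:
# AttributeError: 'Tensor' object has no attribute 'values'
loss = losses.categorical(q_tm1, r_t, d_t, q_t)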

Quick Questions

- Is this intentional, or perhaps an oversight?
- If it's intentional, what's the reasoning behind it?
- If it's not, what's the best way to fix it?

That's it! Would love to get some insights into this. Thanks!

Best, Miguel Rangel