openai / baselines

OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
MIT License

In baselines/common/distributions.py, CategoricalPd.sample() seems to have a bug. #1219

Open morenfang opened 1 year ago

morenfang commented 1 year ago

I found that when calling CategoricalPd.sample(), the sampling results are heavily biased. After inspection, it appears that self.logits should be replaced with tf.log(self.logits), per the Gumbel-max trick described on this page: https://en.wikipedia.org/wiki/Categorical_distribution

Current code:

def sample(self):
    # u ~ Uniform(0, 1); -tf.log(-tf.log(u)) is Gumbel noise
    u = tf.random_uniform(tf.shape(self.logits), dtype=self.logits.dtype)
    return tf.argmax(self.logits - tf.log(-tf.log(u)), axis=-1)

Proposed fix:

def sample(self):
    u = tf.random_uniform(tf.shape(self.logits), dtype=self.logits.dtype)
    # take the log so the Gumbel-max trick is applied to log-probabilities
    return tf.argmax(tf.log(self.logits) - tf.log(-tf.log(u)), axis=-1)

I also did experiments to verify this result. After adding tf.log, the sampling data conforms to the given distribution.
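For reference, the property at issue can be checked outside TensorFlow. The following is a minimal NumPy sketch (not from the baselines codebase) of the Gumbel-max trick: taking argmax over log-probabilities plus Gumbel noise recovers the categorical distribution, which is the behavior the proposed tf.log change restores when self.logits holds probabilities rather than log-probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
probs = np.array([0.1, 0.2, 0.7])  # an example categorical distribution
log_probs = np.log(probs)          # the quantity the Gumbel-max trick expects

n = 200_000
# Gumbel(0, 1) noise via inverse transform: -log(-log(U)), U ~ Uniform(0, 1)
u = rng.uniform(size=(n, probs.size))
gumbel = -np.log(-np.log(u))

# argmax over (log-probabilities + Gumbel noise) samples from Categorical(probs)
samples = np.argmax(log_probs + gumbel, axis=-1)
freq = np.bincount(samples, minlength=probs.size) / n
print(freq)  # empirical frequencies, close to [0.1, 0.2, 0.7]
```

If `probs` is used in place of `log_probs` in the argmax, the empirical frequencies no longer match the target distribution, which matches the biased sampling described above.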