Farama-Foundation / Gymnasium

An API standard for single-agent reinforcement learning environments, with popular reference environments and related utilities (formerly Gym)
https://gymnasium.farama.org
MIT License

Update docstring of normalize reward #1136

Closed · ffelten closed this 1 month ago

ffelten commented 1 month ago

Description

As discussed in https://discord.com/channels/961771112864313344/1243134869559578675/1270855568336359435, I updated the docstring to reflect the fact that the wrapper does not normalize the rewards themselves.


ffelten commented 1 month ago

> Could you update the top sentence and the note to include a discussion of the fact that it is normalizing the discounted future rewards for an episode.

Not sure I understand what you want in the end, haha.

> Would you be interested in adding a feature where the reward is actually normalized?

How can you normalize without knowing the reward bounds?

pseudo-rnd-thoughts commented 1 month ago

The critical feature of NormalizeReward is that it doesn't normalize the rewards to have a mean of 0 and std of 1, but rather that the discounted returns have a mean of 0 and std of 1. Could the note explain this, and the docstring state only that it normalizes the sum of discounted rewards?

> How can you normalize without knowing the reward bounds?

Like observation normalization, you use a running mean and standard deviation to normalize. These estimates evolve over time as different environment rewards are visited, but it does the expected case of normalizing the individual rewards, not the cumulative rewards.