Closed ffelten closed 1 month ago
Could you update the top sentence and the note to include a discussion that it is normalizing the discounted future rewards for an episode.
Not sure I understand what you want in the end haha
Would you be interested in adding a feature that the reward is actually normalized?
How can you normalize without knowing the reward bounds?
The critical feature of NormalizeReward
is that it doesn't normalize the rewards themselves to have a mean of 0 and std of 1, but rather scales them so that the discounted rewards have a mean of 0 and std of 1.
Could the note explain this in detail, and the docstring state only that it normalizes the sum of discounted rewards?
> How can you normalize without knowing the reward bounds?
Like observation normalization, you use a running mean and standard deviation to normalize. So this evolves over time as different environment rewards are visited, but it does the expected case of normalizing the individual rewards, not the cumulative rewards.
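To make the discussion concrete, here is a minimal sketch (not Gymnasium's actual implementation) of the scheme described above: a running estimate of the discounted return's statistics is maintained, and each individual reward is divided by the return's standard deviation. Note that it only scales the rewards; it does not subtract a mean, which is part of why "normalize" is a misleading word here. The helper names `RunningMeanStd` and `normalize_rewards` are illustrative, not library API.

```python
import numpy as np


class RunningMeanStd:
    """Tracks a running mean and variance via parallel-variance batch updates."""

    def __init__(self):
        self.mean, self.var, self.count = 0.0, 1.0, 1e-4

    def update(self, x):
        batch_mean, batch_var, batch_count = np.mean(x), np.var(x), len(x)
        delta = batch_mean - self.mean
        total = self.count + batch_count
        m2 = (
            self.var * self.count
            + batch_var * batch_count
            + delta**2 * self.count * batch_count / total
        )
        self.mean += delta * batch_count / total
        self.var, self.count = m2 / total, total


def normalize_rewards(rewards, gamma=0.99, eps=1e-8):
    """Scale each reward by the running std of the discounted return estimate."""
    rms = RunningMeanStd()
    discounted_return = 0.0
    out = []
    for r in rewards:
        # Running estimate of the discounted return seen so far.
        discounted_return = discounted_return * gamma + r
        rms.update([discounted_return])
        # Scaling only -- no mean subtraction of the reward itself.
        out.append(r / np.sqrt(rms.var + eps))
    return out
```

The point of the sketch is that the statistics are computed over the *discounted returns*, while the division is applied to each *individual reward*, so the per-step rewards themselves end up with neither zero mean nor unit variance.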
Description
As discussed in https://discord.com/channels/961771112864313344/1243134869559578675/1270855568336359435, I updated the docstring to reflect the fact that it does not normalize the rewards themselves.
Fixes # (issue)
Checklist:
- `pre-commit` checks run with `pre-commit run --all-files` (see `CONTRIBUTING.md` for instructions to set it up)