Will probably need a lookup-table model. Actor and critic are tables. Learning via policy iteration and value iteration -- see the S&B book, p. 74.
Scenario: one iteration is a sweep over the whole state space (the gridworld).
Need a terminal state. The optimal policy is then a set of arrows, one per cell, indicating how to optimally reach the target.
Each cell has a negative reward; some cells have lower negative rewards than others.
Only the target (terminal) state has a positive reward.
Need an animation like 3wrobot, but with the gridworld in the upper-left corner: arrows show the optimal actions, cell colors run from red to green for low to high reward, and the numbers in the cells are the values.
No stochastics needed -- just a deterministic gridworld. For an example, see FrozenLake.
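A minimal sketch of what the tabular critic, a value-iteration sweep, and the greedy "arrow" policy could look like. The grid size, reward layout, goal cell, and discount factor below are illustrative assumptions, not the final design:

```python
import numpy as np

# Toy deterministic gridworld -- all numbers here are placeholder assumptions.
ROWS, COLS = 4, 4
GOAL = (3, 3)                                # terminal target state
STEP_REWARD = np.full((ROWS, COLS), -1.0)    # every cell has a negative reward
STEP_REWARD[1, 1] = -5.0                     # some cells are worse than others
STEP_REWARD[GOAL] = 10.0                     # only the target is positive
GAMMA = 0.9

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    """Deterministic transition: move if in bounds, otherwise stay put."""
    r, c = state
    dr, dc = ACTIONS[action]
    nr, nc = r + dr, c + dc
    return (nr, nc) if 0 <= nr < ROWS and 0 <= nc < COLS else (r, c)

def value_iteration(tol=1e-6):
    """Critic: a table of state values; one iteration = one full sweep."""
    V = np.zeros((ROWS, COLS))
    while True:
        delta = 0.0
        for r in range(ROWS):
            for c in range(COLS):
                if (r, c) == GOAL:
                    continue                 # terminal state keeps value 0
                best = max(
                    STEP_REWARD[step((r, c), a)] + GAMMA * V[step((r, c), a)]
                    for a in ACTIONS
                )
                delta = max(delta, abs(best - V[r, c]))
                V[r, c] = best
        if delta < tol:
            return V

def greedy_policy(V):
    """Actor: a table mapping each cell to its best arrow."""
    policy = {}
    for r in range(ROWS):
        for c in range(COLS):
            if (r, c) == GOAL:
                continue
            policy[(r, c)] = max(
                ACTIONS,
                key=lambda a: STEP_REWARD[step((r, c), a)]
                + GAMMA * V[step((r, c), a)],
            )
    return policy

V = value_iteration()
pi = greedy_policy(V)
```

The same tables plug into the animation: `V` gives the numbers (and the red-to-green coloring) per cell, `pi` gives the arrows.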