Hello!
The code was structured to allow for extensions to other environments based on CommonRLInterface. The reward for each environment is an extension of the reward function defined in CommonRLInterface.
Code locations for the reward function in the different environments:
- CartPole: I did not redefine the reward function and used the one implemented in ReinforcementLearning.jl here
- CarRacing: https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/envs/car_racing.jl#L203
- MultiCarRacing: https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/envs/multi-car_racing.jl#L138
- MuJoCo: It uses the default reward from the Python implementation and is defined here: https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/envs/envpool_env.jl#L105
From Line 174 that you linked, that is how we calculate Line 9 of the pseudocode: we are just incrementally adding the costs across all time steps. reward(env) is the state-dependent cost at each time step and returns the terminal cost at the final time step, so it represents φ(X) + c(X) from the pseudocode. Line 167 is where the control costs are calculated, which is the third term in Line 9 of the pseudocode in the paper.
I hope this helps clear up any confusion.
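In case it is useful, here is a minimal, self-contained sketch of that accumulation from Line 9. Everything in it (the function name, signature, and example numbers) is invented for illustration; it is not the repository's implementation.

```julia
# Minimal sketch of the cost accumulation in Line 9 of the pseudocode (hypothetical
# names, not the MPOPIS code). state_costs[t] plays the role of the state-dependent
# cost returned by reward(env) at step t, with the terminal cost φ(X) folded into the
# final entry, and control_costs[t] stands in for the control-cost term from line 167.
function trajectory_cost_sketch(state_costs::Vector{Float64}, control_costs::Vector{Float64})
    S = 0.0
    for t in eachindex(state_costs)
        S += state_costs[t] + control_costs[t]   # incrementally add costs across time steps
    end
    return S
end

trajectory_cost_sketch([1.0, 0.5, 2.0], [0.1, 0.1, 0.1])   # returns 3.8
```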
Yes, the explanation is clear to me, and thank you for your reply! I'll let you know if I have any questions as my work progresses.
Hello!
I have a small question about the dimension of the reward:
https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/envs/car_racing.jl#L203
I'm a little confused about the input/output of the environment state. It seems the function here uses env.state[4:5] to calculate the velocity cost. Is that for a single state, or for a series of states? (In other words, what is the data structure of the environment's state?) I ask because both trajectory_cost and control_costs here are arrays with K elements:
https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/mppi_mpopi_policies.jl#L174
If the reward function calculates the cost over a series of states, where can I find the function that applies the system dynamics to obtain the state corresponding to the input? Is it the step function here:
https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/envs/car_racing.jl#L284
And what are the purposes of the two cost functions here, which serve two kinds of environments?
https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/mppi_mpopi_policies.jl#L148
https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/mppi_mpopi_policies.jl#L186
Could you explain the differences between them and the purposes they serve?
Thank you for your patience!
Sorry for the delay in response.
For the CarRacingEnv, the state space is the same as the observation space defined here. The reward function you linked (line 203) takes a single environment and returns a single reward. Since the velocity components are the 4th and 5th entries of the state vector, it uses env.state[4:5] for the velocity.
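To make that concrete, here is a purely illustrative sketch of a reward that reads the velocity from fixed indices of a single environment's state vector. The struct, the quadratic penalty, and v_target are made up for this example; the actual cost terms live at car_racing.jl line 203.

```julia
# Hypothetical example, not the MPOPIS code: a reward defined on a single
# environment that reads the velocity from indices 4:5 of the state vector.
struct ToyCarEnv
    state::Vector{Float64}   # e.g. [x, y, heading, vx, vy, ...]
end

function toy_reward(env::ToyCarEnv; v_target = 10.0)
    vx, vy = env.state[4:5]        # velocity components, as in the linked line
    speed = hypot(vx, vy)
    return -(speed - v_target)^2   # penalize deviation from a (made-up) target speed
end

toy_reward(ToyCarEnv([0.0, 0.0, 0.0, 8.0, 1.0]))   # single env in, single reward out
```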
In the calculate_trajectory_costs function, since we have K samples, we create K different environments and simulate them across T time steps (here). The control cost in line 204 is a scalar value, as it comes from a single sample k and time step t. After stepping through all K samples for the T time steps, we have a trajectory cost for each sample, i.e. a vector of size K.
So the reward function returns the reward based on the environment at a given state (contained within the environment struct), while the calculate_trajectory_costs function computes the cost of each of the K samples across the time horizon by calling the reward function at every time step for every sample (and combining it with the control cost).
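As a rough illustration of that K-samples-by-T-steps structure, here is a hypothetical sketch that reuses ToyCarEnv and toy_reward from the sketch above. It is not calculate_trajectory_costs; the dynamics and the λ-weighted control term are placeholders, whereas in MPOPIS the real dynamics come from the environment's step function (e.g. car_racing.jl line 284).

```julia
# Hypothetical sketch: loop K sampled control sequences over T time steps,
# accumulating a state cost and a control cost per sample.
function toy_trajectory_costs(env::ToyCarEnv, controls::Array{Float64,3}; λ = 1.0)
    K, T, _ = size(controls)                    # K samples, T time steps, m controls
    costs = zeros(K)
    for k in 1:K
        env_k = ToyCarEnv(copy(env.state))      # independent environment copy per sample
        for t in 1:T
            u = controls[k, t, :]
            toy_step!(env_k, u)                 # apply the (toy) system dynamics
            costs[k] -= toy_reward(env_k)       # state-dependent cost at step t
            costs[k] += λ * sum(abs2, u)        # stand-in for the control-cost term
        end
    end
    return costs                                # one trajectory cost per sample, length K
end

# Toy dynamics: two control inputs added directly to the velocity entries.
function toy_step!(env::ToyCarEnv, u::AbstractVector)
    env.state[4:5] .+= u
    env.state[1:2] .+= env.state[4:5]
    return env
end

toy_trajectory_costs(ToyCarEnv(zeros(6)), randn(5, 10, 2))   # 5 samples, 10 steps, 2 controls
```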
The function defined here, https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/mppi_mpopi_policies.jl#L186, is the main function for calculating the trajectory costs of each sample for most environments that are a subtype of AbstractEnv. The function defined here, https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/mppi_mpopi_policies.jl#L148, is the same function but for use with the EnvpoolEnv environment. That version is for MuJoCo environments and uses EnvPool to help run numerous MuJoCo simulations at once. In it, reward(env) returns a vector of size K, where K is the number of environments, so we only need to loop over the T time steps.
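For contrast, here is a similarly hypothetical sketch of the batched pattern (not the EnvpoolEnv code): when one environment object holds all K sampled states and the reward returns a length-K vector, only the T time steps need an explicit loop. The toy assumes two control inputs added directly to the velocity columns.

```julia
# Hypothetical batched variant: one environment object holds all K sampled states,
# so the reward returns a length-K vector and the only explicit loop is over T.
struct ToyBatchedEnv
    states::Matrix{Float64}   # K × n state matrix, one row per sample
end

# Made-up vectorized reward: penalize speed for all K samples at once.
toy_batched_reward(env::ToyBatchedEnv) = -vec(sum(abs2, env.states[:, 4:5], dims = 2))

function toy_batched_trajectory_costs(env::ToyBatchedEnv, controls::Array{Float64,3}; λ = 1.0)
    K, T, _ = size(controls)
    costs = zeros(K)
    for t in 1:T
        u_t = controls[:, t, :]                          # K × 2 controls at step t
        env.states[:, 4:5] .+= u_t                       # toy dynamics: step all K samples together
        costs .-= toy_batched_reward(env)                # length-K state costs in one call
        costs .+= λ .* vec(sum(abs2, u_t, dims = 2))     # per-sample control costs
    end
    return costs
end

toy_batched_trajectory_costs(ToyBatchedEnv(zeros(5, 6)), randn(5, 10, 2))   # vector of length 5
```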
I hope this helps clear up some of the confusion. Let me know if you have any more questions.
Hello, I have a question about the reward() function here: https://github.com/sisl/MPOPIS/blob/ceff32bdc81cfb4b00e2115d40447549c174647f/src/mppi_mpopi_policies.jl#L174
I can only find reward() in CommonRLInterface. Is there an implementation of reward() in this MPPI code base, and how is the reward() function here related to the pseudocode in the paper? Thank you!