adibyte95 / Mountain_car-OpenAI-GYM

solution to mountain car problem of OpenAI Gym
MIT License

Fundamentally different problem #1

Open lpbrown999 opened 5 years ago

lpbrown999 commented 5 years ago

Is this not a flawed approach, since you have fundamentally changed the problem by altering the reward? To compare against other solutions, the environment should be the same (including the rewards it gives).

adibyte95 commented 5 years ago

According to the problem statement from https://gym.openai.com/envs/MountainCar-v0/:

"A car is on a one-dimensional track, positioned between two "mountains". The goal is to drive up the mountain on the right; however, the car's engine is not strong enough to scale the mountain in a single pass. Therefore, the only way to succeed is to drive back and forth to build up momentum."

So our goal is to make the car cross the flag; that is the objective, and I can use whatever approach achieves it.

In my case, I found that the reward scheme I chose gives the best result. You can choose your own reward system to push the car in the correct direction. I don't think that is wrong, since it depends on the implementor.
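To make the point of contention concrete, here is a minimal sketch of what reward shaping on MountainCar looks like. The vanilla MountainCar-v0 reward is -1 on every step; the shaped variant below adds a bonus based on the car's state. The exact bonus (velocity magnitude plus a goal payout) is a hypothetical illustration, not the scheme used in this repository:

```python
GOAL_POSITION = 0.5  # x-position of the flag in MountainCar-v0


def vanilla_reward(position, velocity):
    """Reward from the unmodified MountainCar-v0: -1.0 every step,
    regardless of state, until the episode ends."""
    return -1.0


def shaped_reward(position, velocity):
    """Hypothetical shaped reward: keep the per-step penalty but add
    a bonus for building speed and a payout for reaching the goal.
    This makes the gradient of progress visible to the learner --
    and, per the objection above, changes the problem being solved."""
    bonus = abs(velocity)  # encourage momentum in either direction
    if position >= GOAL_POSITION:
        bonus += 10.0      # large payout for crossing the flag
    return -1.0 + bonus
```

An agent trained on `shaped_reward` gets feedback long before it ever reaches the flag, which is why shaped runs learn faster but are not directly comparable to results reported on the unmodified environment.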

lpbrown999 commented 5 years ago

Modifying the reward creates a new environment that is much easier to learn in. Yes, you can do that to achieve the best result, but it is then no longer comparable to solutions that use the vanilla environment. This is similar to https://github.com/openai/gym/issues/499#issuecomment-281859285

adibyte95 commented 5 years ago

If I cannot change the reward, then how can I improve on the results of another person who is using the same algorithm (e.g. DQN)?