rll / rllab

rllab is a framework for developing and evaluating reinforcement learning algorithms, fully compatible with OpenAI Gym.

When the rllab TRPO code is applied to the MountainCar environment, it does not climb the mountain well even after 500 iterations #216

Closed. haanvid closed this issue 4 years ago.

haanvid commented 6 years ago

When the rllab TRPO code is applied to the MountainCar environment, the agent does not climb the mountain well even after 500 iterations.

This is strange, since the TRPO implementation from OpenAI baselines (https://github.com/openai/baselines/tree/master/baselines/trpo_mpi) climbs the mountain well.
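
For reference, the baselines trpo_mpi implementation can be launched on MountainCar roughly as in the sketch below, which mirrors the baselines run_mujoco example of that era. Exact module paths, argument names, and defaults may differ between baselines versions, so treat this as illustrative rather than the exact setup used for the video.

import gym
import baselines.common.tf_util as U
from baselines.ppo1.mlp_policy import MlpPolicy
from baselines.trpo_mpi import trpo_mpi

# A TF session is required before building the policy networks
U.make_session(num_cpu=1).__enter__()

def policy_fn(name, ob_space, ac_space):
    # Two hidden layers of 32 units, to mirror the rllab policy below
    return MlpPolicy(name=name, ob_space=ob_space, ac_space=ac_space,
                     hid_size=32, num_hid_layers=2)

env = gym.make("MountainCar-v0")
trpo_mpi.learn(env, policy_fn,
               timesteps_per_batch=1024, max_kl=0.01, cg_iters=10,
               cg_damping=0.1, max_timesteps=int(1e6),
               gamma=0.99, lam=0.98, vf_iters=5, vf_stepsize=1e-3)
env.close()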

haanvid commented 6 years ago

I used the code below to run TRPO on the MountainCar environment.

from rllab.algos.trpo import TRPO
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize
from rllab.misc.instrument import run_experiment_lite
from rllab.policies.categorical_mlp_policy import CategoricalMLPPolicy

def run_task(*_):
    # Please note that different environments with different action spaces may
    # require different policies. For example with a Discrete action space, a
    # CategoricalMLPPolicy works, but a Box action space may need a
    # GaussianMLPPolicy (see the trpo_gym_pendulum.py example).
    env = normalize(GymEnv("MountainCar-v0"))

    policy = CategoricalMLPPolicy(
        env_spec=env.spec,
        # The neural network policy should have two hidden layers, each with 32 hidden units.
        hidden_sizes=(32, 32)
    )

    baseline = LinearFeatureBaseline(env_spec=env.spec)

    algo = TRPO(
        env=env,
        policy=policy,
        baseline=baseline,
        batch_size=4000,
        max_path_length=env.horizon,
        n_itr=500,
        discount=0.99,
        step_size=0.01,
        # Uncomment both lines (this and the plot parameter below) to enable plotting
        # plot=True,
    )
    algo.train()

run_experiment_lite(
    run_task,
    # Number of parallel workers for sampling
    n_parallel=1,
    # Only keep the snapshot parameters for the last iteration
    snapshot_mode="last",
    # Specifies the seed for the experiment. If this is not provided, a random seed
    # will be used
    seed=1,
    # plot=True,
)
haanvid commented 6 years ago

I've uploaded the rendered result here:

https://www.youtube.com/watch?v=Sg5Jt20Jt8Y
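
For anyone who wants to reproduce such a rendering locally, a snapshot saved by run_experiment_lite can be replayed roughly as sketched below (the snapshot path is a placeholder; rllab also ships scripts/sim_policy.py for the same purpose):

import joblib
from rllab.sampler.utils import rollout

# Placeholder path: run_experiment_lite writes params.pkl under data/local/<exp_name>/
snapshot = joblib.load("data/local/experiment/<exp_name>/params.pkl")
policy = snapshot["policy"]
env = snapshot["env"]

# Replay one episode with rendering enabled
rollout(env, policy, max_path_length=env.horizon, animated=True)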

ryanjulian commented 6 years ago

Does the OpenAI version use the same policy network, value-function baseline, and hyperparameters as your rllab setup?

HuangJiaLian commented 4 years ago

Hi Haanvid, I'm running into the same problem. Have you solved it yet? @haanvid

ryanjulian commented 4 years ago

Hi all -- this repository is unmaintained, but the spirit of rllab lives on in the garage project at https://github.com/rlworkgroup/garage.

haanvid commented 4 years ago

@HuangJiaLian After running into some issues, I switched to stable-baselines. @ryanjulian I suppose the two setups differ a bit, but given that TRPO is one of the more stable RL algorithms, it should still work on a toy domain like MountainCar.
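
For anyone landing here later, a minimal stable-baselines (2.x, TensorFlow 1.x) setup looks roughly like the sketch below; the hyperparameters are the library defaults, not values tuned for MountainCar.

# Minimal sketch using stable-baselines 2.x (TF1); library-default hyperparameters.
from stable_baselines import TRPO
from stable_baselines.common.policies import MlpPolicy

model = TRPO(MlpPolicy, "MountainCar-v0", verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("trpo_mountaincar")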