TianhongDai / hindsight-experience-replay

This is the PyTorch implementation of Hindsight Experience Replay (HER), with experiments on all Fetch robotic environments.
MIT License

Why is Push not performing as expected? #26

Closed: root221 closed this issue 2 years ago

root221 commented 2 years ago

Hi,

Thank you for sharing the code. I ran it as suggested in the README:

mpirun -np 8 python -u train.py --env-name='FetchPush-v1' 2>&1 | tee push.log

But the success rate is much lower than the plot in the README: I only reach about 0.2 after 50 epochs. Do you have any idea why this might happen?

TianhongDai commented 2 years ago

@root221 Hi, that's strange. Could you please provide your system information? My guess is that MPI isn't working and only a single worker is doing the training. Could you check how many MPI workers are actually in use during training? The easiest way is to add a print statement in the launch function, as follows:

# imports mirror the repo's train.py (the rl_modules path is assumed from the repo layout)
import random
import numpy as np
import torch
import gym
from mpi4py import MPI
from rl_modules.ddpg_agent import ddpg_agent

def launch(args):
    # create the ddpg_agent
    env = gym.make(args.env_name)
    # set random seeds for reproducibility (offset by the MPI rank so workers differ)
    env.seed(args.seed + MPI.COMM_WORLD.Get_rank())
    random.seed(args.seed + MPI.COMM_WORLD.Get_rank())
    np.random.seed(args.seed + MPI.COMM_WORLD.Get_rank())
    torch.manual_seed(args.seed + MPI.COMM_WORLD.Get_rank())
    # **please add this**: each worker prints its rank, so you can count them
    print(MPI.COMM_WORLD.Get_rank())
    if args.cuda:
        torch.cuda.manual_seed(args.seed + MPI.COMM_WORLD.Get_rank())
    # get the environment parameters (helper defined in train.py)
    env_params = get_env_params(env)
    # create the ddpg agent to interact with the environment
    ddpg_trainer = ddpg_agent(args, env, env_params)
    ddpg_trainer.learn()
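
If MPI is working, you should see eight different ranks (0 through 7) printed at startup; if only rank 0 appears, mpirun launched a single worker. You can also verify this outside the training code with a tiny standalone script (a minimal sketch; check_mpi.py is just a hypothetical filename, not part of this repo):

# check_mpi.py: every worker reports its rank and the total world size
from mpi4py import MPI

comm = MPI.COMM_WORLD
print('rank {} of {}'.format(comm.Get_rank(), comm.Get_size()))

Running mpirun -np 8 python check_mpi.py should print ranks 0 through 7 with a world size of 8. If every process instead prints 'rank 0 of 1', mpi4py was likely built against a different MPI installation than the mpirun on your PATH.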
root221 commented 2 years ago

Hi,

I've figured out why. It was my mistake: I ran the code with --n-cycles=10.
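
For anyone hitting the same thing: each epoch runs n-cycles rollout-and-update cycles, so lowering the flag from its default (50, if I read the repo's arguments.py correctly, matching the standard HER setup) to 10 collects far less experience per epoch, which explains the lagging success-rate curve. Leaving the flag at its default, i.e.

mpirun -np 8 python -u train.py --env-name='FetchPush-v1' --n-cycles=50 2>&1 | tee push.log

should reproduce the plot in the README.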