reinforcement-learning-kr / lets-do-irl

Inverse RL algorithms (APP, MaxEnt, GAIL, VAIL)
MIT License

Some questions or bugs? #8

Open densechen opened 4 years ago

densechen commented 4 years ago

Hi, thanks for providing such nice code. But when I try to play with "lets-do-irl/mujoco/vail", I am confused by this code in train_model.py:

```python
vdb_loss = criterion(learner, torch.ones((states.shape[0], 1))) + \
           criterion(expert, torch.zeros((demonstrations.shape[0], 1))) + \
           beta * bottleneck_loss
```

I think it should be:

```python
vdb_loss = criterion(learner, torch.zeros((states.shape[0], 1))) + \
           criterion(expert, torch.ones((demonstrations.shape[0], 1))) + \
           beta * bottleneck_loss
```

In other words, shouldn't the learner be pushed toward zeros and the expert toward ones? Or is either fine?
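For anyone who wants to compare the two labelings in isolation, here is a minimal runnable sketch. The tensor sizes, the beta value, and the placeholder `bottleneck_loss` are illustrative assumptions, not the repository's actual values:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()
beta = 0.01  # illustrative weight on the bottleneck term

# Illustrative discriminator outputs (probabilities in (0, 1)).
learner = torch.rand(32, 1)          # D(s, a) on learner samples
expert = torch.rand(64, 1)           # D(s, a) on expert demonstrations
bottleneck_loss = torch.tensor(0.1)  # stand-in for the VDB information-bottleneck term

# Labeling used in the repository snippet: learner -> 1, expert -> 0.
vdb_loss = criterion(learner, torch.ones_like(learner)) \
    + criterion(expert, torch.zeros_like(expert)) \
    + beta * bottleneck_loss

# Flipped labeling proposed in the question: learner -> 0, expert -> 1.
vdb_loss_flipped = criterion(learner, torch.zeros_like(learner)) \
    + criterion(expert, torch.ones_like(expert)) \
    + beta * bottleneck_loss
```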

By the way, the same file contains:

```python
beta = max(0, beta + args.alpha_beta * bottleneck_loss)
```

If beta is assigned beta + args.alpha_beta * bottleneck_loss, the next backward pass reports an error about beta being modified by an in-place operation.
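For what it is worth, a common way to sidestep that autograd error is to detach the bottleneck term before the dual update, so that beta stays a plain Python float with no graph history. A sketch, where `alpha_beta` mirrors `args.alpha_beta` from the snippet above and the concrete values are placeholders:

```python
import torch

alpha_beta = 1e-5  # dual step size (placeholder value)
beta = 0.0         # dual variable, kept as a plain Python float
bottleneck_loss = torch.tensor(0.2, requires_grad=True)  # stand-in for the VDB KL term

# Use .item() so the update works on a detached scalar; otherwise beta becomes
# a tensor attached to the graph, and reusing it in the next iteration's loss
# can trigger the in-place/version-counter error on backward.
beta = max(0.0, beta + alpha_beta * bottleneck_loss.item())
```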

zlpiscoming commented 2 weeks ago

One or zero does not matter. The goal of the discriminator is to distinguish expert trajectories from learner trajectories, so which side is labeled 1 does not matter.
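To make the symmetry concrete, here is a hedged sketch of the policy rewards that match each labeling. The `-log` forms are one common GAIL-style choice, not necessarily the exact reward used in this repository:

```python
import torch

d = torch.rand(32, 1)  # discriminator outputs D(s, a) on the learner's samples

# Labeling A (learner -> 1, expert -> 0, as in the repository's loss):
# the policy is rewarded for being scored like the expert, i.e. for pushing D toward 0.
reward_a = -torch.log(d + 1e-8)

# Labeling B (learner -> 0, expert -> 1, as proposed in the question):
# the policy is rewarded for pushing D toward 1, so the reward flips accordingly.
reward_b = -torch.log(1.0 - d + 1e-8)

# Either pairing defines the same adversarial game; what matters is that the
# reward used for the policy update is consistent with the chosen labels.
```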