
Learning Invariant Representations for Reinforcement Learning without Reconstruction #36


KarlXing commented 3 years ago

Summary

Link

Learning Invariant Representations for Reinforcement Learning without Reconstruction

Author/Institution

Amy Zhang, Rowan McAllister, Roberto Calandra, Yarin Gal, Sergey Levine (UC Berkeley, FAIR, McGill, Oxford)

What is this

A representation learning method for RL: the encoder is trained so that L1 distances between latent states match a bisimulation metric, which discards task-irrelevant details of the observation (e.g., backgrounds) without any pixel reconstruction.

Comparison with previous research. What are the novelties/good points?

Unlike reconstruction-based or contrastive representation learning approaches, the proposed objective does not need to model every detail of the observation; it keeps only the information that matters for rewards and dynamics, which makes the representation robust to distractors.

Key points

Bisimulation metric
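
For reference, the two central equations as I read them from the paper (notation may differ slightly from the original): the on-policy bisimulation metric is the fixed point of a reward-difference-plus-Wasserstein recursion, and the encoder φ is trained so that L1 distances in latent space regress onto that metric.

```latex
% On-policy bisimulation metric: the fixed point of
d^{\pi}(s_i, s_j) = \bigl| r^{\pi}_{s_i} - r^{\pi}_{s_j} \bigr|
    + \gamma \, W_1\!\bigl( \mathcal{P}^{\pi}(\cdot \mid s_i),\,
                            \mathcal{P}^{\pi}(\cdot \mid s_j);\, d^{\pi} \bigr)

% Encoder objective, with z = \phi(s) and \bar{z} a stop-gradient copy:
J(\phi) = \Bigl( \lVert z_i - z_j \rVert_1 - \lvert r_i - r_j \rvert
    - \gamma \, W_2\!\bigl( \hat{\mathcal{P}}(\cdot \mid \bar{z}_i, a_i),\,
                            \hat{\mathcal{P}}(\cdot \mid \bar{z}_j, a_j) \bigr) \Bigr)^{2}
```

The W_2 term between the learned Gaussian latent dynamics has a closed form, which keeps the objective cheap to compute.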

How did the authors prove the effectiveness of the proposal?

  1. Use MuJoCo (DeepMind Control Suite) tasks to show that their approach achieves higher rewards or faster convergence
  2. Use CARLA to show the generalization advantage of their approach

Any discussions?

It's actually related to our work Domain Adaptation In Reinforcement Learning Via Latent Unified State Representation, which falls into the reconstruction-based category of state representation learning. I agree that representation learning can matter a lot for RL. Bisimulation is also interesting and has good potential for further research; it could see wider use in RL.
Their idea is simple but interesting, the proof in Section 5 is a good plus, and their experiments are solid.
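
For concreteness, here is a minimal PyTorch sketch of that encoder objective; the tensor names and the diagonal-Gaussian latent dynamics model are my assumptions for illustration, not code from the paper.

```python
import torch
import torch.nn.functional as F

def bisimulation_loss(z, reward, dyn_mu, dyn_sigma, gamma=0.99):
    """Sketch of the bisimulation-based encoder objective on one batch.

    z         : (B, D) latent states from the encoder
    reward    : (B,)   reward observed at each state
    dyn_mu    : (B, D) mean of the learned latent dynamics P(. | z, a)
    dyn_sigma : (B, D) std  of the learned latent dynamics
    """
    # Pair each sample with a random partner from the same batch.
    perm = torch.randperm(z.size(0))

    # L1 distance between latent codes: the learned "metric".
    z_dist = torch.abs(z - z[perm]).sum(dim=-1)

    # Immediate reward difference term.
    r_dist = torch.abs(reward - reward[perm])

    # Closed-form 2-Wasserstein distance between diagonal Gaussians.
    w2 = torch.sqrt(
        (dyn_mu - dyn_mu[perm]).pow(2).sum(-1)
        + (dyn_sigma - dyn_sigma[perm]).pow(2).sum(-1)
    )

    # Regress latent distances onto the bisimulation target (no gradient
    # flows through the target, mimicking the paper's stop-gradient).
    target = r_dist + gamma * w2
    return F.mse_loss(z_dist, target.detach())
```

Pairing states via a random batch permutation samples pairwise distances without an O(B²) computation.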

What should I read next?

  1. Learning continuous latent space models for representation learning
  2. Scalable methods for computing state similarity in deterministic Markov decision processes

nagataka commented 3 years ago