rlworkgroup / garage

A toolkit for reproducible reinforcement learning research.
MIT License

Add MAML Sampler for faster meta-task sampling #1115

Open naeioi opened 4 years ago

naeioi commented 4 years ago

I found that ProMP does sampling much faster than garage. This is because ProMP has a specialized sampler, called the MAML Sampler, that parallelizes sampling at the task level. I think this is also important for garage.

A MAML Sampler is a sampler that samples all tasks in one run (i.e. one call to sampler.obtain_samples()). This is contrary to the current sampler design, which handles a single task at a time. Because the MAML sampler controls task-level scheduling, it allows parallelism at the task level.

With a MAML sampler, the training loop would look something like this:

sampler = MAMLSampler(tasks)
for each batch:
    policies = num_tasks copies of policy
    paths_batch = []
    for each gradient step:
        # one call collects rollouts for all tasks in parallel
        paths_all_tasks = sampler.obtain_samples(policies)
        update policies using paths_all_tasks
        add paths_all_tasks to paths_batch
    optimize policy using policies and paths_batch

Currently, by contrast, a MAML training loop has to switch tasks outside of the sampler. Although the sampler does parallel sampling at the rollout level, this has higher overhead than the MAML sampler above:

for each batch:
    policies = num_tasks copies of policy
    paths_batch = []
    for each task i:
        for each gradient step:
            # one call samples only a single task at a time
            paths = sampler.obtain_samples(policies[i], tasks[i])
            update policies[i] using paths
            add paths to paths_batch
    optimize policy using policies and paths_batch
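
For concreteness, here is a minimal sketch of what the proposed interface could look like. The class and method bodies below are hypothetical (not existing garage code); the point is only that one obtain_samples() call takes one adapted policy per task and returns paths grouped by task, so the sampler itself can schedule tasks across workers.

```python
# Hypothetical sketch only -- MAMLSampler is not an existing garage class.
# The idea: one obtain_samples() call collects rollouts for *all* tasks,
# so task-level scheduling/parallelism lives inside the sampler.
import multiprocessing as mp


def _rollout_worker(args):
    """Collect rollouts for a single (policy, task) pair."""
    policy, task, n_rollouts = args
    # ... set the environment to `task`, run `n_rollouts` episodes with
    # `policy`, and return the resulting paths ...
    return []


class MAMLSampler:
    def __init__(self, tasks, n_rollouts_per_task, n_workers=None):
        self._tasks = tasks
        self._n_rollouts = n_rollouts_per_task
        self._pool = mp.Pool(n_workers or len(tasks))

    def obtain_samples(self, policies):
        """policies[i] is the adapted policy for tasks[i].

        Returns paths_all_tasks, where paths_all_tasks[i] holds the
        rollouts collected for tasks[i]. A real implementation would
        ship policy parameters to the workers rather than pickling
        whole policy objects.
        """
        jobs = [(p, t, self._n_rollouts)
                for p, t in zip(policies, self._tasks)]
        return self._pool.map(_rollout_worker, jobs)
```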
ahtsan commented 4 years ago

What if, in the vectorized environments (say we have 10 parallel environments), we assign each environment a different task?

naeioi commented 4 years ago

All environments share a single policy? @ahtsan

ahtsan commented 4 years ago

Yes

naeioi commented 4 years ago

@ahtsan That would be the ideal implementation of this MAML sampler.
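
A rough sketch of that idea, assuming each environment exposes a set_task() method as many meta-RL benchmarks do (the wrapper and names below are illustrative, not existing garage code): each parallel environment is pinned to a different task, and all of them are stepped with the same shared policy.

```python
# Illustrative sketch -- assumes each env exposes `set_task(task)`.
# Not existing garage code.
class TaskAssignedVecEnv:
    """Toy vectorized env where env i is permanently assigned tasks[i]."""

    def __init__(self, make_env, tasks):
        self._envs = [make_env() for _ in tasks]
        for env, task in zip(self._envs, tasks):
            env.set_task(task)  # pin each parallel env to one task

    def reset(self):
        return [env.reset() for env in self._envs]

    def step(self, actions):
        # All envs are stepped with actions from a single shared policy;
        # paths coming out of env i belong to tasks[i].
        results = [env.step(a) for env, a in zip(self._envs, actions)]
        obs, rewards, dones, infos = map(list, zip(*results))
        return obs, rewards, dones, infos
```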