Closed · MarcoMeter closed this 2 weeks ago
pre-commit fails because of two "obsolete" imports: memory_gym and PoMEnv. Without those imports, the environments are not registered inside gymnasium.
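A common way out is to keep the imports and silence the linter explicitly, since the import's side effect is what registers the environment IDs with gymnasium. A sketch (the exact module names and noqa codes in the PR may differ):

```python
# Importing these packages registers their environment IDs with gymnasium,
# so the imports must stay even though they look unused.
# F401 is flake8's "imported but unused" code.
import memory_gym  # noqa: F401
import pom_env  # noqa: F401

import gymnasium as gym

env = gym.make("MortarMayhem-v0")  # only resolvable because memory_gym was imported
```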
I added a script to load a trained model and then watch an episode.
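For illustration, a minimal sketch of such an "enjoy" script, assuming a CleanRL-style Agent with a get_action_and_value(obs) method and a checkpoint that stores its state_dict; the PR's actual script additionally has to carry the Transformer-XL episodic memory between steps, which is omitted here:

```python
import gymnasium as gym
import torch


def watch_episode(agent: torch.nn.Module, env_id: str, checkpoint_path: str, device: str = "cpu") -> None:
    """Load a trained agent from a checkpoint and render one episode."""
    agent.load_state_dict(torch.load(checkpoint_path, map_location=device))
    agent.eval()

    env = gym.make(env_id, render_mode="human")
    obs, _ = env.reset()
    done = False
    while not done:
        with torch.no_grad():
            obs_tensor = torch.as_tensor(obs, dtype=torch.float32, device=device).unsqueeze(0)
            action, _, _, _ = agent.get_action_and_value(obs_tensor)
        obs, _, terminated, truncated, _ = env.step(action.squeeze(0).cpu().numpy())
        done = terminated or truncated
    env.close()
```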
These environments require memory and converge pretty fast. That's why I included those initially. MemoryGym environments take more time and resources (especially GPU memory, due to the cached hidden states of Transformer-XL).
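To give a rough sense of that footprint (illustrative numbers only, not the PR's defaults), the cached hidden states scale with the number of envs, the memory length, the number of TrXL layers, and the hidden dimension:

```python
# Back-of-the-envelope estimate of one copy of the TrXL episodic memory.
num_envs, memory_length, num_layers, embed_dim = 32, 256, 3, 384  # made-up values
bytes_per_float32 = 4
memory_mib = num_envs * memory_length * num_layers * embed_dim * bytes_per_float32 / 2**20
print(f"~{memory_mib:.0f} MiB for a single copy of the cached hidden states")  # ~36 MiB
```

Depending on how the rollout data is stored during training, the effective footprint can grow well beyond this single copy.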
I still have to run the benchmarks and write documentation. Besides that, the single-file implementation is basically done. I tried to stay close to ppo_atari_lstm.py.
Hey! This looks pretty impressive! Just curious, what is the state of this PR?
Hi @roger-creus, the benchmarks just completed. So the next step is to prepare the reports and then write the docs.
Nice! Looking forward to the results
It reproduces the results of my paper: https://arxiv.org/abs/2309.17207
and this is the original implementation: https://github.com/MarcoMeter/neroRL
I'm curious about how it performs in other environments (e.g. atari?)
IMHO, here are the remaining TODOs of this PR:
- Rename blocks to layers (e.g. trxl_num_layers or TransformerLayer(nn.Module)); a rough sketch of such a layer follows below.
- #noqa in cleanrl/ppo_trxl/pom_env.py?
- Keep or remove the Proof of Memory environment (cleanrl/ppo_trxl/pom_env.py)?
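For what it's worth, here is a rough sketch of what a TransformerLayer(nn.Module) could look like, purely to illustrate the naming suggestion; it uses torch's nn.MultiheadAttention and a plain pre-norm residual block, whereas the PR's actual layer (positional encoding, gating, etc.) differs:

```python
import torch
import torch.nn as nn


class TransformerLayer(nn.Module):
    """One pre-norm attention layer that attends over cached memories."""

    def __init__(self, embed_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, query: torch.Tensor, memory: torch.Tensor) -> torch.Tensor:
        # query: (batch, 1, embed_dim) embedding of the current step
        # memory: (batch, memory_length, embed_dim) cached hidden states
        keys = torch.cat([memory, query], dim=1)  # attend over memory plus the current step
        attended, _ = self.attn(self.norm1(query), keys, keys)
        hidden = query + attended
        return hidden + self.mlp(self.norm2(hidden))
```

A model stacking trxl_num_layers of these would then match the proposed naming.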
@roger-creus I don't have results on Atari.
> Keep or remove the Proof of Memory environment (cleanrl/ppo_trxl/pom_env.py)?
Feel free to keep it.
Do you know why the wandb chart looks like this?
> Do you know why the wandb chart looks like this?
What are you referring to? This is how I created the report:
@echo off
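REM Fetch the tracked ppo_trxl runs from the openrlbenchmark/cleanRL W&B project,
REM plot the listed MemoryGym environments, and write rliable plots plus a report to memgym/compare.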
python -m openrlbenchmark.rlops ^
--filters "?we=openrlbenchmark&wpn=cleanRL&ceik=env_id&cen=exp_name&metric=episode/r_mean" ^
"ppo_trxl?cl=PPO-TrXL" ^
--env-ids MortarMayhem-Grid-v0 MortarMayhem-v0 Endless-MortarMayhem-v0 MysteryPath-Grid-v0 MysteryPath-v0 Endless-MysteryPath-v0 SearingSpotlights-v0 Endless-SearingSpotlights-v0 ^
--no-check-empty-runs ^
--pc.ncols 3 ^
--pc.ncols-legend 3 ^
--rliable ^
--rc.score_normalization_method maxmin ^
--rc.normalized_score_threshold 1.0 ^
--rc.sample_efficiency_plots ^
--rc.sample_efficiency_and_walltime_efficiency_method Median ^
--rc.performance_profile_plots ^
--rc.aggregate_metrics_plots ^
--rc.sample_efficiency_num_bootstrap_reps 10 ^
--rc.performance_profile_num_bootstrap_reps 10 ^
--rc.interval_estimates_num_bootstrap_reps 10 ^
--output-filename memgym/compare ^
--scan-history ^
--report
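(The ^ characters are Windows batch line continuations; the same command runs in a POSIX shell if they are replaced with \.)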
Thanks for your feedback =)
Oh I meant the error bar (shadow region) is very large for some reason, but it’s fine. I have added you to the list of contributors. Feel free to merge after CI passes.
It seems that other reports have this as well, like: https://wandb.ai/openrlbenchmark/cleanrl/reports/CleanRL-PPG-vs-PPO-results--VmlldzoyMDY2NzQ5
I did some refinements:
My last step before merging is to make sure that poetry and the dependencies blend well.
> My last step before merging is to make sure that poetry and the dependencies blend well.
Done.
Description
Implementation of PPO with Transformer-XL as episodic memory. Based on this repo and paper.
Types of changes
Checklist:
- pre-commit run --all-files passes (required).
- Documentation updated and previewed via mkdocs serve.

If you need to run benchmark experiments for performance-impacting changes:
- Tracked experiments submitted to the benchmark W&B project, optionally with --capture_video.
- RLops performed via python -m openrlbenchmark.rlops.
- Learning curves generated by the python -m openrlbenchmark.rlops utility added to the documentation.
- Links to the tracked experiments in W&B, available as a W&B report (created via python -m openrlbenchmark.rlops ....your_args... --report), added to the documentation.