Closed sdpkjc closed 1 year ago
Next -> Add test cases
Can we modify the existing test cases to cover this, or should we create a new test file for it?
```python
import subprocess


def test_dqn_jax():
    # Train for a handful of steps with --save-model so a checkpoint is written.
    subprocess.run(
        "python cleanrl/dqn_atari_jax.py --save-model True --learning-starts 10 --total-timesteps 16 --buffer-size 10 --batch-size 4",
        shell=True,
        check=True,
    )
```
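A follow-up assertion could verify that the checkpoint actually landed on disk. A minimal sketch, assuming `--save-model` writes the checkpoint to `runs/<run_name>/<exp_name>.cleanrl_model` (that path layout is an assumption; adjust it to match the actual script). The demo fakes a run directory so the helper can be exercised without training:

```python
import glob
import os
import tempfile


def find_saved_models(runs_dir: str, exp_name: str = "dqn_atari_jax"):
    # Assumed layout: runs/<run_name>/<exp_name>.cleanrl_model
    return glob.glob(os.path.join(runs_dir, "*", f"{exp_name}.cleanrl_model"))


# Self-contained demo: fake a run directory and confirm the glob finds the file.
with tempfile.TemporaryDirectory() as runs:
    run_dir = os.path.join(runs, "BreakoutNoFrameskip-v4__dqn_atari_jax__1__0")
    os.makedirs(run_dir)
    open(os.path.join(run_dir, "dqn_atari_jax.cleanrl_model"), "w").close()
    assert len(find_saved_models(runs)) == 1
```

In the real test this check would run right after the `subprocess.run` call, pointing at the `runs` directory the script creates.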
The model evaluation's dependency environment matches the algorithm's dependency environment, so if we create a new test file for model evaluation, it will multiply the number of test files. I suggest just adding `--save-model True` to the existing tests, or something similarly simple.
This sounds good to me!
Thanks for your review. 👌🫡
Description
Fixes #380
Types of changes
Checklist:
- `pre-commit run --all-files` passes (required).
- Documentation updated and previewed via `mkdocs serve`.

If you need to run benchmark experiments for performance-impacting changes:

- Experiments tracked with `--capture-video`.
- RLops performed with `python -m openrlbenchmark.rlops`.
- Learning curves generated by the `python -m openrlbenchmark.rlops` utility added to the documentation.
- Links to the tracked experiments, generated by `python -m openrlbenchmark.rlops ....your_args... --report`, added to the documentation.