assume-framework / assume

ASSUME - Agent-based Simulation for Studying and Understanding Market Evolution
https://assume.readthedocs.io

Documentation Updates #273

Closed kim-mskw closed 7 months ago

kim-mskw commented 7 months ago
codecov[bot] commented 7 months ago

Codecov Report

Attention: 39 lines in your changes are missing coverage. Please review.

Comparison: base (`3328c38`) 81.90% compared to head (`cad0911`) 81.83%.

| Files | Patch % | Lines |
|---|---|---|
| assume/reinforcement_learning/algorithms/matd3.py | 67.90% | 26 Missing :warning: |
| assume/reinforcement_learning/learning_role.py | 35.29% | 11 Missing :warning: |
| ...einforcement_learning/algorithms/base_algorithm.py | 60.00% | 2 Missing :warning: |

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #273      +/-   ##
==========================================
- Coverage   81.90%   81.83%   -0.08%
==========================================
  Files          38       38
  Lines        4162     4173      +11
==========================================
+ Hits         3409     3415       +6
- Misses        753      758       +5
```

| [Flag](https://app.codecov.io/gh/assume-framework/assume/pull/273/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=assume-framework) | Coverage Δ | |
|---|---|---|
| [pytest](https://app.codecov.io/gh/assume-framework/assume/pull/273/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=assume-framework) | `81.83% <64.22%> (-0.08%)` | :arrow_down: |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=assume-framework#carryforward-flags-in-the-pull-request-comment) to find out more.


maurerle commented 7 months ago

I still get `AttributeError: 'RLStrategy' object has no attribute 'actor'` when running `assume -s example_02a -db sqlite:///./examples/local_db/test_mini_rl.db`. Though I fixed some things that did not look right, merged main, and made the tests pass :)

maurerle commented 7 months ago

Current main also fails on `load_actor_params`, so it seems this issue has been around for longer.

There is `world.learning_role.compare_and_save_policies()`, which does not do what seems to be intended (it does not save policies, lacks the retrieval of the compared policies, and has that odd `self.rl_eval_rewards`, which does not do anything). I think it would be good to go through this and fix loading and comparing scenarios.

Then `self.actor.optimizer.load_state_dict(params["actor_optimizer"])` raises `AttributeError: 'Actor' object has no attribute 'optimizer'`, which has to be checked too.
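For context, that error suggests the optimizer state is being restored onto the `Actor` module, which as a plain network module has no `optimizer` attribute. Below is a minimal sketch of how saving and restoring could be structured with the optimizer owned by the algorithm object instead; the class and attribute names are illustrative assumptions, not the actual ASSUME API.

```python
import torch as th


class Actor(th.nn.Module):
    """Illustrative stand-in for the RL actor network."""

    def __init__(self, obs_dim: int = 4, act_dim: int = 2):
        super().__init__()
        self.net = th.nn.Linear(obs_dim, act_dim)

    def forward(self, obs):
        return self.net(obs)


class AlgorithmSketch:
    """Hypothetical algorithm object that owns the actor and its optimizer,
    so optimizer state never needs to live on the Actor module itself."""

    def __init__(self):
        self.actor = Actor()
        self.actor_optimizer = th.optim.Adam(self.actor.parameters(), lr=1e-4)

    def save_params(self, path: str) -> None:
        th.save(
            {
                "actor": self.actor.state_dict(),
                "actor_optimizer": self.actor_optimizer.state_dict(),
            },
            path,
        )

    def load_params(self, path: str) -> None:
        params = th.load(path)
        self.actor.load_state_dict(params["actor"])
        # Restore the optimizer held by the algorithm, not self.actor.optimizer,
        # which would raise the AttributeError seen above.
        self.actor_optimizer.load_state_dict(params["actor_optimizer"])
```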

maurerle commented 7 months ago

It turns out that `compare_and_save_policies` is an odd method that is never used but should be fixed at some point. Instead of using that method of the `learning_role`, the correct function to use in place of `learning_role.save_params` would have been `learning_role.rl_algorithm.save_params`.
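To illustrate the distinction, here is a hedged sketch of that delegation, assuming the learning role merely holds a reference to the algorithm object; the class names and the directory argument are hypothetical.

```python
class RLAlgorithmSketch:
    """Hypothetical stand-in for the algorithm object (e.g. the MATD3 class)
    that actually knows how to persist actor/critic parameters."""

    def save_params(self, directory: str) -> None:
        print(f"saving actor/critic parameters to {directory}")


class LearningRoleSketch:
    """Hypothetical stand-in for the learning role, which only holds a
    reference to the algorithm."""

    def __init__(self):
        self.rl_algorithm = RLAlgorithmSketch()


# The point made above: call save_params on the algorithm, not on the role.
learning_role = LearningRoleSketch()
learning_role.rl_algorithm.save_params("outputs/example_02a/policies")
```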

Now, everything works again :blush:

kim-mskw commented 7 months ago

> It turns out that `compare_and_save_policies` is an odd method that is never used but should be fixed at some point. Instead of using that method of the `learning_role`, the correct function to use in place of `learning_role.save_params` would have been `learning_role.rl_algorithm.save_params`.
>
> Now, everything works again 😊

Actually, it does not. If `continue_learning=True`, we should read the actor and critic params, which is currently not done. This is what I tried to fix with the redundant config variable `load_model_path`. I am on it, though.
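A minimal sketch of the behaviour described here, assuming the config keys `continue_learning` and `load_model_path` and a `load_params` method on the algorithm; these names are taken from the discussion above or assumed purely for illustration.

```python
# Purely illustrative sketch of the intended continue_learning behaviour;
# the config keys and the load_params method are assumptions, not the
# actual ASSUME API.
def setup_learning(learning_role, config: dict) -> None:
    continue_learning = config.get("continue_learning", False)
    load_model_path = config.get("load_model_path")

    if continue_learning:
        if not load_model_path:
            raise ValueError(
                "continue_learning=True requires a path to previously saved policies"
            )
        # Read the stored actor and critic parameters (and optimizer states)
        # before training is resumed.
        learning_role.rl_algorithm.load_params(load_model_path)
```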