Closed richardhuo closed 2 years ago
@richardhuo Really appreciate your detailed feedback! It helps a lot, since we are actively synchronizing the code.
@XiaoYangLiu-FinRL When can you fix these bugs? The current experience is not elegant.
Thank you for helping us point out the problems and find the bugs. I am fixing them now. @wu-yang-work @richardhuo
A few days ago, when we merged the code from the isaac gym branch into master, there were some irregularities that led to these bugs.
Can I fix these bugs in the following way? @shixun404 @supersglzc @zhumingpassional

1. In `train/config.py`, this variable can be uniformly changed to `get_if_off_policy()`. It is an attribute that returns a bool.
2. In `train/config.py`, this variable can be uniformly changed to `agent_class`. This distinguishes the instance `agent` from the class `agent_class` in `agent = agent_class(..)`.
3. In `train/run.py`, these variables can be uniformly changed to `agent_class`.
4. In `train/run.py`, these variables can be uniformly changed to `max_capacity`, meaning the maximum capacity of the experience replay buffer.
5. In `train/run.py`, `eval_env_func` will be given a default value. In some tasks, we need a separate simulation environment to evaluate the agent's performance, just like training on the training set and evaluating on the test set. So we set this variable with a default value.

2022-07-22 10:43
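Taken together, the proposal above could be sketched roughly as follows. This is a hypothetical illustration, not the actual ElegantRL code: the names `get_if_off_policy`, `agent_class`, `max_capacity`, and `eval_env_func` come from this thread, but the constructor signature and the name-based on/off-policy check are assumptions.

```python
class Config:
    """Hypothetical sketch of the proposed config conventions (not the real code)."""

    def __init__(self, agent_class=None, max_capacity=int(1e6), eval_env_func=None):
        # `agent_class` holds the class itself; the instance is created
        # later as `agent = agent_class(..)`, keeping the two names distinct.
        self.agent_class = agent_class
        # `max_capacity`: maximum number of transitions the replay buffer can hold.
        self.max_capacity = max_capacity
        # `eval_env_func` defaults to None; callers may pass a builder for a
        # separate evaluation environment (train env vs. held-out eval env).
        self.eval_env_func = eval_env_func

    def get_if_off_policy(self) -> bool:
        # Assumed heuristic: infer on/off-policy from the agent class name,
        # e.g. "AgentPPO" is on-policy, so this returns False for it.
        name = self.agent_class.__name__ if self.agent_class else ""
        return all(tag not in name for tag in ("PPO", "A2C"))
```

With this shape, `if_off_policy` becomes a computed attribute rather than a stored flag, so it can never drift out of sync with `agent_class`.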
Point 4: Is `replay_buffer_size` better than `max_capacity`?

Point 7: Add `class AgentBaseH` (Hamilton term) in `AgentBase.py`, to keep `class AgentBase` simple.

2022-07-22 11:47
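Point 7 could be sketched as a small subclass, so the Hamilton-term logic lives outside `AgentBase`. This is a hypothetical sketch: the class names `AgentBase` and `AgentBaseH` match the thread, but the constructor signature, the `lambda_h` weight, and the placeholder losses are assumptions.

```python
class AgentBase:
    """Core agent logic shared by all agents (simplified placeholder)."""

    def __init__(self, net_dims, state_dim, action_dim):
        self.net_dims = net_dims
        self.state_dim = state_dim
        self.action_dim = action_dim

    def update_net(self, buffer):
        # Base objective update; returns the loss (placeholder value here).
        return 0.0


class AgentBaseH(AgentBase):
    """Extends AgentBase with a Hamilton-term objective, keeping the base class simple."""

    def __init__(self, net_dims, state_dim, action_dim, lambda_h=0.1):
        super().__init__(net_dims, state_dim, action_dim)
        self.lambda_h = lambda_h  # weight of the Hamilton term (assumed parameter)

    def update_net(self, buffer):
        base_loss = super().update_net(buffer)
        hamilton_loss = 0.0  # placeholder for the actual Hamilton-term computation
        return base_loss + self.lambda_h * hamilton_loss
```

Agents that need the Hamilton term inherit from `AgentBaseH`; everything else keeps inheriting from the unchanged `AgentBase`.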
@Yonv1943 @shixun404 @supersglzc Let's discuss it in the meeting.
I will write docs about how to use git in this community, and I hope every developer follows the principles.
Developers can follow the steps currently: https://github.com/AI4Finance-Foundation/FinRL/blob/master/docs/source/developer_guide/development_setup.rst
The blazing sun burns like fire. Thanks for all your hard work, everyone.
These issues were fixed with the Reformat. Thanks.