Closed · matheusportela closed this issue 7 years ago
Things a good `BaseAgent` class must have: `get_action`, `receive_state`, and `receive_reward` methods.

I just found three modules that are a first attempt at decoupling agents from the Pac-Man simulator:
- `pac_agents.py`: Contains an `AgentBase` class (which should be renamed to `BaseAgent`) with `init`, `setup`, and `cleanup` methods, and provides other interfaces, such as `BehaviorInterface`, `LearningInterface`, and `QLearningInterface`. It also attempts to implement some agents using these classes.
- `pac_experiment.py`: Implements `PacmanExperiment`, based on `Experiment` from `pac_utils.py`.
- `pac_utils.py`: Provides a `Logger` class (which should be replaced by `logging`) and `Experiment`, an abstraction of a learning experiment.

As implemented in #67, we now have three classes: `BaseAgent`, `BaseController`, and `BaseExperiment`, but they aren't really useful yet because all the code is still implemented in Pac-Man-specific concrete classes. Now we need to refactor out as much code as possible so they become reusable in other environments.
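The three methods named at the top of this issue suggest a minimal interface for such a class. A sketch of one possible shape — only `get_action`, `receive_state`, and `receive_reward` come from this thread; the `RandomAgent` example and everything else here is illustrative:

```python
import random


class BaseAgent:
    """Environment-agnostic agent interface (sketch).

    Concrete agents override the three core methods; no simulator
    details appear at this level.
    """

    def receive_state(self, state):
        """Store the latest observation from the environment."""
        self.state = state

    def receive_reward(self, reward):
        """React to the reward for the previous action."""
        raise NotImplementedError

    def get_action(self):
        """Return the next action based on the current state."""
        raise NotImplementedError


class RandomAgent(BaseAgent):
    """Trivial example subclass: ignores rewards and acts randomly."""

    def __init__(self, actions):
        self.actions = actions
        self.state = None

    def receive_reward(self, reward):
        pass  # a learning agent would update its policy here

    def get_action(self):
        return random.choice(self.actions)
```

Because the base class never imports the simulator, the same `RandomAgent` could be dropped into any environment that speaks this interface.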
Now that we have `BaseAdapterAgent`, we may move most of the logic in `BerkeleyAdapter` to `Experiment`, considering that most experiments will require a basic setup:

- `start_experiment` method
- `start_game` method
- `finish_game` method
- `finish_experiment` method

Moved most reusable logic from the experiments at acf899297a1d7ad9221d68bb0a3b199e75e6427a. I believe most of the refactoring of the adapter needed to create a base agent class has been done. I'll close this issue and open a new one specific to refactoring the controller.
Currently, `ClientAgent` is the most basic agent class, but it's simply too coupled to the Pac-Man simulator. For instance, it inherits from `BerkeleyGameAgent`, implements the `getAction` method, creates state messages, and so on. Let's make a base class that is agnostic to the Pac-Man simulator, like the `BaseLearningAlgorithm` class. We can draw ideas from PyBrain, RL-Glue, OpenAI Gym, and Maja on how to create an agent class that can be reused across different scenarios.
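One way to break that coupling, following the adapter idea already discussed here, is to keep a thin Berkeley-specific wrapper that translates simulator callbacks into the agnostic interface. A sketch under stated assumptions: only `getAction` and `BerkeleyGameAgent` are names from this thread; the wrapper class, the wrapped-agent API, and the state-message format are all hypothetical.

```python
class BerkeleyAdapterAgent:
    """Translates Berkeley simulator callbacks into an agnostic agent API.

    In the real code this would inherit from BerkeleyGameAgent; here it
    stands alone so the decoupling idea is visible.
    """

    def __init__(self, agent):
        # Any object exposing receive_state() and get_action() works.
        self.agent = agent

    def getAction(self, game_state):
        # Convert the simulator-specific state into a plain message,
        # forward it, then ask the wrapped agent for an action.
        self.agent.receive_state(self._to_state_message(game_state))
        return self.agent.get_action()

    def _to_state_message(self, game_state):
        # Hypothetical conversion; a real adapter would extract
        # positions, scores, etc. from the Berkeley GameState.
        return {'raw': game_state}
```

The payoff is that the wrapped agent never sees `BerkeleyGameAgent` or `getAction`, so the same agent can be reused with a different adapter for another environment.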