matheusportela / Multiagent-RL

Multiagent reinforcement learning simulation framework - Undergraduate thesis in Mechatronics Engineering at the University of Brasília

Provide agent base class, agnostic to Pac-Man simulator #61

Closed: matheusportela closed this issue 7 years ago

matheusportela commented 7 years ago

Currently, ClientAgent is the most basic agent class, but it is simply too coupled to the Pac-Man simulator. For instance, it inherits from BerkeleyGameAgent, implements the getAction method, and creates state messages, among other things.

Let's make a base class that is agnostic to the Pac-Man simulator, in the same spirit as the BaseLearningAlgorithm class. We can draw ideas from PyBrain, RL Glue, OpenAI Gym, and Maja on how to create an agent class that can be reused across different scenarios.

matheusportela commented 7 years ago

Things a good BaseAgent class must have:
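For illustration, here is a minimal sketch of what such a class could look like. The method names below (start_experiment, learn, act, and so on) are hypothetical, loosely modeled on the episodic interfaces of RL Glue and OpenAI Gym rather than taken from this repository:

```python
class BaseAgent(object):
    """Learning agent with no knowledge of any particular simulator.

    Concrete subclasses receive already-parsed state representations
    and return abstract actions; translating simulator messages into
    these representations is the job of an adapter class.
    """

    def start_experiment(self):
        """Hook called once, before any game starts."""
        pass

    def start_game(self):
        """Hook called at the beginning of every game/episode."""
        pass

    def learn(self, state, action, reward):
        """Update internal estimates from one experienced transition."""
        raise NotImplementedError

    def act(self, state, legal_actions):
        """Select one of the legal actions for the given state."""
        raise NotImplementedError

    def finish_game(self):
        """Hook called at the end of every game/episode."""
        pass

    def finish_experiment(self):
        """Hook called once, after the last game."""
        pass
```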

matheusportela commented 7 years ago

I just found three modules that are a first attempt at decoupling agents from the Pac-Man simulator:

matheusportela commented 7 years ago

As implemented in #67, we now have three classes: BaseAgent, BaseController, and BaseExperiment. They aren't really useful yet, though, because all the code is still implemented in Pac-Man-specific concrete classes. Now we need to refactor out as much code as possible so they become reusable in other environments.
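A hedged illustration of the refactoring direction, using BaseController as an example: generic behavior moves up into the base class, while the concrete Pac-Man class keeps only the simulator-specific parts. Everything below except the class name BaseController is invented for illustration:

```python
class BaseController(object):
    def receive_message(self, message):
        """Generic dispatch: usable with any simulator."""
        handler = getattr(self, 'handle_' + message.type, None)
        if handler is None:
            raise ValueError('Unknown message type: %s' % message.type)
        return handler(message)


class PacmanController(BaseController):
    def handle_state(self, message):
        # Only the Pac-Man-specific translation stays in the subclass,
        # e.g. converting a maze observation into a generic state object.
        pass
```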

matheusportela commented 7 years ago

Now that we have BaseAdapterAgent, we may move most of the logic in BerkeleyAdapter to Experiment, considering that most experiments will require the same basic setup:
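One plausible shape for that shared setup, sketched under the assumption that every experiment starts all agents, runs a learning phase followed by a testing phase, and then shuts everything down. All names are illustrative and reuse the hypothetical agent hooks from the earlier sketch:

```python
class Experiment(object):
    def __init__(self, agents, learn_games, test_games):
        self.agents = agents
        self.learn_games = learn_games
        self.test_games = test_games

    def run(self):
        for agent in self.agents:
            agent.start_experiment()

        # Learning phase: agents update their policies.
        for _ in range(self.learn_games):
            self.execute_game(explore=True)

        # Testing phase: agents exploit what they learned.
        for _ in range(self.test_games):
            self.execute_game(explore=False)

        for agent in self.agents:
            agent.finish_experiment()

    def execute_game(self, explore):
        # Simulator-specific: implemented by concrete subclasses.
        raise NotImplementedError
```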

matheusportela commented 7 years ago

Moved most reusable logic from experiments at acf899297a1d7ad9221d68bb0a3b199e75e6427a. I believe most of the refactoring of the adapter needed to create a base agent class has been done. I'll close this issue and open a new one specific to refactoring the controller.