DeNA / HandyRL

HandyRL is a handy and simple framework based on Python and PyTorch for distributed reinforcement learning that is applicable to your own environments.
MIT License

feature: Specify opponent by path #337

Open sakami0000 opened 1 year ago

sakami0000 commented 1 year ago

src/agent.py

from handyrl.agent import RandomAgent
from handyrl.evaluation import register_agent

@register_agent(alias="transformer")
class TransformerAgent(RandomAgent):
    def __init__(self, temperature=0.0):
        super().__init__()
        self.temperature = temperature

    def action(self, env, player, show=False):
        # model_predict is a user-defined inference function.
        action = model_predict(env, player, temperature=self.temperature)
        return action

python main.py -e random:src.agent.TransformerAgent 100 8
python main.py -e random:src.agent.TransformerAgent,temperature=1.0 100 8
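Under the hood, resolving a dotted path like src.agent.TransformerAgent can be sketched roughly as follows (a minimal sketch with a hypothetical helper name, not HandyRL's actual implementation):

```python
import importlib

def load_agent_class(path):
    # Split "src.agent.TransformerAgent" into module path and class name,
    # import the module, and return the class object.
    module_name, class_name = path.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

The returned class can then be instantiated with the keyword arguments parsed from the rest of the spec string.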

NOTE: The entire target value must be wrapped in an extra pair of quotation marks, because the shell strips the inner quotation marks before they reach Python's arguments.

python main.py -e random:'src.agent.TransformerAgent,hoge="hoge, fuga"' 100 8

NOTE: When an agent is referred to by its alias, the module that defines and registers the agent class must be imported in the current context (e.g. in main.py):

main.py

import src.agent

python main.py -e random:transformer,temperature=0.5 100 8
The opponent can also be specified in config.yaml, with or without spaces after the commas:

opponent: [
    'src.agent.TransformerAgent,temperature=0.5',
    'src.agent.TransformerAgent,temperature=0.5,hoge="hoge, fuga"'
]

opponent: [
    'src.agent.TransformerAgent, temperature=0.5',
    'src.agent.TransformerAgent, temperature=0.5, hoge="hoge, fuga"'
]
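Parsing such a spec string, including commas inside quoted values like hoge="hoge, fuga", could be sketched like this (parse_agent_spec is a hypothetical helper; keyword values are left as strings):

```python
import re

def parse_agent_spec(spec):
    # Split "module.Class,key=value,..." on commas, but only on commas that
    # are followed by an even number of double quotes (i.e. commas outside
    # quoted values), then collect the keyword arguments.
    fields = re.split(r',(?=(?:[^"]*"[^"]*")*[^"]*$)', spec)
    path, kwargs = fields[0].strip(), {}
    for field in fields[1:]:
        key, _, value = field.partition("=")
        kwargs[key.strip()] = value.strip().strip('"')
    return path, kwargs
```

For example, parse_agent_spec('src.agent.TransformerAgent, hoge="hoge, fuga"') keeps the quoted value intact instead of splitting it at the inner comma.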