Uses screen captures, OCR, and reinforcement learning to optimize training on a specific map in MapleStory.
The following demo is run entirely by the AI.
The project works by first taking a screenshot of the MapleStory window, then cropping out the EXP and health regions of the screen. The reward is the amount of EXP gained minus the amount of health lost.
There are 3 multi-discrete action groups the AI can choose from at any one time:
- left, right, up, down
- basic attack, power attack (Lucky Seven)
- pick up item, jump
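The action layout and reward described above can be sketched as follows. This is a minimal, illustrative sketch: the names (`ACTION_GROUPS`, `compute_reward`) are not from the repo, and it assumes the EXP/HP values have already been parsed from the OCR'd screenshot crops into integers.

```python
# Three multi-discrete action groups; each step, the agent picks one option
# from each group (group names and ordering are assumptions, not the repo's).
ACTION_GROUPS = [
    ["left", "right", "up", "down"],      # movement
    ["basic_attack", "lucky_seven"],      # attacks
    ["pick_up", "jump"],                  # utility
]

def compute_reward(prev_exp, curr_exp, prev_hp, curr_hp):
    """Reward = EXP gained minus health lost, per the description above."""
    exp_gained = curr_exp - prev_exp
    hp_lost = max(0, prev_hp - curr_hp)
    return exp_gained - hp_lost
```

With stable-baselines3 this layout would typically map to a `gym.spaces.MultiDiscrete([4, 2, 2])` action space.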
At first, the model was done training after about 300 steps: it had learned only to walk into a wall and attack constantly. After some research, I lowered the learning rate, adjusted the gamma, and raised the batch size. This produced decent results.
Creates a custom environment using Stable Baselines3. Utilizes the PPO algorithm with a CnnPolicy, as it's well suited for pixel-based input.
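The hyperparameter adjustments mentioned above might look like this when passed to `PPO("CnnPolicy", env, **ppo_kwargs)`. The specific values here are assumptions for illustration, not the repo's tuned settings; check `train.py` for the actual numbers.

```python
# Illustrative PPO kwargs (keys match stable_baselines3.PPO's constructor;
# the values are hypothetical, chosen to show the direction of each tweak).
ppo_kwargs = {
    "learning_rate": 1e-4,       # lowered from SB3's 3e-4 default
    "gamma": 0.95,               # discount factor, adjusted from 0.99
    "batch_size": 256,           # raised from the 64 default
    "tensorboard_log": "./logs/",
}

# Usage sketch:
#   model = PPO("CnnPolicy", env, **ppo_kwargs)
#   model.learn(total_timesteps=100_000)
```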
Clone the repo
git clone https://github.com/GrahamMThomas/MapleAITrainer.git
Create a virtual environment and install requirements
python -m venv venv
./venv/scripts/Activate
pip install -r requirements.txt
Run Training
python train.py
check_env.py - Checks that the env's method signatures and inputs/outputs are valid
test_env.py - Runs random commands on your environment to ensure it works
run_latest.py - Loads the latest_model and runs the environment, using it to drive commands without training
Launch Tensorboard
tensorboard --logdir .\logs\
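The inference loop that `run_latest.py` presumably implements can be sketched like this. `DummyModel` and `DummyEnv` are stand-ins so the loop is runnable here without a live game; in the repo, these would be `PPO.load("latest_model")` and the custom MapleStory environment.

```python
import random

class DummyModel:
    """Stand-in for a loaded SB3 policy; predict returns (action, state)."""
    def predict(self, obs):
        # one choice from each of the three action groups
        return [random.randrange(4), random.randrange(2), random.randrange(2)], None

class DummyEnv:
    """Stand-in env with gym-style reset/step."""
    def reset(self):
        return "obs0"
    def step(self, action):
        # (obs, reward, done, info); ends immediately so the sketch terminates
        return "obs", 0.0, True, {}

model, env = DummyModel(), DummyEnv()
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs)       # policy drives commands...
    obs, reward, done, info = env.step(action)  # ...with no learning updates
```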
Had to backtrack around 15k steps, as the training run was invalid.
See the open issues for a full list of proposed features (and known issues).