pauloalves86 / go_dino

AI written in Python for Google Chrome's Dinosaur game
GNU General Public License v3.0

Game doesn't continue playing after first generation #3

Open · marianpavel opened 6 years ago

marianpavel commented 6 years ago

Hi, I've found a few more problems.

  1. When you run the program, you need to jump 1-2 times at the start for it to recognize the game and take control.
  2. After a generation is saved, the game doesn't reset; you have to handle the reset manually and repeat step 1.
  3. I am a little confused about this:

    with open('winner.pkl', 'wb') as f:
        pickle.dump(winner, f)

There is no winner.pkl file; the files saved by NEAT are named as in the screenshot below.

P.S. There was a problem with the reset_game method: Ctrl-R doesn't work on Mac (it doesn't do anything). However, I was able to solve this by adding a space press:

    @staticmethod
    def reset_game():
        pyautogui.hotkey('ctrl', 'r')
        pyautogui.press('space')
        time.sleep(4.)
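
A platform-aware variant would also work; this is just a sketch (the 'command' branch is my assumption for macOS):

    import platform
    import time

    import pyautogui

    def reset_game():
        # Chrome reloads with Cmd-R on macOS and Ctrl-R elsewhere
        modifier = 'command' if platform.system() == 'Darwin' else 'ctrl'
        pyautogui.hotkey(modifier, 'r')
        pyautogui.press('space')  # the dino game starts on space
        time.sleep(4.)            # give the page time to reload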
[screenshot: 2018-06-16 at 07:37:43]
marianpavel commented 6 years ago

My bad, winner.pkl is saved after the N generations in winner = pop.run(eval_fitness, 100) complete, but I don't see any learning progress. I will keep looking into the current problems.
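
For context, the relevant neat-python pattern is roughly this (a sketch assuming config and eval_fitness are defined as in trainer.py):

    import pickle

    import neat

    # winner.pkl only appears after pop.run() finishes all N
    # generations (here N = 100) and returns the best genome
    pop = neat.Population(config)
    winner = pop.run(eval_fitness, 100)
    with open('winner.pkl', 'wb') as f:
        pickle.dump(winner, f)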

marianpavel commented 6 years ago

OK, I have looked into it and found a few things; I don't know if they are real problems.

On the line:

    return dict(monitor, height=lh, left=pt[0], top=pt[1] - lh + h, width=lw)

I think pt[1] - lh + h is incorrect: with the IDE on the right half of the screen and Chrome on the left half, the image shown by Image.fromarray(image).show() in the while loop is all white. I replaced the expression with constants and found that 170 works great in my case.
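
To check the capture region, I used a sketch like this (assuming the project grabs the screen with mss; the constants are placeholders):

    import mss
    import numpy as np
    from PIL import Image

    # grab the computed region and display it, to verify the capture
    # area visually instead of guessing offsets
    with mss.mss() as sct:
        monitor = sct.monitors[1]
        region = dict(monitor, left=0, top=170, width=600, height=150)  # placeholder values
        image = np.array(sct.grab(region))  # BGRA pixel data
        Image.fromarray(image).show()       # channels look swapped, fine for a bounds check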

[screenshot: 2018-06-17 at 11:26:43]
pauloalves86 commented 6 years ago

This is a new problem, but the root cause is the same: high DPI (retina display). As a temporary workaround, replace templates/dino_landscape.png with the following image; it may solve your problem.

[image: dino_landscape2]
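
Alternatively, you could try scaling the existing 1x template up to match a retina capture; this is only a sketch, and the paths and the factor 2 are assumptions:

    import cv2

    # retina screens are captured at physical resolution, so a 1x
    # template may need to be scaled up to match
    template = cv2.imread('templates/dino_landscape.png', cv2.IMREAD_GRAYSCALE)
    template_2x = cv2.resize(template, None, fx=2, fy=2,
                             interpolation=cv2.INTER_NEAREST)
    cv2.imwrite('templates/dino_landscape_2x.png', template_2x)  # hypothetical output path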
marianpavel commented 6 years ago

What is the maximum score you could achieve? I got like 2000-2200 points. Also, the play_winner script needs to be adjusted to the new Board class; I already did that in my local code.

I will let the program run with N = 100; after that, I am just curious whether the winner file already contains the trained neural network.

marianpavel commented 6 years ago

@pauloalves86 I'm curious about this sequence of code:

            if distance == last_distance or distance == 0:
                res = cv2.matchTemplate(image, self.gameover_template, cv2.TM_CCOEFF_NORMED)
                print(np.max(res))
                if np.max(res) > 0.5:
                    return score

Can you post an image of the result when the game is over? My np.max(res) doesn't get above 0.5, and I think it's because of my hard-coded image.
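
For reference, here is how I'm checking the score in isolation (a sketch; the file names are placeholders):

    import cv2
    import numpy as np

    # the template must match the capture's scale and color mode,
    # otherwise the peak score stays low
    image = cv2.imread('capture.png', cv2.IMREAD_GRAYSCALE)           # placeholder path
    template = cv2.imread('dino_gameover.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
    res = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    print(np.max(res))  # should approach 1.0 on the game-over screen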

pauloalves86 commented 6 years ago

What is the maximum score you could achieve?

There is no maximum, because the game is endless.

I got like 2000-2200 points

That is great! Could you share it? That is a well-trained neural network; because of latency, it's hard to go much further. One of my best NNs achieved 5000-8000. I'm studying PyQt to embed the game and improve performance, training faster and probably allowing the NN to break records.

I'm curious about this sequence of code

if distance == last_distance or distance == 0 means the game seems stopped, so you might be dead:

- distance == last_distance means you hit the front of the obstacle
- distance == 0 means you hit the back of the obstacle (e.g. you jumped too early and fell on it)

Inside the if condition I check whether the gameover_template matches; 0.5 is the threshold.

Can you post an image of the result when the game is over? My np.max(res) doesn't get above 0.5, and I think it's because of my hard-coded image

This might be the case. Replace dino_gameover.png with the following image and tell me if it works.

[image: dino_gameover]
pauloalves86 commented 6 years ago

I am just curious whether the winner file already contains the trained neural network

Yes, but that does not mean it's good. It means it's the best NN that was found.
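
Loading it back follows the usual neat-python pattern (a sketch, assuming the same config object used for training):

    import pickle

    import neat

    # rebuild the phenotype network from the saved best genome
    with open('winner.pkl', 'rb') as f:
        winner = pickle.load(f)
    net = neat.nn.FeedForwardNetwork.create(winner, config)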

marianpavel commented 6 years ago

Why do you use black & white on the button? Does the OpenCV library work with different colors? Yesterday I tried with the following image and it worked perfectly; now I don't know why it's not working anymore. Can you please share some images of the game? I want to see some boundaries: the high-DPI image above doesn't work with the game, but if I see the real boundaries you use, I can try different resolutions/combinations.

[image: dino_gameover_good]

marianpavel commented 6 years ago

Also: https://docs.opencv.org/2.4/modules/imgproc/doc/object_detection.html

templ – Searched template. It must be not greater than the source image and have the same data type.
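
So a quick sanity check before matching would be something like this (a sketch with placeholder paths):

    import cv2

    image = cv2.imread('capture.png', cv2.IMREAD_GRAYSCALE)           # placeholder path
    template = cv2.imread('dino_gameover.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
    # matchTemplate needs the template to be no larger than the image
    # in both dimensions, and to share its data type
    assert template.shape[0] <= image.shape[0]
    assert template.shape[1] <= image.shape[1]
    assert template.dtype == image.dtype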

marianpavel commented 6 years ago

I have successfully tested the repository on another computer, and OpenCV doesn't have any problems there. Now I have a new question: is there a way to make the network learn to duck? The program can't get past the official Google T-Rex game because of the mid-height birds. In other copies of the game, the cacti are too close together, and the dino doesn't know it can use the down key to get back to the ground faster. Best regards.

pauloalves86 commented 6 years ago

is there a way to make the network learn to duck?

Yes, but it is more complex and will take more training time. I have not added this because the mid-height birds can be jumped over, and I had trained a NN that was able to jump them. I'll think of a way to add it; you can create an issue for that if you want. You'll have to change this code (trainer.py):

class GetCommand(object):
    def __init__(self, net: nn.FeedForwardNetwork):
        self.net = net

    def __call__(self, distance: int, size: int, speed: int) -> str:
        value = self.net.activate([distance, size, speed])[0]
        if value >= 0.5:
            return 'up'
        return ''

and increase num_outputs (currently 1) in train_config.txt.
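
A minimal sketch of what a two-output version could look like (the output ordering and the 0.5 thresholds are assumptions, not the author's implementation):

    from neat import nn

    class GetCommand(object):
        def __init__(self, net: nn.FeedForwardNetwork):
            self.net = net

        def __call__(self, distance: int, size: int, speed: int) -> str:
            # with num_outputs = 2, treat one output as "jump" and
            # the other as "duck"
            up_value, down_value = self.net.activate([distance, size, speed])
            if up_value >= 0.5:
                return 'up'
            if down_value >= 0.5:
                return 'down'
            return ''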