Closed · Martyn0324 closed this issue 2 years ago
In the end, the importation problem indeed seems to be solved by the complex task of... adding the serpent directory to the PATH.
@Martyn0324 You are an absolute god for finding a fix to this window size detection issue. Thank you so much! Hope to see some of your work merged into the main branch repo.
Expected result

Run the command `serpent play [game_plugin] [game_agent]` and see the AI playing.

Steps to reproduce

Run the `serpent play` command.

Encountered result
1. Importation problems:
Serpent seems to fail at importing modules. I don't know why this happens, but for now I'm just desperately copy+pasting the code of the necessary modules into my game agent plugin. I'll see if I can solve this later. This seems to be unrelated to PATH, since the directory has already been added to the PATH variable.
2. Offshoot code problems:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2907500: character maps to &lt;undefined&gt;
The source of this problem seems to be in `Anaconda\envs\[Env-Name]\Lib\site-packages\offshoot\base.py`, in the function `file_contains_pluggable(file_path, pluggable)`.
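The failure and the fix can be reproduced in isolation; the file written below is a hypothetical stand-in for a plugin file:

```python
import os
import tempfile

# Hypothetical reproduction: a plugin file containing a character whose
# UTF-8 encoding includes byte 0x90 (U+10000 encodes as f0 90 80 80).
sample = "plugin = '\U00010000'"

path = os.path.join(tempfile.mkdtemp(), "plugin.py")
with open(path, "w", encoding="utf8") as f:
    f.write(sample)

# Windows' default locale codec (cp1252) has no mapping for byte 0x90:
try:
    with open(path, "r", encoding="cp1252") as f:
        f.read()
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False
print(decoded_ok)  # False

# The fix described in this issue: pass encoding='utf8' explicitly.
with open(path, "r", encoding="utf8") as f:
    content = f.read()
print(content == sample)  # True
```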
The call `open(file_path, "r")` there has to receive the argument `encoding='utf8'` so the problem can be solved.

3. Problems with window size detection
I was having problems with the error
LinAlgError('SVD did not converge')
when I tried to use serpent.ocr.perform_ocr. I decided to check the game frames being captured by plotting them when handle_play was called. This was the result:
As you can see in comparison to the first image, a lot of information has been lost. This happens due to a malfunction in the win32gui module, which detects the window of the application being used and its size. However, for some reason, instead of detecting the correct window size (1020, 680), it was detecting (816, 544). That way, I can't extract lives, power, aura, or even the score, so the reward system is impossible.
This problem was resolved by accessing Anaconda/envs/Serpent/Lib/site-packages/serpent/window_controllers and then, in my case, opening win32_window_controller.py and adding a single line:

import pyautogui

It looks like win32gui and PyGetWindow have the same problem in this situation: they can't get the window size correctly. Fixed image:
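One possible explanation (my speculation, not confirmed in this issue): the wrong size is exactly 80% of the correct one, which is what a non-DPI-aware process sees on a Windows display set to 125% scaling:

```python
# Speculative arithmetic check: (816, 544) is (1020, 680) divided by 1.25,
# the scale factor of a hypothetical Windows display set to 125% scaling.
real_size = (1020, 680)
reported_size = (816, 544)
scale = 1.25  # hypothetical display scaling factor

shrunk = tuple(round(v / scale) for v in real_size)
print(shrunk == reported_size)  # True
```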
4. Torch tensor incompatibility when loading model weights

When trying to load model weights with RainbowDQN, one would receive the following error:
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
This error is caused by the inputs being stored on the CPU while the weights are stored on the GPU. I don't know why this happens; maybe the model is able to work with CPU inputs while it is on the GPU, and the error only appears when loading the weights. Or maybe the model automatically passes the CPU inputs to the GPU when training for the first time, something that wouldn't happen when loading the weights.
Whatever may be causing this problem, it seems to be solved by checking line 170 in
/serpent/machine_learning/reinforcement_learning/agents/rainbow_dqn_agent
This call, contrary to what would be expected from the docs, seems to return a torch.FloatTensor instead of a torch.cuda.FloatTensor. The issue has been solved by adding the argument
device=self.device
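A minimal sketch of the mismatch and the fix, with hypothetical names standing in for the agent's code (not the actual rainbow_dqn_agent source):

```python
import torch

# The model lives on self.device (the GPU when available), but a tensor
# created without a device argument defaults to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(4, 2).to(device)

state = torch.zeros(1, 4)                       # no device= -> CPU tensor
fixed_state = torch.zeros(1, 4, device=device)  # the fix: device=self.device

# With the fix, input and weights share a device and the forward pass works.
out = model(fixed_state)
print(fixed_state.device == model.weight.device)  # True
```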