Farama-Foundation / ViZDoom

Reinforcement Learning environments based on the 1993 game Doom :godmode:
https://vizdoom.farama.org/

Issues running Doom program from Deep learning course #414

Closed · GhettoBurger996 closed this issue 4 years ago

GhettoBurger996 commented 4 years ago

I am having issues running the code below.

https://github.com/simoninithomas/Deep_reinforcement_learning_Course/tree/master/Deep%20Q%20Learning/Doom

This may not be the correct place to ask this question, but I've tried asking it on the deep learning course itself, Reddit, and various other forums. Unfortunately, there is no active forum where people can discuss ViZDoom-related issues other than here.

I get an error on the 10th cell of the tutorial. The error states

ValueError: ('Cannot warp empty image with dimensions', (0, 180, 320))

(screenshot: doom1)

The ValueError comes from scikit-image. I've tried installing several different versions but keep getting the same error. When I go back to version 1.14.5 I get other errors and the program still does not run. Other attempted fixes include installing several different versions of TensorFlow and reinstalling everything (via virtualenv), but still nothing.

I've been meaning to learn deep learning and really would like to do it with the help of Doom; I've been struggling to solve this issue for the past 3 weeks.

I'd appreciate any help, thanks!

Miffyli commented 4 years ago

Hard to say from this info alone what could be wrong. Try checking what the variables used here contain (i.e. print normalized_frame inside the preprocess_frame function). Debugging issues like this and getting around them is quite a big part of "practical deep learning" when you have all kinds of transformations on matrices :)
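One quick thing to check before digging into preprocess_frame is what the raw buffer looks like when it comes out of the game. Something along these lines (only a sketch, assuming the usual course setup and whatever config file the notebook loads):

from vizdoom import DoomGame

# Sketch: inspect the raw frame before any preprocessing touches it.
game = DoomGame()
game.load_config("basic.cfg")   # whichever config the notebook uses
game.init()
game.new_episode()

frame = game.get_state().screen_buffer
print(type(frame), frame.shape, frame.dtype)   # is it (H, W) greyscale or (C, H, W) colour?
print(frame.min(), frame.max())                # sanity-check the pixel value range

game.close()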

GhettoBurger996 commented 4 years ago

When I print out both normalized_frame and preprocessed_frame, they both come back as an empty array []. I'm no stranger to debugging, given the several hundred times my programs haven't worked on the first try, but this doesn't make any sense...

Something I didn't mention: sometime last week I magically got it working, only for this to happen again, and for the life of me I can't remember how I fixed it.

Miffyli commented 4 years ago

For some reason the array you feed to transform.resize has an axis of size zero, so essentially the whole matrix is empty (how numpy even allows that, I am not sure).

Check what state looks like when it is fed to stack_frames (at the beginning of the function). There could be random Nones or odd arrays in there. It looks like that function is called in many places.
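As a side note, numpy is perfectly happy to produce zero-size arrays from out-of-range slices, which is why nothing complains until transform.resize. A small sketch of one way your exact error shape could appear (just a guess, assuming the buffer were a channels-first colour image instead of a single greyscale plane):

import numpy as np

# Hypothetical channels-first buffer: 3 colour channels, 240x320 pixels.
frame = np.zeros((3, 240, 320))

# The course's crop assumes the first axis holds image rows...
cropped = frame[30:-10, 30:-30]

# ...but slicing 30:-10 on an axis of length 3 selects nothing,
# which numpy allows: you just get a zero-size axis.
print(cropped.shape)   # (0, 180, 320) -- the shape in the ValueError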

GhettoBurger996 commented 4 years ago

state returns the following values when printed.

[[[35 39 39 ... 39 39 39]
  [59 67 59 ... 67 67 91]
  [79 79 79 ... 79 91 91]
  ...
  [19 19 11 ... 47 47 55]
  [19 27 19 ... 47 47 47]
  [11 19 19 ... 27 19 19]] 

 [[35 39 39 ... 39 39 39]
  [59 67 59 ... 67 67 91]
  [79 79 79 ... 79 91 91]
  ...
  [19 19 11 ... 47 47 55]
  [19 27 19 ... 47 47 47]
  [11 19 19 ... 27 19 19]]

 [[35 39 39 ... 39 39 39]
  [59 67 59 ... 67 67 91]
  [79 79 79 ... 79 91 91]
  ...
  [19 19 11 ... 47 47 55]
  [19 27 19 ... 47 47 47]
  [11 19 19 ... 27 19 19]]]

It is called in many places; removing or adjusting it causes several other issues, which then force changes in the deep learning section (which is exactly the part I want to keep unchanged so I can learn from it, while testing and fiddling around with a working version, of course).

The last time I got it working it didn't involve changing any of the code, it just worked one day.

GhettoBurger996 commented 4 years ago

I'm not clear on what is happening in the following code, but thought I should mention it:

def stack_frames(stacked_frames, state, is_new_episode):
    # Preprocess frame
    print(1, state)
    frame = preprocess_frame(state)
    print(2, state)

The first print statement above executes, but the second does not print at all; no print statement placed after the line frame = preprocess_frame(state) produces any output in this function.

Not sure if it is worth mentioning or not.

Miffyli commented 4 years ago

I would continue printing up until the error occurs, and instead of printing the whole array, print its shape. Note that nothing after the line frame = preprocess_frame(state) will run if preprocess_frame raises the ValueError, which would explain the missing second print. Replace preprocess_frame with this and see what happens:

from skimage import transform   # already imported near the top of the notebook

def preprocess_frame(frame):
    # Greyscale frame already done in our vizdoom config
    # x = np.mean(frame,-1)
    print(frame.shape)
    # Crop the screen (remove the roof because it contains no information)
    cropped_frame = frame[30:-10,30:-30]
    print(cropped_frame.shape)
    # Normalize Pixel Values
    normalized_frame = cropped_frame/255.0
    print(normalized_frame.shape)
    # Resize
    preprocessed_frame = transform.resize(normalized_frame, [84,84])
    print(preprocessed_frame.shape)
    return preprocessed_frame

If this hangs (like you just described) or throws some odd errors while all the values seem right, then I do not know what to say. Perhaps the notebooks are interfering with things somehow (hidden state, or some incompatibility?) or some of the libraries are acting up.

GhettoBurger996 commented 4 years ago

Perhaps. I'll keep testing everything and hopefully find out what's wrong. I won't be online for a few hours; I'll update if I find out anything new. Thanks for the help, appreciate it!

Kenny-Snub-Nose-Monk commented 4 years ago

Any Solutions??

GhettoBurger996 commented 4 years ago

Sorry for the late reply, I only just noticed it.

There's nothing particularly wrong with the code itself, but the tech stack and the version of numpy I was using were off, plus I had CUDA installed incorrectly. There were quite a few issues, too many to list here. My recommendation would be to keep testing different versions of numpy until you get it right.
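If it helps, here is a quick way to see which versions you are actually running before and after changing anything (just a sketch; adjust the package list to your own setup):

import importlib

# Print the installed version of each package involved, skipping missing ones.
for name in ["numpy", "skimage", "tensorflow", "vizdoom"]:
    try:
        module = importlib.import_module(name)
        print(name, getattr(module, "__version__", "unknown version"))
    except ImportError:
        print(name, "not installed")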

If you'd like I can help, just drop me your discord ID!

romulofff commented 4 years ago

Hello @GhettoBurger996, I had the same problem as you while doing this course. I managed to solve it by changing the preprocess_frame function to the following:

import numpy as np              # already imported near the top of the notebook
from skimage import transform   # already imported near the top of the notebook

def preprocess_frame(frame):
    # Collapse the channels-first buffer to a single greyscale plane
    x = np.mean(frame, 0)

    # Crop the screen (remove the roof because it contains no information)
    cropped_frame = x[30:-10,30:-30]

    # Normalize Pixel Values
    normalized_frame = cropped_frame/255.0

    # Resize
    preprocessed_frame = transform.resize(normalized_frame, (84,84))

    return preprocessed_frame

Notice the change from np.mean(frame, -1) to np.mean(frame, 0). Also, in the original from Thomas Simonini this line is commented out; you should remove the comment. The reason it matters is that the screen buffer comes back channels-first, something like (3, 240, 320), so averaging over axis 0 collapses the colour channels into a single (240, 320) image; the row crop then operates on the 240-row axis instead of the 3-element channel axis, which is what was producing the empty (0, 180, 320) array.
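To see why this works, you can trace the shapes through the fixed function on a dummy frame (a quick sketch, assuming the buffer really is (3, 240, 320) as the error message suggests):

import numpy as np
from skimage import transform

# Dummy channels-first frame with the resolution implied by the error message.
frame = np.zeros((3, 240, 320))

grey = np.mean(frame, 0)                       # (240, 320): channel axis collapsed
cropped = grey[30:-10, 30:-30]                 # (200, 260): roof and borders removed
normalized = cropped / 255.0                   # same shape, values scaled to [0, 1]
resized = transform.resize(normalized, (84, 84))

print(grey.shape, cropped.shape, resized.shape)   # (240, 320) (200, 260) (84, 84)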

After that, the stack_frames function shouldn't be preprocessing every state it receives, so remove the line frame = preprocess_frame(state) and use the state directly as the frame, i.e. frame = state.

def stack_frames(stacked_frames, state, is_new_episode):
    # Preprocess frame
    frame = state
    if is_new_episode:
        # Clear our stacked_frames
        stacked_frames = deque([np.zeros((84,84), dtype=int) for i in range(stack_size)], maxlen=4)

        # Because we're in a new episode, copy the same frame 4x
        stacked_frames.append(frame)
        stacked_frames.append(frame)
        stacked_frames.append(frame)
        stacked_frames.append(frame)

        # Stack the frames
        stacked_state = np.stack(stacked_frames, axis=2)

    else:
        # Append frame to deque, automatically removes the oldest frame
        stacked_frames.append(frame)

        # Build the stacked state (the last axis indexes the different frames)
        stacked_state = np.stack(stacked_frames, axis=2) 

    return stacked_state, stacked_frames

Then, after every call to game.get_state().screen_buffer, you should preprocess the returned frame, as follows:

state = game.get_state().screen_buffer
state = preprocess_frame(state)
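Put together, the plumbing in the episode loop ends up looking roughly like this (only a sketch of how I wired it, assuming game, possible_actions, stack_size and stacked_frames are already set up as in the notebook):

import random

game.new_episode()

# First frame of the episode: preprocess it and fill the stack with copies of it.
state = preprocess_frame(game.get_state().screen_buffer)
state, stacked_frames = stack_frames(stacked_frames, state, True)

while not game.is_episode_finished():
    action = random.choice(possible_actions)   # placeholder policy for the sketch
    reward = game.make_action(action)

    if game.is_episode_finished():
        break

    # Subsequent frames: preprocess, then push onto the existing stack.
    next_state = preprocess_frame(game.get_state().screen_buffer)
    next_state, stacked_frames = stack_frames(stacked_frames, next_state, False)
    state = next_state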

I'm uploading my code to this repository as I improve it.

GhettoBurger996 commented 4 years ago

Thanks a bunch for the reply @romulofff !

It's been so long since I actually worked on the project that this seems like a good excuse to go at it again. I'll consider the subject closed; hopefully this will help others who face similar issues.

Appreciate the link to the repo as well btw!