kristjankorjus / Replicating-DeepMind

Reproducing the results of "Playing Atari with Deep Reinforcement Learning" by DeepMind
GNU General Public License v3.0

Pre-processing too slow #4

Open kristjankorjus opened 10 years ago

kristjankorjus commented 10 years ago

For each frame, the loop in preprocessor.py must run 210*160 times, which is a bit inefficient:

Fill the PIL image object with the correct pixel values

    for i in range(len(image_string)/2):
        num_rows = i % width
        num_cols = i / width
        hex1 = int(image_string[i*2], 16)

        # Division by 2 because: http://en.wikipedia.org/wiki/List_of_video_game_console_palettes
        hex2 = int(image_string[i*2+1], 16)/2
        gray_val = int(arr[hex2, hex1])
        pixels[num_rows, num_cols] = (gray_val, gray_val, gray_val)

    # Crop and downscale image
    roi = (0, 33, 160, 193)  # region of interest is lines 33 to 193
    img = img.crop(roi)
    new_size = 84, 84
    img.thumbnail(new_size)
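
If profiling confirms the per-pixel loop is the bottleneck, the whole lookup could probably be vectorised with NumPy instead. A rough, untested sketch, assuming image_string is the full hex string (two characters per pixel) and arr is the same palette lookup table used above; it builds a single-channel image instead of repeating the grey value into RGB:

    import numpy as np
    from PIL import Image

    def preprocess_fast(image_string, arr, width=160, height=210):
        # Decode the hex string: each resulting byte holds one pixel's two hex digits
        codes = np.frombuffer(bytearray.fromhex(image_string), dtype=np.uint8)
        hex1 = codes >> 4              # first hex digit of each pixel
        hex2 = (codes & 0x0F) // 2     # second hex digit, halved as in the loop above
        gray = arr[hex2, hex1].astype(np.uint8).reshape(height, width)
        # Crop rows 33..193 and downscale to 84x84, mirroring the original code
        img = Image.fromarray(gray[33:193, :])
        img.thumbnail((84, 84))
        return np.asarray(img)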
RDTm commented 10 years ago

1) First we should crop the string to remove the first 33 lines and the last (210-193) lines. If the string is filled with pixels line by line, this should be something like:

    cropped_pixs = pixs[160*33*2:160*193*2]

2) Dividing the string into substrings of length 2:

    hexs = [cropped_pixs[i*2:i*2+2] for i in range(len(cropped_pixs)/2)]

3) Getting the grayscale values:

    map(lambda x: my_array[int(x[0], 16), int(x[1], 16)], hexs)

4) np.reshape to 160x160

5) thumbnail
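
Putting those steps together, a rough and untested sketch could look like this (pixs is assumed to be the raw hex string of a full 210x160 frame and my_array a grayscale lookup table indexed by the two hex digits of a pixel; neither name comes from the repo):

    import numpy as np
    from PIL import Image

    cropped_pixs = pixs[160*33*2:160*193*2]                                 # 1) crop lines 33 to 193
    hexs = [cropped_pixs[i*2:i*2+2] for i in range(len(cropped_pixs)//2)]   # 2) two hex chars per pixel
    grays = [my_array[int(x[0], 16), int(x[1], 16)] for x in hexs]          # 3) grayscale lookup
    frame = np.asarray(grays, dtype=np.uint8).reshape(160, 160)             # 4) reshape to 160x160
    img = Image.fromarray(frame)
    img.thumbnail((84, 84))                                                 # 5) downscale to 84x84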

taivop commented 10 years ago

(Probably) parallelised with commit bd6fc46b486a4e97c967c3d56ff811b308925314.

Needs profiling.
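
For the profiling step, Python's built-in cProfile would probably be enough to see where the time goes. A generic sketch (preprocess and frame_string are placeholders, not names from the repo):

    import cProfile
    import pstats

    # Profile one call to the preprocessing routine and print the ten
    # functions with the largest cumulative time.
    cProfile.run('preprocess(frame_string)', 'preprocess.prof')
    pstats.Stats('preprocess.prof').sort_stats('cumulative').print_stats(10)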