germain-hug / Deep-RL-Keras

Keras Implementation of popular Deep RL Algorithms (A3C, DDQN, DDPG, Dueling DDQN)

A3C issue when using environments with <= 2 dims. #8

Closed rodrigomfw closed 5 years ago

rodrigomfw commented 5 years ago
```python
    def buildNetwork(self):
        """ Assemble shared layers """
        inp = Input((self.env_dim))
        # If we have an image, apply convolutional layers
        if(len(self.env_dim) > 2):
            x = Reshape((self.env_dim[1], self.env_dim[2], -1))(inp)
            x = conv_block(x, 32, (2, 2))
            x = conv_block(x, 32, (2, 2))
            x = Flatten()(x)
        else:
            x = Flatten()(inp)
            x = Dense(64, activation='relu')(x)
            x = Dense(128, activation='relu')(x)
        return Model(inp, x)
```

This won't work if `len(self.env_dim) <= 2`. The following error is raised: `ValueError: Input 0 is incompatible with layer flatten_5: expected min_ndim=3, found ndim=2.`

I am using an environment where each state is a 1D array with 28 elements. The network's `Input` shape does not include the batch axis, so I have to define `self.env_dim = (28,)`, which fails when it is followed by a `Flatten` layer. My workaround was to remove the `Flatten` layer.
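The workaround described above might look like the sketch below. This is an illustration, not the repo's code: `tensorflow.keras` is assumed (the repo itself uses standalone Keras), and the 1D branch simply feeds the input straight into the `Dense` layers instead of flattening it first.

```python
# Sketch of the reporter's workaround: for a 1D state, drop Flatten
# and let the Dense layers consume the (batch, features) input directly.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

env_dim = (28,)       # 1D state: 28 features, batch axis excluded
inp = Input(env_dim)  # symbolic shape (None, 28), i.e. ndim=2
x = Dense(64, activation='relu')(inp)
x = Dense(128, activation='relu')(x)
shared = Model(inp, x)
```

Because the input tensor already has shape `(batch, 28)`, no `Flatten` is needed; a `Flatten` here only adds the `min_ndim=3` constraint that triggers the reported error in the Keras version used at the time.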

germain-hug commented 5 years ago

Hi, and sorry about the late reply. Thanks for reporting this; I just added compatibility for 1D arrays. It should work now.
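For reference, a 1D-compatible `buildNetwork` might look like the sketch below. This is a hedged reconstruction, not the actual commit: `tensorflow.keras` is assumed, and plain `Conv2D` layers stand in for the repo's `conv_block` helper.

```python
# Sketch of a shared trunk that handles both image and 1D state inputs.
from tensorflow.keras.layers import Input, Dense, Flatten, Reshape, Conv2D
from tensorflow.keras.models import Model

def build_network(env_dim):
    """env_dim excludes the batch axis, e.g. (28,) or (4, 84, 84)."""
    inp = Input(env_dim)
    if len(env_dim) > 2:
        # Image input: move the stacked-frame axis to channels-last,
        # then apply two small conv blocks as in the original code.
        x = Reshape((env_dim[1], env_dim[2], -1))(inp)
        x = Conv2D(32, (2, 2), activation='relu')(x)  # stand-in for conv_block
        x = Conv2D(32, (2, 2), activation='relu')(x)
        x = Flatten()(x)
    else:
        # 1D state: input is already (batch, features), so no Flatten.
        x = Dense(64, activation='relu')(inp)
        x = Dense(128, activation='relu')(x)
    return Model(inp, x)
```

The key change is that the non-image branch never calls `Flatten`, so a state shape like `(28,)` no longer violates the layer's `min_ndim=3` requirement.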