sebtheiler / tutorials

All of the code for my Medium articles
https://medium.com/@sebastiankt9
MIT License

Question regarding the dueling network architecture part #8

Open huanpass opened 3 years ago

huanpass commented 3 years ago

Hi, I found the following code in the network-building part of `train_dqn.py`:

```python
# Split into value and advantage streams
val_stream, adv_stream = Lambda(lambda w: tf.split(w, 2, 3))(x)  # custom splitting layer
```

It looks like the output of the hidden (conv) layers is divided into two partial parts, with one half fed to the state-value stream and the other to the advantage stream. I have also checked other implementations and the paper, and it looks like each stream should receive a complete copy of the hidden layer's output rather than just part of it. Can I ask why you split it rather than feeding the same whole data flow to both the value and advantage streams?

Many thanks! Edward

sebtheiler commented 3 years ago

The whole data flow is indeed fed to both the initial value and advantage streams. After that, there are separate dense layers for the final calculations. The Lambda layer is just for slicing the previous conv layer into the val and adv streams, as is done in Wang et al. 2016.
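For concreteness, here is a minimal sketch of the split-based head as I understand it (assuming `x` is the output of the final conv layer with an even number of channels, and `n_actions` is defined elsewhere):

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Lambda
from tensorflow.keras.initializers import VarianceScaling

# Halve the conv output along the channel axis (axis 3):
# one half becomes the value stream, the other the advantage stream
val_stream, adv_stream = Lambda(lambda w: tf.split(w, 2, 3))(x)

val_stream = Flatten()(val_stream)
val = Dense(1, kernel_initializer=VarianceScaling(scale=2.))(val_stream)  # V(s)

adv_stream = Flatten()(adv_stream)
adv = Dense(n_actions, kernel_initializer=VarianceScaling(scale=2.))(adv_stream)  # A(s, a)
```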

Sorry for the late reply, thank you for your patience. If you have any more questions or if this wasn't clear enough, please let me know and I'll try to get back to you as soon as possible.

huanpass commented 3 years ago

Hi, thanks for your reply,

The question is why you slice it rather than just sharing the same flow between the value and advantage streams. The flow fed to the value and advantage streams should be identical, and there is also no slice operation in the paper. For example:

```python
x = Conv2D(64, (3, 3), strides=1, kernel_initializer=VarianceScaling(scale=2.), activation='relu', use_bias=False)(x)
x = Conv2D(1024, (7, 7), strides=1, kernel_initializer=VarianceScaling(scale=2.), activation='relu', use_bias=False)(x)

val_stream = Flatten()(x)
val = Dense(1, kernel_initializer=VarianceScaling(scale=2.))(val_stream)

adv_stream = Flatten()(x)
adv = Dense(n_actions, kernel_initializer=VarianceScaling(scale=2.))(adv_stream)
```
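In both variants, the two streams still need to be recombined into Q-values. A minimal sketch of that aggregation step (not in the snippet above), assuming `val` and `adv` as defined:

```python
# Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)); Equation (9) in Wang et al. 2016
q_vals = Lambda(lambda streams: streams[0] + (streams[1] - tf.reduce_mean(streams[1], axis=1, keepdims=True)))([val, adv])
```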
sebtheiler commented 3 years ago

I believe there is a slice/split operation in the paper:

> Our network architecture has the same low-level convolutional structure of DQN... As shown in Figure 1, the dueling network splits into two streams of fully connected layers. The value and advantage streams both have a fully-connected layer with 512 units. The final hidden layers of the value and advantage streams are both fully-connected with the value stream having one output and the advantage as many outputs as there are valid actions. We combine the value and advantage streams using the module described by Equation (9). Rectifier non-linearities (Fukushima, 1980) are inserted between all adjacent layers.

(From 4.2, page 6)
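For reference, the aggregation module that quote refers to (Equation 9 in Wang et al. 2016) is the mean-subtracted combine:

```math
Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a'; \theta, \alpha) \right)
```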

I might be wrong about this (and it would undoubtedly be interesting to experiment with the architecture you detailed), but this is how I personally interpreted the paper.