Open anugrahasinha opened 4 years ago
It really doesn't seem to matter, although I don't have a definitive answer about what advantage there is with or without Flatten. Some implementations do not use Flatten but don't state any reason for it, for example YOLO (source: https://datascience.stackexchange.com/questions/27135/should-there-be-a-flat-layer-in-between-the-conv-layers-and-dense-layer-in-yolo). Here is a summary of my network around the dense layer:
Layer (type)                  Output Shape          Param #
conv2d_9 (Conv2D)             (None, 28, 28, 512)   2359808
batch_normalization_9 (Batch  (None, 28, 28, 512)   2048
max_pooling2d_3 (MaxPooling2  (None, 14, 14, 512)   0
conv2d_10 (Conv2D)            (None, 14, 14, 512)   2359808
batch_normalization_10 (Batc  (None, 14, 14, 512)   2048
conv2d_11 (Conv2D)            (None, 14, 14, 512)   2359808
batch_normalization_11 (Batc  (None, 14, 14, 512)   2048
encoder (Dense)               (None, 14, 14, 16)    8208
conv2d_12 (Conv2D)            (None, 14, 14, 256)   37120
batch_normalization_12 (Batc  (None, 14, 14, 256)   1024
conv2d_13 (Conv2D)            (None, 14, 14, 256)   590080
batch_normalization_13 (Batc  (None, 14, 14, 256)   1024
conv2d_14 (Conv2D)            (None, 14, 14, 128)   295040
batch_normalization_14 (Batc  (None, 14, 14, 128)   512
conv2d_15 (Conv2D)            (None, 14, 14, 128)   147584
batch_normalization_15 (Batc  (None, 14, 14, 128)   512
up_sampling2d (UpSampling2D)  (None, 28, 28, 128)   0
conv2d_16 (Conv2D)            (None, 28, 28, 64)    73792
However, I have experimented with Flatten() and Reshape() and I do not see any change in accuracy when using them, but there is some difference in the number of parameters. I am yet to draw a conclusion on this and will post my observations as soon as I do.
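One way to see where that parameter difference comes from: in Keras, a Dense layer applied to an input with more than two dimensions acts only on the last axis, i.e. it is the same matmul over channels applied independently at every spatial position. A minimal NumPy sketch (the shapes 14x14x512 and 16 units are taken from the summary above; the random tensors are just placeholders):

```python
import numpy as np

# Dense without Flatten: the kernel maps only the channel axis,
# applied independently at each of the 14x14 spatial positions.
h, w, c, units = 14, 14, 512, 16
x = np.random.randn(1, h, w, c)
kernel = np.random.randn(c, units)
bias = np.zeros(units)

# Equivalent to keras.layers.Dense(16) on a (None, 14, 14, 512) input:
y = x @ kernel + bias  # shape (1, 14, 14, 16)

# Parameter counts in the two setups:
params_no_flatten = c * units + units            # 512*16 + 16 = 8208
params_with_flatten = h * w * c * units + units  # 100352*16 + 16 = 1605648

print(y.shape, params_no_flatten, params_with_flatten)
```

The 8208 figure matches the `encoder (Dense)` row in the summary; with a Flatten in front, the same 16-unit layer would need roughly 1.6M weights (and would also lose the spatial layout), which is presumably the parameter difference you are seeing.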
Thank you very much for pointing this out.
@VeeranjaneyuluToka
Please let me know whether my understanding is correct here:
Encoder: the Dense layer is connected without a Flatten before it.
Decoder: the encoded input is not reshaped before a Conv layer is applied to it.
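If that reading is right, no Reshape is needed in the decoder: since Dense without Flatten keeps the spatial dimensions, the encoder output is already a 4D (None, 14, 14, 16) tensor, and a Conv2D can consume it directly. A quick sanity check of the parameter count, assuming conv2d_12 is a 3x3 convolution with 256 filters (the kernel size is my guess, not stated in the summary):

```python
# Conv2D applied straight to the (14, 14, 16) encoded tensor,
# with no Reshape layer in between.
kh, kw, c_in, filters = 3, 3, 16, 256
conv_params = kh * kw * c_in * filters + filters
print(conv_params)  # 37120, matching the conv2d_12 row in the summary
```

The match with the 37120 in the summary is consistent with the Conv layer reading the encoder output as a 14x14 feature map with 16 channels.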