Closed: ruining99 closed this issue 5 years ago
I think that a sampling rate of 4096 Hz is fine for initial work--it should be faster to train than 8192 Hz.
I agree that the pooling should be 1D, though I'm not sure whether max or average pooling is more appropriate here. Unless Dr. Markakis has a specific suggestion, I would try both.
I think that it's best to replicate Daniel George's architecture as closely as possible, at least at first, so you can just use the default linear activation in the Keras convolution layer and apply the ReLU activation after pooling.
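To illustrate that ordering, here is a minimal sketch of such a model in Keras. The filter counts, kernel sizes, and the input length (4096 samples, i.e. one second at 4096 Hz) are illustrative assumptions, not Daniel George's exact values; the point is only the Conv1D-with-linear-activation → pooling → ReLU sequence.

```python
# Sketch of the layer ordering discussed above: Conv1D with the default
# linear activation, then 1D max pooling, then a separate ReLU layer.
# All layer sizes are placeholder assumptions for illustration.
from tensorflow.keras import layers, models

def build_classifier(input_length=4096):
    model = models.Sequential([
        layers.Input(shape=(input_length, 1)),
        layers.Conv1D(16, 16),             # linear activation (Keras default)
        layers.MaxPooling1D(pool_size=4),  # downsample first (cheaper)
        layers.Activation("relu"),         # ReLU applied after pooling
        layers.Conv1D(32, 8),
        layers.MaxPooling1D(pool_size=4),
        layers.Activation("relu"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(2, activation="softmax"),  # e.g. signal vs. noise
    ])
    return model

model = build_classifier()
```

Swapping `MaxPooling1D` for `AveragePooling1D` in this sketch is a one-line change if both variants are to be tried.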
I think that a drop-out layer might not be a good substitute for pooling: dropout does not reduce the size of the feature maps, so the full network would stay larger and might become very expensive to train.
I'll let Dr. Markakis comment on the other items.
I found a Stack Exchange post on the ordering of pooling layers and activation layers. It seems that either order yields the same result in our case, since ReLU (the activation function we are using) is a monotonically increasing non-linearity. But since the computational cost is lower when the activation is applied to fewer values, I will apply the pooling layer first.
The post is here: https://stackoverflow.com/questions/35543428/activation-function-after-pooling-layer-or-convolutional-layer
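The equivalence claimed above can be checked numerically; here is a small sketch (window size and input length are arbitrary choices for illustration):

```python
# Because ReLU is monotonically increasing, max-pooling then ReLU gives
# the same result as ReLU then max-pooling on any 1D feature map.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)  # a fake 1D feature map

def relu(a):
    return np.maximum(a, 0.0)

def max_pool(a, size=4):
    # non-overlapping 1D max pooling (assumes len(a) % size == 0)
    return a.reshape(-1, size).max(axis=1)

pool_then_relu = relu(max_pool(x))
relu_then_pool = max_pool(relu(x))
assert np.allclose(pool_then_relu, relu_then_pool)
```

Note that this equivalence holds for max pooling but not, in general, for average pooling: the ReLU of a mean is not the mean of the ReLUs, so if average pooling is tried the order of the two layers does change the result.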
I constructed a 1D CNN classifier using Daniel George's model as a reference (below).
My code location in repository: \CNN\Classifier-Stage-1
There are several things I want to check:
Thank you! Ruining