Suryavf opened this issue 8 years ago
Hi, for binary classification the simplest thing is just to have two classes in the softmax. In your case, however, you are doing something completely different:
```matlab
net.layers{end+1} = struct('type' , 'softmax') ;
net.layers{end+1} = struct('type' , 'loss', ...
                           'class' , [1 -1]) ;
```
You are effectively stacking a softmax and then, on top of that, a softmaxlog loss (which is the default configuration of the loss layer). So just remove the softmax layer.
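For illustration, a minimal sketch of how the end of the network could look after that change; the final conv layer and its `prevDepth` channel count are assumptions for the example, not code from this issue:

```matlab
% Hypothetical final classification layer: project the previous feature map
% (assumed to have prevDepth channels) down to the 2 classes.
prevDepth = 64 ;   % assumed channel count of the previous layer's output
net.layers{end+1} = struct('type', 'conv', ...
                           'weights', {{0.01*randn(1,1,prevDepth,2,'single'), ...
                                        zeros(1,2,'single')}}, ...
                           'stride', 1, 'pad', 0) ;

% Only the loss layer on top: no explicit softmax and no 'class' field.
% Without a 'loss' option this layer computes the softmaxlog loss by default.
net.layers{end+1} = struct('type', 'loss') ;
```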
The field `class` is then set up in `cnn_train` and is the GT label (see also `vl_simplenn`).
So, basically, just remove the softmax and the `class` field from the loss layer. Also, the binary error probably wouldn't work because it expects labels `[-1, +1]`, whereas for softmaxlogloss you need `[1, 2]`. In a similar way you probably need to adjust the `getBatch` function so that it also returns labels `[1, 2]`...
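For illustration, a hedged sketch of such a `getBatch`; the `imdb` field names follow the standard MatConvNet examples and are assumptions here:

```matlab
function [im, labels] = getBatch(imdb, batch)
% Return a batch of images and labels in the {1, 2} convention expected by
% the softmaxlog loss (the binary error uses {-1, +1} instead).
im = imdb.images.data(:,:,:,batch) ;
labels = imdb.images.labels(1,batch) ;
% If the stored labels are -1/+1, remap them (the order of these two lines matters):
labels(labels == 1) = 2 ;    % positive class -> 2
labels(labels == -1) = 1 ;   % negative class -> 1
end
```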
Thanks for your help. I tested your suggestions but I have not succeeded; the misclassification remains constant.
Training parameters I used: learning rate 0.001, weight decay 0.0005, momentum 0.9.
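For reference, a sketch of how those values are typically passed to `cnn_train`; the `imdb` variable and the `@getBatch` handle are placeholders:

```matlab
[net, info] = cnn_train(net, imdb, @getBatch, ...
                        'learningRate', 0.001, ...
                        'weightDecay', 0.0005, ...
                        'momentum', 0.9, ...
                        'numEpochs', 30) ;
```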
The result obtained with epoch 30: http://s22.postimg.org/x18j9yo2p/image.png
The result obtained with epoch 3: http://s27.postimg.org/95cj612ab/image.png
Welcome to deep learning, nothing ever works the first time ;)
The performance of the model depends on a large number of things, mainly the amount of 'original' training data (note that the MNIST model, one of the smallest deep models, is trained on 60,000 examples with 10 classes). Unless you have at least a similar amount of data, it is really hard to achieve improvements over traditional machine learning formulations when training from scratch (e.g. because of the smoothness of the learned manifolds, etc.).
What sort of data are you feeding it with? In general the architecture is a bit strange: why project only to a 6-dimensional space? What sort of invariances do you expect in your data? Do you really assume spatial invariance only up to patches of size ~5 px? Are you sure that a single fully connected layer can be spatially invariant enough over a 13 x 37 grid, when a much larger network trained on 1e6 images has roughly a 13 x 13 grid followed by 3 fully connected layers? These are all really difficult decisions which one has to make in order to create a new working architecture.
What is in general a much better idea, especially if you are starting with CNNs, is to use an existing network and fine-tune it. You can also get a much easier baseline by, e.g., training a linear classifier on top of features extracted from an existing network. This gradual approach also helps a lot in getting a feel for the 'dimensionality' of the problem, which is needed to correctly pick the number of projections per layer, the spatial sizes, etc.
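As an illustration of that baseline, a rough sketch; the model file, the chosen layer index, and the use of VLFeat's `vl_svmtrain` are assumptions, not something from this thread:

```matlab
% Load a pre-trained model (e.g. downloaded from the MatConvNet model zoo)
% and bring it to the current format.
net = load('imagenet-vgg-f.mat') ;
net = vl_simplenn_tidy(net) ;

% Preprocess one image the way the network expects and run a forward pass.
im = single(imread('sample.jpg')) ;
im = imresize(im, net.meta.normalization.imageSize(1:2)) ;
im = bsxfun(@minus, im, net.meta.normalization.averageImage) ;
res = vl_simplenn(net, im) ;

% Use the activations of a late layer as a fixed feature descriptor.
feat = squeeze(res(end-2).x) ;

% Collecting such descriptors into X (D x N, single) with labels y in {-1, +1},
% a linear SVM from VLFeat would give the simple baseline classifier:
% [w, b] = vl_svmtrain(X, y, 0.01) ;
```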
So, in this regard, I can only wish you good luck in the search for the right hyper-parameters! :)
I'm working on developing a biometric system based on EEG. As a first step, I'm replicating the work of Lan Ma ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7318985
I use the network topology developed by Lan Ma, so I should obtain the same results. The network is trained with 50 samples per class and evaluated with 5 samples per class, for a total of 500 training samples and 50 evaluation samples.
Do you think that's enough?
@Suryavf Did you solve this problem? I have been trying binary classification but I get errors much higher than 1, like in your case. It seems there is something else to modify.
@dbparedes maybe this can help you... https://github.com/vlfeat/matconvnet/issues/48
Hello,
I am new to MatConvNet and I am developing a binary CNN. I have no compilation errors, but the results are meaningless. The misclassification for evaluation remains constant at every epoch. In addition, the misclassification for evaluation is always double. I tested with different data but the result is the same.
What could be the cause?