Adding objective_l2=0.0001 to the network parameters should do the job.
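For illustration, a minimal sketch of where the argument goes (the layer names and sizes below are placeholders, not taken from your network):

from nolearn.lasagne import NeuralNet
from lasagne import layers

net = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('hidden', layers.DenseLayer),
        ('output', layers.DenseLayer),
    ],
    input_shape=(None, 100),
    hidden_num_units=50,
    output_num_units=10,
    output_nonlinearity=None,
    objective_l2=0.0001,  # L2 weight decay, handled by the default objective
    regression=True,
    max_epochs=10,
    verbose=1,
)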
Hi @BenjaminBossan
I followed your advice, but got the following error message:
File "posttrain_dropout.py", line 110, in
My network looks like this:
posttrain_net_dropout = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('conv1', layers.Conv2DLayer),
        ('pool1', layers.MaxPool2DLayer),
        ('dropout1', layers.DropoutLayer),
        ('conv2', layers.Conv2DLayer),
        ('pool2', layers.MaxPool2DLayer),
        ('dropout2', layers.DropoutLayer),
        ('conv3', layers.Conv2DLayer),
        ('pool3', layers.MaxPool2DLayer),
        ('dropout3', layers.DropoutLayer),
        ('hidden4', layers.DenseLayer),
        ('dropout4', layers.DropoutLayer),
        ('maxout6', layers.FeaturePoolLayer),
        ('dropout5', layers.DropoutLayer),
        ('output', layers.DenseLayer),
    ],
    input_shape=(None, 3, image_size, image_size),
    conv1_num_filters=32, conv1_filter_size=(5, 5), pool1_pool_size=(2, 2),
    dropout1_p=0.45,
    conv2_num_filters=64, conv2_filter_size=(3, 3), pool2_pool_size=(2, 2),
    dropout2_p=0.45,
    conv3_num_filters=64, conv3_filter_size=(3, 3), pool3_pool_size=(2, 2),
    dropout3_p=0.45,
    hidden4_num_units=map_size * map_size * 2,
    dropout4_p=0.45,
    maxout6_pool_size=2,
    dropout5_p=0.45,
    output_num_units=map_size * map_size,
    output_nonlinearity=None,
    update_learning_rate=theano.shared(float32(0.05)),
    update_momentum=theano.shared(float32(0.9)),
    regression=True,
    objective_l2=0.01,
    on_epoch_finished=[
        AdjustVariable('update_learning_rate', start=0.05, stop=0.0001),
        AdjustVariable('update_momentum', start=0.9, stop=0.999),
        store_weights(),
    ],
    batch_iterator_train=BatchIterator(batch_size=128),
    max_epochs=1200,
    verbose=1,
)
Is it possible that you don't have the most recent version of nolearn? Try installing directly from github.
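One common way to do that, assuming pip is available:

pip install --upgrade git+https://github.com/dnouri/nolearn.git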
Hi,
I tried to update my version of nolearn, but somehow I messed up my setup. Specifically, when I installed the latest version of nolearn, I was told that my Theano was too old.
Could you kindly point me to links for the appropriate versions of the required packages?
Also, should I write objective_l2=0.0001 after any specific layer declaration? And is there any specific function that needs to be called for this operation?
Waiting in anticipation.
Regards, Avisek
On Sat, Jan 23, 2016 at 8:23 PM, Benjamin Bossan notifications@github.com wrote:
Is it possible that you don't have the most recent version of nolearn? Try installing directly from github.
— Reply to this email directly or view it on GitHub https://github.com/dnouri/nolearn/issues/199#issuecomment-174191827.
Following these instructions should result in the right versions. The objective_l2 parameter will apply weight decay to all layers equally.
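If you ever need the penalty on specific layers only, here is a rough sketch of one possible approach (untested; the layer names 'hidden4' and 'output' are taken from your network above, the 1e-4 factor is arbitrary, and the import path of the default objective may differ between nolearn versions):

from lasagne import regularization
from nolearn.lasagne.base import objective  # nolearn's default objective

def regularized_objective(layers_, *args, **kwargs):
    # layers_ is the dict-like collection of layer instances that nolearn
    # hands to the objective; layers can be looked up by name.
    loss = objective(layers_, *args, **kwargs)  # the usual training loss
    # Add an L2 penalty only on the selected layers.
    penalty = regularization.regularize_layer_params(
        [layers_['hidden4'], layers_['output']],
        regularization.l2,
    )
    return loss + 1e-4 * penalty

# Then pass it to the network instead of objective_l2:
#   net = NeuralNet(..., objective=regularized_objective, ...)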
Hi all, I am implementing a CNN using the Lasagne/nolearn framework. A simple example is shown below:
net2 = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('conv1', layers.Conv2DLayer),
        ('pool1', layers.MaxPool2DLayer),
        ('conv2', layers.Conv2DLayer),
        ('pool2', layers.MaxPool2DLayer),
        ('conv3', layers.Conv2DLayer),
        ('pool3', layers.MaxPool2DLayer),
        ('hidden4', layers.DenseLayer),
        ('hidden5', layers.DenseLayer),
        ('output', layers.DenseLayer),
    ],
    input_shape=(None, 1, 96, 96),
    conv1_num_filters=32, conv1_filter_size=(3, 3), pool1_pool_size=(2, 2),
    conv2_num_filters=64, conv2_filter_size=(2, 2), pool2_pool_size=(2, 2),
    conv3_num_filters=128, conv3_filter_size=(2, 2), pool3_pool_size=(2, 2),
    hidden4_num_units=500,
    hidden5_num_units=500,
    output_num_units=30,
    output_nonlinearity=None,
    # ... (rest of the definition truncated in the original post)
How can I incorporate L2 regularization under this framework?
Thanks, Avisek