duggalrahul / AlexNet-Experiments-Keras

Code examples for training AlexNet using Keras and Theano
MIT License

get_alexnet scale error #13

Open calugo opened 6 years ago

calugo commented 6 years ago

Hello, when running the Jupyter notebook I cannot build alexnet at all; I always get the set of errors attached:

(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)
(Subtensor{int64}.0, Elemwise{add,no_inplace}.0, Elemwise{add,no_inplace}.0, Subtensor{int64}.0)

ValueError                                Traceback (most recent call last)

<ipython-input> in <module>()
----> 1 alexnet = get_alexnet(input_size,nb_classes,mean_flag)
      2
      3 #print alexnet.summary()

/home/carlos/alexnet/AlexNet-Experiments-Keras/Code/alexnet_base.pyc in get_alexnet(input_shape, nb_classes, mean_flag)
     63
     64     dense_1 = Flatten(name="flatten")(dense_1)
---> 65     dense_1 = Dense(4096, activation='relu',name='dense_1',init='he_normal')(dense_1)
     66     dense_2 = Dropout(0.5)(dense_1)
     67     dense_2 = Dense(4096, activation='relu',name='dense_2',init='he_normal')(dense_2)

/home/carlos/.conda/envs/ipykernel_py2/lib/python2.7/site-packages/keras/engine/topology.pyc in __call__(self, x, mask)
    541                                      '`layer.build(batch_input_shape)`')
    542         if len(input_shapes) == 1:
--> 543             self.build(input_shapes[0])
    544         else:
    545             self.build(input_shapes)

/home/carlos/.conda/envs/ipykernel_py2/lib/python2.7/site-packages/keras/layers/core.pyc in build(self, input_shape)
    750                                  name='{}_W'.format(self.name),
    751                                  regularizer=self.W_regularizer,
--> 752                                  constraint=self.W_constraint)
    753         if self.bias:
    754             self.b = self.add_weight((self.output_dim,),

/home/carlos/.conda/envs/ipykernel_py2/lib/python2.7/site-packages/keras/engine/topology.pyc in add_weight(self, shape, initializer, name, trainable, regularizer, constraint)
    413         '''
    414         initializer = initializations.get(initializer)
--> 415         weight = initializer(shape, name=name)
    416         if regularizer is not None:
    417             self.add_loss(regularizer(weight))

/home/carlos/.conda/envs/ipykernel_py2/lib/python2.7/site-packages/keras/initializations.pyc in he_normal(shape, name, dim_ordering)
     66     fan_in, fan_out = get_fans(shape, dim_ordering=dim_ordering)
     67     s = np.sqrt(2. / fan_in)
---> 68     return normal(shape, s, name=name)
     69
     70

/home/carlos/.conda/envs/ipykernel_py2/lib/python2.7/site-packages/keras/initializations.pyc in normal(shape, scale, name)
     35
     36 def normal(shape, scale=0.05, name=None):
---> 37     return K.random_normal_variable(shape, 0.0, scale, name=name)
     38
     39

/home/carlos/.conda/envs/ipykernel_py2/lib/python2.7/site-packages/keras/backend/theano_backend.pyc in random_normal_variable(shape, mean, scale, dtype, name)
    181
    182 def random_normal_variable(shape, mean, scale, dtype=None, name=None):
--> 183     return variable(np.random.normal(loc=0.0, scale=scale, size=shape),
    184                     dtype=dtype, name=name)

mtrand.pyx in mtrand.RandomState.normal()

ValueError: scale < 0

This ultimately seems to come from the function random_normal_variable(). My question is how the scale parameter is calculated, since it always ends up negative here. Is this a known bug? Thanks in advance.

Carlos
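For reference, the frames above show where the value comes from: he_normal computes s = np.sqrt(2. / fan_in), where fan_in is taken from the Dense layer's weight shape, i.e. the flattened output size of the preceding layer. The sketch below (plain numpy with hypothetical weight shapes, not the repository's code) reproduces how a wrongly inferred, negative flattened size turns into this ValueError:

    import numpy as np

    def he_normal_scale(weight_shape):
        # For a Dense weight matrix, fan_in is the first dimension of the shape,
        # i.e. the flattened output size of the previous layer (cf. get_fans above).
        fan_in = float(weight_shape[0])
        return np.sqrt(2. / fan_in)

    # Correctly inferred Flatten size (hypothetical 9216 = 256*6*6): small positive scale.
    print(he_normal_scale((9216, 4096)))          # ~0.0147

    # Wrongly inferred, negative flattened size (hypothetical bad shape):
    # sqrt of a negative number is NaN, and np.random.normal then rejects the scale.
    bad_scale = he_normal_scale((-9216, 4096))
    try:
        np.random.normal(loc=0.0, scale=bad_scale, size=(2, 2))
    except ValueError as err:
        print(err)                                # scale < 0

If that is what is happening, a negative fan_in usually means the conv/pool output shape feeding Flatten was inferred with the wrong dimension ordering. Checking that ~/.keras/keras.json has backend set to theano and image_dim_ordering set to th (an assumption about this setup, not something visible in the traceback) would be a reasonable first step.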