jorgenkg / python-neural-network

This is an efficient implementation of a fully connected neural network in NumPy. The network can be trained by a variety of learning algorithms: backpropagation, resilient backpropagation and scaled conjugate gradient learning. The network has been developed with PyPy in mind.
BSD 2-Clause "Simplified" License

unable to save a model #9

Closed omerarshad closed 8 years ago

omerarshad commented 8 years ago

When you try to save trained weights, it gives an error because some parameters are not initialized.

jorgenkg commented 8 years ago

Thanks for the heads-up, @omerarshad. This was a lagging bug since the code was refactored. I've pushed a fix.

omerarshad commented 8 years ago

When will it be fixed?

jorgenkg commented 8 years ago

It was fixed about an hour ago. However, the previous structure of the saved file does not play nice with the new code. If you're unable to retrain the network, the solution would be to check out this commit instead: b6c6b9934668f8c496c7443e48da8080294aa6b9
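
For reference, checking out that exact commit in a local clone looks like this:

git checkout b6c6b9934668f8c496c7443e48da8080294aa6b9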

omerarshad commented 8 years ago

One more question: how do I use your code for multi-class prediction?


jorgenkg commented 8 years ago

You can define how many output signals the network is supposed to have by configuring the layers parameter in the settings dictionary. The definition below will result in a network with a hidden layer consisting of two nodes, and a single output node. Since the network only has a single output node, the network will output a single value.

settings    = {
    "layers"                : [ (2, tanh_function), (1, sigmoid_function) ],
}
network = NeuralNet( settings )

To make the network return more than one value (or class, in your case), increase the number in the last tuple (e.g. three output signals):

settings    = {
    "layers"                : [ (2, tanh_function), (3, sigmoid_function) ],
}
network = NeuralNet( settings )

omerarshad commented 8 years ago

It gives an error when we declare

(3, sigmoid_function) ],

Error: "ERROR: output size varies from the defined output setting"


jorgenkg commented 8 years ago

How is your dataset structured? Each entry needs to have as many target values as the network has output signals.

omerarshad commented 8 years ago

My data is structured as sentence vectors with a similarity score between 1 and 5, e.g.

( [0.222, 0.23, 0.23], [2.5] )


jorgenkg commented 8 years ago

I interpret the example you provided as having an input vector of size three ([0.222, 0.23, 0.23]) and a target vector of size one ([2.5]).

You can't configure the network to have more output signals than are present in the dataset you use to train the network. The number of output signals must be fixed. If some of your dataset entries have more than one target value, you will have to preprocess them so that every entry uses the same number of target values.

From the example you provided, it appears to me that the network should only have a single output.
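
To make that concrete, here is a minimal sketch of a dataset where every entry has the same number of target values (one, matching a single output node). The tuples are just an illustration and would be wrapped in the library's Instance class before training; only the first entry comes from your example.

dataset = [
    # 3 input features, exactly 1 target value per entry
    ( [0.222, 0.23, 0.23], [2.5] ),
    ( [0.870, 0.12, 0.45], [4.0] ),
    ( [0.051, 0.98, 0.33], [1.0] ),
]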

omerarshad commented 8 years ago

OK, I got you. Any help regarding implementing softmax on the output layer?


jorgenkg commented 8 years ago

That's good to hear!

I might look into it in the future. Until then, feel free to create a pull request if you feel up for a calculus challenge.

omerarshad commented 8 years ago

OK, I'll try. I liked the way you implemented the NN. How do I apply a linear transformation? I want to convert 50 inputs to 5 outputs.


jorgenkg commented 8 years ago

Then you simply have to initialize the network as:

settings    = {
    "cost_function"         : sum_squared_error,
    "n_inputs"              : 50,
    "layers"                : [ (5, linear_function) ],
}

# initialize the neural network
network = NeuralNet( settings )

If you know that the output values are in the range [0, 1], then you may try the Hellinger distance or cross entropy cost functions.
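
A sketch of what that might look like, assuming the library exposes these cost functions under the names cross_entropy_cost and hellinger_distance (the exact identifiers are an assumption, so verify them in the repo):

settings    = {
    "cost_function"         : cross_entropy_cost,   # or hellinger_distance; names assumed
    "n_inputs"              : 50,
    "layers"                : [ (5, sigmoid_function) ],   # sigmoid keeps the outputs in [0, 1]
}
network = NeuralNet( settings )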

omerarshad commented 8 years ago

These are my current settings, where I want to implement softmax. Any quick help regarding this? I have to implement a model that uses softmax to get probabilities.

settings = {
    "cost_function"         : sum_squared_error,
    "n_inputs"              : 200,
    "layers"                : [ (50, sigmoid_function), (5, linear_function), (1, softmax) ],
}


jorgenkg commented 8 years ago

If you only have one output, softmax is the same as using sigmoid. So, in your case, just use sigmoid instead.

omerarshad commented 8 years ago

Actually, my output can be any number from 1 to 5. So first I apply sigmoid, then a linear transformation to go from a hidden layer of size 50 to 5, and then I want to apply softmax on that layer.


omerarshad commented 8 years ago

A Torch implementation of the same model:

:add(vecs_to_input)
:add(nn.Linear(input_dim, self.sim_nhidden))
:add(nn.Sigmoid())
:add(nn.Linear(self.sim_nhidden, self.num_classes))
:add(nn.LogSoftMax())


jorgenkg commented 8 years ago

I've just updated the repo. The code now features a softmax activation function.

settings = {
    "cost_function"         : softmax_cross_entropy_cost,
    "n_inputs"              : 200,       # Number of network input signals
    "layers"                : [ (50, sigmoid_function), (5, softmax_function) ]
}

# initialize the neural network
network = NeuralNet( settings )

This will create a network with a single hidden layer of 50 nodes and an output layer with five nodes. You don't need to include the linear transformation layers you've used in torch.
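
Since the output layer now has five softmax nodes, each training target has to be a five-element vector. If the 1-5 score is treated as a class label, a plain one-hot encoding does the job; a minimal sketch (the one_hot helper is not part of this library, just ordinary NumPy):

import numpy as np

def one_hot(score, n_classes=5):
    # map a score in {1, ..., 5} to a 5-element target vector
    target = np.zeros(n_classes)
    target[int(score) - 1] = 1.0
    return target.tolist()

features = [0.0] * 200    # placeholder for a 200-dimensional sentence vector
targets  = one_hot(3)     # -> [0.0, 0.0, 1.0, 0.0, 0.0]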

omerarshad commented 8 years ago

Hats off to you! Great job, man.


omerarshad commented 8 years ago

There is a problem when loading a saved model that uses the softmax_cross_entropy cost function. The error is:

assert not self.cost_function == softmax_cross_entropy_cost or self.layers[-1][1] == softmax_function,\
AttributeError: NeuralNet instance has no attribute 'cost_function'


jorgenkg commented 8 years ago

Fixed! :)

omerarshad commented 8 years ago

Great work!


omerarshad commented 8 years ago

Hello! I have a general question: how do I find the delta of the input layer? My input is the output of another NN, and I need to backpropagate the error back to that previous NN. Any help will be appreciated.

Thanks, Omer


jorgenkg commented 8 years ago

Are you sure you wish to propagate the delta out of the network, instead of training the networks separately? It sounds like you may be overcomplicating your project. Deep neural networks are error prone, and they are therefore often (pre)trained in a greedy layer-wise manner - or, in your case, a network-wise manner.

Nevertheless, if you add the function below to the NeuralNet class, the code will generate the deltas for the input layer.

def first_layer_deltas(self, trainingset ):
        training_data              = np.array( [instance.features for instance in trainingset ] )
        training_targets           = np.array( [instance.targets  for instance in trainingset ] )

        layer_indexes              = range( len(self.layers) )[::-1]    # reversed
        input_signals, derivatives = self.update( training_data, trace=True )

        out                        = input_signals[-1]
        cost_derivative            = self.cost_function(out, training_targets, derivative=True).T
        delta                      = cost_derivative * derivatives[-1]

        for i in layer_indexes:
            # Loop over the weight layers in reversed order to calculate the deltas

            # Skip the bias weight
            weight_delta = np.dot( self.weights[ i ][1:,:], delta )

            # Calculate the delta for the subsequent layer
            delta = weight_delta * (1 if i == 0 else derivatives[i-1])
        #end weight adjustment loop

        return delta
#end

The function should then be called with a trainingset as the only argument:

network.first_layer_deltas( training_one )

Note: I haven't tested this code

omerarshad commented 8 years ago

Yes, I have no other option than this. My unsupervised training is based on the supervised training, so I have to add the delta of the supervised part to the unsupervised part.


omerarshad commented 8 years ago

Hello! What could be the problem if my error is not decreasing? It is stuck at some point and is not decreasing anymore. I am using softmax cross entropy as the cost function.


jorgenkg commented 8 years ago

The standard tricks are:

  • preprocessing (see the sketch below)
  • trying different activation functions (in the lower layers)
    • tried tanh?
  • try different learning functions

There are no bulletproof strategies - it might just be that your dataset is very difficult to learn.
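
A minimal preprocessing sketch, standardizing each input feature to zero mean and unit variance with plain NumPy (nothing specific to this library is assumed):

import numpy as np

# X: one row per training example, one column per input feature
X = np.array([[0.222, 0.23, 0.23],
              [0.870, 0.12, 0.45],
              [0.051, 0.98, 0.33]])

# standardize each column; the small epsilon guards against zero-variance features
mean = X.mean(axis=0)
std  = X.std(axis=0) + 1e-8
X_standardized = (X - mean) / std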

omerarshad commented 8 years ago

I am optimizing the weights of two networks using L-BFGS. One network is doing well, but the network I built using your code is stuck at an error of 1.5. This is how I get the gradient and error; the weights are assigned after being received back from L-BFGS:

grad_sup += network.gradient(theta_supervised,training_data,training_targets)

sup_error += network.error(theta_supervised,training_data,training_targets)


jorgenkg commented 8 years ago

What is the theta_supervised term? The functions network.gradient() and network.error() take a weight_vector (a list of floats of size network.n_weights) as the first argument, in addition to the latter two arguments, which you've specified correctly.

Using the latest revision of the repo:

# Train the network using SciPy
from learning_algorithms import scipyoptimize
scipyoptimize(
        network,
        your_dataset,    # change this to your dataset
        method               = "L-BFGS-B",
        ERROR_LIMIT          = 1e-4,
    )

This snippet will successfully train the network using SciPy's Limited-memory BFGS implementation through scipy.optimize.minimize.

omerarshad commented 8 years ago

We are training two networks, so we pass a concatenated array of weights to L-BFGS. theta_supervised holds the weights of one network, which we get back after one iteration of L-BFGS.

The main weight vector contains the weights of both networks.


jorgenkg commented 8 years ago

I've never tried anything similar, so unfortunately I haven't got any suggestions. Do you get the weight vector of the network by calling get_weights()?
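
If it helps, one way the concatenated setup could be wired together is to slice the joint weight vector by each network's n_weights before delegating to gradient() and error(). A rough, untested sketch (split_objective, network_unsup and theta are names invented for the example):

def split_objective( theta, network_unsup, network_sup, training_data, training_targets ):
    # theta is the concatenated weight vector handed back by L-BFGS
    n_unsup            = network_unsup.n_weights
    theta_unsupervised = theta[ :n_unsup ]
    theta_supervised   = theta[ n_unsup: ]

    sup_error = network_sup.error(    theta_supervised, training_data, training_targets )
    grad_sup  = network_sup.gradient( theta_supervised, training_data, training_targets )

    # combine these with the unsupervised network's own error and gradient
    # before returning the joint objective to the optimizer
    return sup_error, grad_sup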

omerarshad commented 8 years ago

There is some problem. Is the softmax cost function implemented correctly? There might be some error, as I am unable to find out why my NN always gets stuck at an error of 1.56.


jorgenkg commented 8 years ago

I believe you're onto something. I'll take a close look over the weekend; I haven't got too much spare time during the workweek.

I took a look at the code now.

jorgenkg commented 8 years ago

No, the softmax derivative appears to be correctly calculated. I'm struggling to understand why the network won't learn your dataset. Are you able to share your dataset? Have you tried to use tanh on the intermediate layers of the network?

omerarshad commented 8 years ago

I have an RAE that outputs sentence vectors. These sentence vectors are given as input to the next NN, which uses a softmax layer to determine which class the sentence belongs to.

The problem: the second network is created using your code, and its error never decreases below 1.5. So the dataset for the supervised NN is the output of the unsupervised NN, which produces the sentence vectors.


jorgenkg commented 8 years ago

Is this repo's network the last network in the chain of networks?

If you are using the softmax activation function, this network must be the last one in the chain. And you must use softmax_neg_loss as the cost function. This strict configuration limitation originates from how the derivatives for the softmax function are calculated.
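
For context, the coupling comes from the standard softmax/cross-entropy simplification: with a one-hot target y and outputs p = softmax(z), the loss L = -Σ_i y_i · log(p_i) has the gradient ∂L/∂z_j = p_j - y_j, so the backward pass can skip the full softmax Jacobian only when the two are paired.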

omerarshad commented 8 years ago

yes


jorgenkg commented 8 years ago

Could you please try again with the code I've just pushed to the repo? I've merged the master branch with dev, which introduced a lot of improvements, specifically to the numerical stability of the softmax derivative calculations.

omerarshad commented 8 years ago

OK, I will.


omerarshad commented 8 years ago

What if I just copy the cost and activation functions?


jorgenkg commented 8 years ago

Off the top of my head, I believe that might work - no guarantees though 😊

omerarshad commented 8 years ago

I have changed the cost function and the activation function. Let's see if any miracle happens :)


omerarshad commented 8 years ago

Has softmax_cross_entropy_cost been renamed to "softmax_neg_loss"?


jorgenkg commented 8 years ago

Yes, that's right. In addition to changing the name, the calculation has changed from:

np.mean(-np.sum(targets * np.log( outputs ), axis=1))

to

-np.sum(targets * np.log( outputs ))

The update (removing np.mean) is intertwined with the calculation of the gradients. Therefore, if you're trying to update the code simply by copy-pasting the cost and activation function, you will most likely be better off using np.mean(-np.sum(targets * np.log( outputs ), axis=1)).
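
To make the difference concrete, here are the two formulations side by side as plain NumPy (a sketch of the forward cost only, ignoring the derivative branch the library's cost functions also implement; softmax_cost_old and softmax_cost_new are illustrative names, not the library's):

import numpy as np

def softmax_cost_old(outputs, targets):
    # old formulation: mean over the batch
    return np.mean( -np.sum(targets * np.log( outputs ), axis=1) )

def softmax_cost_new(outputs, targets):
    # new formulation: plain sum, matched to the updated gradient code
    return -np.sum( targets * np.log( outputs ) )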

omerarshad commented 8 years ago

So you'd prefer that I use the old cost function?


jorgenkg commented 8 years ago

Yes, I believe you should stick to the old version.

omerarshad commented 8 years ago

But if I can get good results, then I can even change over to the whole new code. Will it really make a difference?


jorgenkg commented 8 years ago

The only way I can guarantee that the calculations are correct is if the code is updated with the new files.

I'm hoping everything works out for you!

omerarshad commented 8 years ago

Yes, I am ready to change all the files, hoping that it will work better than before.


omerarshad commented 8 years ago

You wrote the first_layer_deltas code earlier. Did you remove it?
