mahatibharadwaj opened this issue 5 years ago
Hi, do you have your code available somewhere? Also, which versions of neupy and TensorFlow do you use, and what is your mu_update_factor value?
tensorflow version = 1.13.2
neupy version = 0.8.2 (not very sure where to check in the anaconda environment)
mu_update_factor is the default value
My code:

import numpy as np
np.random.seed(20)

from neupy import algorithms, layers
from neupy.exceptions import StopTraining
from neupy.layers import *
import pandas as pd
import time
from sklearn import preprocessing

data = pd.read_csv("data6k.csv")
train = data.iloc[1:600, 1:20]
test = data.iloc[601:671, 1:20]

xTrain = train.iloc[:, 3:20]
yTrain = train.iloc[:, 0]
xTest = test.iloc[:, 3:20]
yTest = test.iloc[:, 0]

network = join(Input(16), Relu(8), Linear(1))

# Stop training once the validation error drops below the threshold
def on_epoch_end(optimizer):
    if optimizer.errors.valid[-1] < 0.001:
        raise StopTraining("Training has been interrupted")

start_time = time.time()
optimizer = algorithms.LevenbergMarquardt(network, signals=on_epoch_end)
optimizer.train(xTrain, yTrain, xTest, yTest)
yPred = optimizer.predict(xTest)
I am getting this error at optimizer.train. Please help me resolve it.
I think this might require a fix. In the meantime, can you try reducing the mu_update_factor value from 1.2 to maybe 1.1 or 1.05, and/or increasing mu from 0.1 to maybe 0.2 or 0.5 (and maybe even all the way to 1)?
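To make the role of these two knobs concrete, here is a toy NumPy sketch of the adaptive damping rule they control (this mimics the general Levenberg-Marquardt strategy, not neupy's internal code): each step is damped by mu, and mu is divided by mu_update_factor after a successful step and multiplied by it after a failed one.

```python
import numpy as np

# Toy illustration (pure NumPy, not neupy internals) of how `mu` and
# `mu_update_factor` interact in Levenberg-Marquardt: fit y = w*x with a
# single parameter w, adapting the damping term as training progresses.
rng = np.random.RandomState(0)
x = rng.uniform(-1, 1, size=50)
y = 3.0 * x + 0.01 * rng.randn(50)

w = 0.0
mu, mu_update_factor = 0.1, 1.2
prev_error = np.mean((y - w * x) ** 2)

for _ in range(30):
    residual = y - w * x
    J = x.reshape(-1, 1)  # Jacobian of the predictions w.r.t. w
    # Damped normal equations: (J^T J + mu*I) * delta = J^T r
    delta = np.linalg.solve(J.T @ J + mu * np.eye(1), J.T @ residual)
    w_new = w + delta.item()
    error = np.mean((y - w_new * x) ** 2)
    if error < prev_error:
        w, prev_error = w_new, error
        mu /= mu_update_factor  # step helped: damp less, act more like Gauss-Newton
    else:
        mu *= mu_update_factor  # step hurt: damp more, act more like gradient descent

print(round(w, 2))  # close to the true slope 3.0
```

A larger mu_update_factor makes mu react more aggressively to changes in the error, which is why small error fluctuations can swing mu drastically.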
It is strange that the same code worked two days ago and is giving this issue now. It would be helpful if you could fix it asap and also explain the issue. Thanks.
Did you try to modify the mu and mu_update_factor values? Did it help to solve your problem?
I am again getting the same error with mu_update_factor=1.1, mu=0.2. It works with mu_update_factor=1.1, mu=0.1, but the predicted values deviate a lot. Can you please tell me the ideal values of mu_update_factor and mu to avoid this error?

My requirement is low model training time, with the deviation of predicted values from actual values as low as possible. I am unable to decide on ideal mu_update_factor, mu and error threshold values (currently 0.001), as many combinations do not work. Please suggest ideal values as per my requirement.
> Can you please tell me the ideal values for mu_update_factor and mu to avoid this error?
Inversion happens on the jacobian matrix, and the mu parameter is added to each diagonal element of this matrix. This trick helps to break linear dependence between rows/columns in the square matrix. But when mu is way too large, training might be less effective, since mu introduces a bit of noise. The mu_update_factor helps to increase or decrease the mu value based on the training performance. mu_update_factor=1 means that there will be no adjustments, and a large value means that a small change in the error can drastically increase or decrease mu. After many updates mu can approach zero, and that's why I thought that changing these parameters could help to resolve your problem.
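The diagonal trick described above can be demonstrated in a few lines of NumPy (an illustration, not neupy's code): a Jacobian with linearly dependent columns makes J^T J singular, and adding mu to the diagonal restores invertibility.

```python
import numpy as np

# When the Jacobian has linearly dependent columns, J^T J is singular
# and cannot be inverted; adding mu to the diagonal fixes that.
J = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])  # second column = 2 * first column
H = J.T @ J
print(np.linalg.matrix_rank(H))  # 1 -> singular 2x2 matrix

mu = 0.1
H_damped = H + mu * np.eye(2)
print(np.linalg.matrix_rank(H_damped))  # 2 -> invertible again

residual = np.array([1.0, 0.0, 0.0])
step = np.linalg.solve(H_damped, J.T @ residual)  # now solvable
```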
Thanks. But different combinations of mu_update_factor, mu and the error threshold value give this same error. How are these three related, and how do we decide how to tune them? Is it still a bug, or does the user have to decide? Trial and error is a tedious method. Does this also depend on the size of the data set?
> But different combinations of mu_update_factor, mu and error threshold value are giving this same error.
Sorry, maybe I misunderstood you. Did you say that it worked for mu_update_factor=1.1, mu=0.1?
> This is working for mu_update_factor=1.1, mu=0.1 but predicted values are deviated a lot.
> How are these three related and how do we decide how to tune them?
It's important for you to understand the algorithm before using it. Please refer to this book in order to learn more about it: https://hagan.okstate.edu/NNDesign.pdf (see Section 12).
> Is it still a bug or the user has to decide?
The mu parameter is supposed to deal with this problem, but for some reason it doesn't. I might need to put a threshold on the minimum mu value in order to ensure that the matrix remains invertible (but I'm not 100% sure whether that's the problem that you're experiencing).
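A hypothetical sketch of that fix: clamp mu to a floor so that repeated decreases by mu_update_factor can never drive it to zero and leave a (near-)singular matrix to invert. Names like MU_MIN and damped_solve are illustrative, not neupy's actual internals.

```python
import numpy as np

# Hypothetical fix: never let mu fall below a small positive floor,
# so the damped matrix J^T J + mu*I always stays invertible.
MU_MIN = 1e-7

def damped_solve(J, residual, mu):
    mu = max(mu, MU_MIN)  # enforce the floor
    H = J.T @ J + mu * np.eye(J.shape[1])
    return np.linalg.solve(H, J.T @ residual), mu

# A rank-deficient Jacobian that would break the undamped solve:
J = np.array([[1.0, 1.0],
              [2.0, 2.0]])
step, mu = damped_solve(J, np.array([1.0, 1.0]), mu=0.0)
print(mu)  # 1e-07: the floor kicked in
```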
Would it be possible for you to set verbose=True and share the output that you're observing in the terminal?

optimizer = algorithms.LevenbergMarquardt(network, signals=on_epoch_end, verbose=True)
Thanks for the information. I observed another strange thing with the parameters: the same combination of mu, mu_update_factor and error threshold doesn't always give the same result. Sometimes it gives this error and sometimes it works. I think this needs to be fixed.
Same error, even after changing mu and mu_update_factor.

Main information

[ALGORITHM] LevenbergMarquardt

[OPTION] loss = mse
[OPTION] mu = 0.1
[OPTION] mu_update_factor = 1.1
[OPTION] show_epoch = 1
[OPTION] shuffle_data = False
[OPTION] signals = None
[OPTION] target = Tensor("placeholder/target/linear-1:0", shape=(?, 1), dtype=float32)
[OPTION] verbose = True

[TENSORFLOW] Initializing Tensorflow variables and functions.
WARNING:tensorflow:From c:\python37\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
[TENSORFLOW] Initialization finished successfully. It took 0.24 seconds
@rdx10001 do you get the same error during the first training iteration or after some number of epochs?
Previously, I did not get any errors and the code ran properly; I could even see the results. Now, after implementing everything, I want to save my results, and when running the code again for that, I am facing this new issue. Please help me resolve it.

Please find the error below.