eswears opened this issue 9 years ago
I don’t have the data with me tonight, but I can e-mail them to you in the morning. The edge features were originally a wide range of integer values, representing the number of characteristics that the two nodes have in common. So my guess was that the feature space was not well represented by the limited number of training examples, and that this produced the f(x) = -inf during learning.
So, I made the edge features binary and things improved; the weights changed with each iteration. I might consider clustering the feature space to create some granularity, but if you have any other suggestions after looking at the original data, they would be greatly appreciated.
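For concreteness, the binarization amounts to something like the following sketch; the function and variable names are illustrative, not from the actual code:

```c
/* Sketch: collapse an integer count feature ("number of shared
 * characteristics") into a 0/1 indicator. Names are illustrative. */
void binarize_edge_features(const int *counts, double *features, int n_edges) {
    for (int e = 0; e < n_edges; ++e) {
        features[e] = (counts[e] > 0) ? 1.0 : 0.0;
    }
}
```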
Thanks, Eran
I'm starting to look into these results a little more. I would like to use more expressive edge features, not just binary ones. As a test, I quantized them into three categories, and the L-BFGS optimization process outputs an error similar to the one mentioned above. I'll e-mail you the model and data file. Do you have any suggestions on how to get the optimization algorithm to work with edge features that are not binary? (A sketch of the quantization appears after the log below.) The output of the learn_CRF process is:
```
argc: 7
using default of 10 iters
using default algorithm of lbfgs
using default (zero) weights
here are the model files: ./run_out/training/class1/cvi1/model1.txt
here are the data files: ./run_out/training/class1/cvi1/data1.txt
reading data...done.
reading models and creating message structures...done.
messages built
initial weights:
W[0]: 0.680375 0.566198 0.823295 -0.329554 -0.444451 -0.0452059 -0.270431 ...
W[1]: -0.0876971 -0.56613 0.959075 -0.5025 -0.404175 0.00369243 -0.86008 -0.525357 0.00969064 0.484195 -0.561394 -0.763765
calling lbfgs:
gradnorm: 3216
L-BFGS optimization terminated with status code = 2
fx = -inf, x[0] = 0.680375, x[1] = -0.211234
weights after optimization:
W[0]: 0.680375 0.566198 0.823295 -0.329554 -0.444451 ...
```
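For reference, the three-level quantization is roughly along these lines (a sketch; the LOW/HIGH thresholds are illustrative placeholders, not the ones actually used):

```c
/* Sketch: quantize an integer count into three levels {0, 1, 2}.
 * LOW and HIGH are illustrative thresholds. */
#define LOW  2
#define HIGH 5

double quantize_edge_feature(int count) {
    if (count <= LOW)  return 0.0;  /* few shared characteristics */
    if (count <= HIGH) return 1.0;  /* a moderate number */
    return 2.0;                     /* many */
}
```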
I got the same error, so I debugged the L-BFGS library and found where the error occurs. It comes from https://github.com/chokkan/liblbfgs, lines 914-917:

```c
if (param->max_linesearch <= count) {
    /* Maximum number of iteration. */
    return LBFGSERR_MAXIMUMLINESEARCH;
}
```

As a result, I tried setting max_linesearch=100 (the default is 20) in the Python interface; this avoids the error and works for me.
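If you are calling liblbfgs from C rather than through the Python bindings, the same knob is the max_linesearch field of lbfgs_parameter_t. A minimal self-contained sketch with a toy quadratic objective, just to show where the parameter is set:

```c
#include <stdio.h>
#include <lbfgs.h>

/* Toy objective f(x) = sum_i (x_i - 1)^2 with gradient g_i = 2(x_i - 1);
 * this stands in for the real CRF negative log-likelihood. */
static lbfgsfloatval_t evaluate(void *instance, const lbfgsfloatval_t *x,
                                lbfgsfloatval_t *g, const int n,
                                const lbfgsfloatval_t step) {
    lbfgsfloatval_t fx = 0.0;
    for (int i = 0; i < n; ++i) {
        lbfgsfloatval_t d = x[i] - 1.0;
        fx += d * d;
        g[i] = 2.0 * d;
    }
    return fx;
}

int main(void) {
    const int n = 4;
    lbfgsfloatval_t fx;
    lbfgsfloatval_t *x = lbfgs_malloc(n);
    for (int i = 0; i < n; ++i) x[i] = 0.0;

    lbfgs_parameter_t param;
    lbfgs_parameter_init(&param);  /* load the library defaults */
    param.max_linesearch = 100;    /* raise the line-search cap from 20 */

    int ret = lbfgs(n, x, &fx, evaluate, NULL, NULL, &param);
    printf("status = %d, fx = %f\n", ret, fx);

    lbfgs_free(x);
    return 0;
}
```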
The learn_CRF algorithm produces the same value for gradnorm, and the final weights after learning are equal to the initial weights.
The last message that is displayed is:

```
L-BFGS optimization terminated with status code = -998
fx = -inf, x[0] = 0.680375, x[1] = -0.211234
weights after optimization are:
```
...
Do you know what may be causing this?
Thanks, Eran
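A closing note on the status codes seen in this thread: going by the return-code enum in lbfgs.h of chokkan/liblbfgs, 2 appears to correspond to LBFGS_ALREADY_MINIMIZED and -998 to LBFGSERR_MAXIMUMLINESEARCH, i.e. the line search exhausted its evaluation budget, which is exactly the condition the max_linesearch workaround above addresses. A small sketch that decodes the two codes by their enum names (printing the enum values directly, so nothing is hard-coded):

```c
#include <stdio.h>
#include <lbfgs.h>

/* Decode the two lbfgs() return values seen in this thread using the
 * enum names from lbfgs.h (chokkan/liblbfgs). */
static const char *describe(int status) {
    switch (status) {
    case LBFGS_ALREADY_MINIMIZED:     /* weights never moved */
        return "initial variables already minimize the objective";
    case LBFGSERR_MAXIMUMLINESEARCH:  /* see max_linesearch above */
        return "line search reached its maximum number of evaluations";
    default:
        return "see the return-code enum in lbfgs.h";
    }
}

int main(void) {
    printf("%d -> %s\n", LBFGS_ALREADY_MINIMIZED,
           describe(LBFGS_ALREADY_MINIMIZED));
    printf("%d -> %s\n", LBFGSERR_MAXIMUMLINESEARCH,
           describe(LBFGSERR_MAXIMUMLINESEARCH));
    return 0;
}
```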