Closed · yqf3139 closed this issue 7 years ago
Is that with Sparse data?
I know why this happens. It is (probably) because the indices in your data are not sorted.
E.g. if you look at your training file, is it like this:
56:1.0 120:1.0 103212:1.0 241234123:1.0 // columns increase from left to right
or
120:1.0 103212:1.0 56:1.0 241234123:1.0 // the order is not respected?
I will update this by raising an error. In the meantime (if that is the error), can you just sort the indices before creating the sparse file?
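The sorting step above can be sketched as follows, assuming the features are held in a scipy.sparse CSR matrix (the matrix values below are only illustrative). CSR matrices expose `sort_indices()`, which sorts the column indices within each row in place, so the sparse file is then written with columns increasing from left to right:

```python
import numpy as np
from scipy.sparse import csr_matrix

# One row whose column indices are deliberately out of order (120 before 56).
data = np.array([1.0, 1.0, 1.0])
indices = np.array([120, 56, 241])
indptr = np.array([0, 3])
m = csr_matrix((data, indices, indptr), shape=(1, 300))

m.sort_indices()  # sort the column indices of each row, in place
print(list(m.indices))  # [56, 120, 241]
```

Calling this once before writing the file guarantees the per-row column order that the libsvm-style format expects.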
Hi. I checked a few samples and the column indices seem to be right. I used ssp.hstack to add new features horizontally.
Can you use this updated function to print your sparse data (with Python)?
```python
import numpy as np
from scipy.sparse import csr_matrix, csc_matrix

def fromsparsetofile(filename, array, deli1=" ", deli2=":", ytarget=None):
    zsparse = csr_matrix(csc_matrix(array))
    indptr = zsparse.indptr
    indices = zsparse.indices
    data = zsparse.data
    print(" data length %d" % (len(data)))
    print(" indices length %d" % (len(indices)))
    print(" indptr length %d" % (len(indptr)))
    f = open(filename, "w")
    counter_row = 0
    for b in range(0, len(indptr) - 1):
        # if there is a target, print it; else print nothing
        if ytarget is not None:
            f.write(str(ytarget[b]) + deli1)
        for k in range(indptr[b], indptr[b + 1]):
            if k == indptr[b]:
                # first entry of the row: no leading delimiter
                if np.isnan(data[k]):
                    f.write("%d%s%f" % (indices[k], deli2, -1))
                else:
                    f.write("%d%s%f" % (indices[k], deli2, data[k]))
            else:
                if np.isnan(data[k]):
                    f.write("%s%d%s%f" % (deli1, indices[k], deli2, -1))
                else:
                    f.write("%s%d%s%f" % (deli1, indices[k], deli2, data[k]))
        f.write("\n")
        counter_row += 1
        if counter_row % 10000 == 0:
            print(" row : %d " % (counter_row))
    f.close()
```
Example:
```python
fromsparsetofile(path + "file.sparse", my_sparse_array, deli1=" ", deli2=":", ytarget=target)
```
Thanks for your immediate response. I will have a quick check and update later.
Unfortunately, the problem remains the same. I removed two GradientBoostingForest nodes and that exception is no longer thrown. But the same exception is thrown by RandomForest, and it continues running.
Do you need the training file for debugging? If you are fairly sure the problem is caused by the indices, I will use a script to check whether there are broken indices in my training file.
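Such a checker script could look like the hypothetical sketch below: it scans a libsvm/sparse-format file and reports the lines whose column indices are not strictly increasing. The function name and the `has_label` parameter (whether each line starts with a target value, as in this thread's files) are assumptions, not part of any existing tool:

```python
def find_broken_lines(path, has_label=True):
    """Return 1-based line numbers whose column indices are not strictly increasing."""
    broken = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            parts = line.split()
            if has_label:
                parts = parts[1:]  # drop the leading target value
            cols = [int(p.split(":")[0]) for p in parts]
            # any non-increasing adjacent pair means the row is out of order
            if any(a >= b for a, b in zip(cols, cols[1:])):
                broken.append(lineno)
    return broken
```

An empty result means every row's indices are sorted; otherwise the returned line numbers point at the rows to fix.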
Hm. Are you using an "L1" model somewhere in your mix? These should be put at the end of each level, as they mess up the indices - I have just discovered that. Please send me a subset of the data that replicates this. Thank you.
I use paramsv1.txt, so the "L1" problem should not apply to my case? Please use this link to download the subset.
Ok. I have found the error... it is really stupid... I don't know why it happens, but rounding seems to be inconsistent... Can you please add rounding:20 to all tree-based models until I fix this?
EDIT: Even that won't fix it. I need to make a new release.
Thanks. I just tried it and it doesn't seem to work either. I will try the new release asap.
@kaz-Anova Hi. Is there anything I can do to help fix this bug?
Apologies for the late response. I am still working on these things. The error in your case was triggered because you have some elements with 'zero' values, like col_index:0.00000. StackNet does not expect zero values in the sparse format. However, this will be taken care of in the newer version I will release. Apologies again for coming back late. I did not forget you - I am just buried with tasks these days.
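A minimal sketch of how such `col_index:0.000000` entries arise and how to strip them, assuming the features live in a scipy.sparse matrix: CSR/CSC matrices can store explicit zeros as entries, and a libsvm-style writer would emit them as zero-valued columns. `eliminate_zeros()` drops the stored zeros in place before the file is written:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Build a row with an explicitly stored zero at column 1.
data = np.array([1.0, 0.0, 2.0])
indices = np.array([0, 1, 2])
indptr = np.array([0, 3])
m = csr_matrix((data, indices, indptr), shape=(1, 3))

print(m.nnz)         # 3: the zero at column 1 is a stored entry
m.eliminate_zeros()  # remove explicitly stored zeros, in place
print(m.nnz)         # 2: only the genuinely nonzero entries remain
```

After this call, a sparse-format writer only sees nonzero entries, so no `col:0.000000` tokens end up in the file.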
Thanks for the help.
I had the same problem, and I don't know why.
Hi. Thanks for the StackNet classifier. I encountered an exception when I tried to add some more features to the Kaggle Quora problem.
I used the paramsv1.txt but added more threads to each base classifier.