coryschillaci (issue closed 8 years ago):
```scala
val (nn,opts) = DNN.learnerX(loadSMat(indir+"trainData000.smat.lz4"), FMat(loadSMat(indir+"trainLabels000.smat.lz4")))
opts.aopts = opts
opts.featType = 1          // (1) feature type, 0=binary, 1=linear
opts.addConstFeat = false  // add a constant feature (effectively adds a $\beta_0$ term to $X\beta$)
opts.batchSize = 500
opts.reg1weight = 0.0001
opts.lrate = 0.2f
opts.texp = 0.4f
opts.npasses = 5
opts.links = iones(132,1)
DNN.dlayers(3,100,0.25f,132,opts,2)
nn.train
```
runs just fine, but if I don't cast the targets as an FMat, i.e.,
```scala
val (nn,opts) = DNN.learnerX(loadSMat(indir+"trainData000.smat.lz4"), loadSMat(indir+"trainLabels000.smat.lz4"))
opts.aopts = opts
opts.featType = 1          // (1) feature type, 0=binary, 1=linear
opts.addConstFeat = false  // add a constant feature (effectively adds a $\beta_0$ term to $X\beta$)
opts.batchSize = 500
opts.reg1weight = 0.0001
opts.lrate = 0.2f
opts.texp = 0.4f
opts.npasses = 5
opts.links = iones(132,1)
DNN.dlayers(3,100,0.25f,132,opts,2)
nn.train
```
gives
```
scala> nn.train
pass= 0
scala.MatchError: (
           1     0.59643           1     0.54870     0.96072           1           1     0.97798...
           1  0.00054444  4.6897e-12     0.93011     0.99481     0.98594  4.0819e-37     0.99795...
     0.99987  1.1189e-14   0.0054939     0.96049     0.96663  0.00032491           0    0.010586...
  1.2195e-06  0.00018742  3.1474e-09     0.28522     0.54711  5.9957e-10           0     0.92651...
  0.00010903           1           1     0.31402     0.10718    0.016666           1     0.68628...
           1  9.1819e-18     0.99999    0.032690     0.99763      1.0000           1     0.94152...
          ..          ..          ..          ..          ..          ..          ..          ..
,(  14,   0)  1
 (   6,   1)  1
 (  40,   2)  1
 (  59,   3)  1
 (  46,   4)  1
 (   1,   5)  1
 (  94,   6)  1
 (  60,   7)  1
  ...  ...  ...
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1) (of class scala.Tuple3)
  at BIDMach.models.GLM$.derivs(GLM.scala:475)
  at BIDMach.networks.DNN$GLMLayer.backward(DNN.scala:347)
  at BIDMach.networks.DNN$Layer.backward(DNN.scala:230)
  at BIDMach.networks.DNN.dobatch(DNN.scala:161)
  at BIDMach.models.Model.dobatchg(Model.scala:101)
  at BIDMach.Learner.retrain(Learner.scala:87)
  at BIDMach.Learner.train(Learner.scala:53)
  ... 33 elided
```
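For background (my reading of the trace, not stated in this thread): the failing tuple is (dense predictions, sparse targets, links), and `GLM.derivs` pattern-matches on the concrete runtime types of that tuple, so a sparse target matrix falls through every case and Scala throws `scala.MatchError`. A minimal sketch of that failure mode, using hypothetical stand-in types rather than BIDMach's actual classes:

```scala
// Hypothetical stand-in types (not BIDMach's code) illustrating how a
// match over concrete matrix types fails when the targets are sparse.
sealed trait Mat
final case class Dense(data: Array[Float]) extends Mat
final case class Sparse(idx: Array[Int], vals: Array[Float]) extends Mat

def derivs(pred: Mat, targ: Mat, links: Array[Int]): Unit =
  (pred, targ, links) match {
    case (p: Dense, t: Dense, l) => ()  // dense targets: handled
    // no case covers (Dense, Sparse, _) => scala.MatchError at runtime
  }

// derivs(Dense(Array(0.5f)), Sparse(Array(0), Array(1f)), Array(1))  // throws scala.MatchError
```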
Fixed now. You can do the same conversion for any classifier by applying the "full()" function to its target matrix, which is normally gmats(1).
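A short sketch of that workaround on the original snippet, assuming BIDMat's `full()` densifies a sparse matrix (SMat to FMat):

```scala
// Densify the sparse label matrix before handing it to the learner.
val targets = full(loadSMat(indir+"trainLabels000.smat.lz4"))  // SMat -> FMat
val (nn, opts) = DNN.learnerX(loadSMat(indir+"trainData000.smat.lz4"), targets)
```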