fawda123 / NeuralNetTools

R package of generic neural network tools
https://fawda123.github.io/NeuralNetTools/
Creative Commons Zero v1.0 Universal

Error Message: Error in strsplit(as.character(mod_in$terms[[2]]), " + ", fixed = TRUE)[[1]] : subscript out of bounds #7

Closed · amaraabara closed this issue 9 years ago

amaraabara commented 9 years ago

Hi Marcus,

I tried using NeuralNetTools to interpret a caret neural network model but encountered the error in the subject line above. I'm new to R, so the error could well be my fault, but I thought it might be useful to run it by you.

The same error is returned by all three functions: plotnet(), garson(), and lekprofile().

The error seems to be peculiar to the caret model, as the same three functions work just fine when fed an nnet model.
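For reference, here is a minimal sketch of the nnet case that does work for me, using the neuraldat example data bundled with NeuralNetTools (the variable names below come from that example data, not from my model):

library(NeuralNetTools)
library(nnet)

# a plain nnet model, fit directly rather than through caret's train()
mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5, linout = TRUE, trace = FALSE)

# all three functions accept the nnet object directly
plotnet(mod)
garson(mod)
lekprofile(mod)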

I've also included the model structure below just in case it's useful in identifying the issue. If you require any further information, please let me know.

Btw, awesome job on the package!

Cheers, Amara.

Caret model

library(caret)  # train() is from caret
library(nnet)   # method = "avNNet" averages nnet models

# nnetGrid is the tuning grid (its definition appears later in this thread)
nnetMod <- train(fgdDataTAvg2TrainXD, fgdDataTAvg2TrainY,
                 method = "avNNet",
                 tuneGrid = nnetGrid,
                 preProc = c("center", "scale"),
                 maxit = 500,
                 linout = TRUE,
                 trace = FALSE)

Code and error for plotnet (same error encountered for garson and lekprofile)

plotnet(nnetMod)
Error in strsplit(as.character(mod_in$terms[[2]]), " + ", fixed = TRUE)[[1]] : 
  subscript out of bounds

Caret model structure

str(nnetMod)
List of 19
 $ method      : chr "avNNet"
 $ modelInfo   :List of 13
  ..$ label     : chr "Model Averaged Neural Network"
  ..$ library   : chr "nnet"
  ..$ loop      : NULL
  ..$ type      : chr [1:2] "Classification" "Regression"
  ..$ parameters:'data.frame':  3 obs. of  3 variables:
  .. ..$ parameter: Factor w/ 3 levels "bag","decay",..: 3 2 1
  .. ..$ class    : Factor w/ 2 levels "logical","numeric": 2 2 1
  .. ..$ label    : Factor w/ 3 levels "#Hidden Units",..: 1 3 2
  ..$ grid      :function (x, y, len = NULL)  
  .. ..- attr(*, "srcref")=Class 'srcref'  atomic [1:8] 8 26 10 76 26 76 8 10
  .. .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x117de7e78> 
  ..$ fit       :function (x, y, wts, param, lev, last, classProbs, ...)  
  .. ..- attr(*, "srcref")=Class 'srcref'  atomic [1:8] 11 25 30 19 25 19 11 30
  .. .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x117de7e78> 
  ..$ predict   :function (modelFit, newdata, submodels = NULL)  
  .. ..- attr(*, "srcref")=Class 'srcref'  atomic [1:8] 31 29 40 19 29 19 31 40
  .. .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x117de7e78> 
  ..$ prob      :function (modelFit, newdata, submodels = NULL)  
  .. ..- attr(*, "srcref")=Class 'srcref'  atomic [1:8] 41 26 49 19 26 19 41 49
  .. .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x117de7e78> 
  ..$ predictors:function (x, ...)  
  .. ..- attr(*, "srcref")=Class 'srcref'  atomic [1:8] 50 32 50 55 32 55 50 50
  .. .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x117de7e78> 
  ..$ levels    :function (x)  
  .. ..- attr(*, "srcref")=Class 'srcref'  atomic [1:8] 51 28 51 55 28 55 51 51
  .. .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x117de7e78> 
  ..$ tags      : chr [1:4] "Neural Network" "Ensemble Model" "Bagging" "L2 Regularization"
  ..$ sort      :function (x)  
  .. ..- attr(*, "srcref")=Class 'srcref'  atomic [1:8] 53 26 53 64 26 64 53 53
  .. .. .. ..- attr(*, "srcfile")=Classes 'srcfilecopy', 'srcfile' <environment: 0x117de7e78> 
 $ modelType   : chr "Regression"
 $ results     :'data.frame':   1 obs. of  7 variables:
  ..$ decay     : num 0.01
  ..$ bag       : logi FALSE
  ..$ size      : num 2
  ..$ RMSE      : num 7.55
  ..$ Rsquared  : num 0.98
  ..$ RMSESD    : num 2.2
  ..$ RsquaredSD: num 0.0112
 $ pred        : NULL
 $ bestTune    :'data.frame':   1 obs. of  3 variables:
  ..$ size : num 2
  ..$ decay: num 0.01
  ..$ bag  : logi FALSE
 $ call        : language train.default(x = fgdDataTAvg2TrainXD, y = fgdDataTAvg2TrainY, method = "avNNet", preProcess = c("center",      "scale"), maxit = 500, linout = TRUE, trace = FALSE, tuneGrid = nnetGrid)
 $ dots        :List of 3
  ..$ maxit : num 500
  ..$ linout: logi TRUE
  ..$ trace : logi FALSE
 $ metric      : chr "RMSE"
 $ control     :List of 23
  ..$ method           : chr "boot"
  ..$ number           : num 25
  ..$ repeats          : num 25
  ..$ p                : num 0.75
  ..$ initialWindow    : NULL
  ..$ horizon          : num 1
  ..$ fixedWindow      : logi TRUE
  ..$ verboseIter      : logi FALSE
  ..$ returnData       : logi TRUE
  ..$ returnResamp     : chr "final"
  ..$ savePredictions  : logi FALSE
  ..$ classProbs       : logi FALSE
  ..$ summaryFunction  :function (data, lev = NULL, model = NULL)  
  ..$ selectionFunction: chr "best"
  ..$ preProcOptions   :List of 3
  .. ..$ thresh : num 0.95
  .. ..$ ICAcomp: num 3
  .. ..$ k      : num 5
  ..$ index            :List of 25
  .. ..$ Resample01: int [1:26180] 1 2 2 3 6 7 8 8 8 9 ...
  .. ..$ Resample02: int [1:26180] 2 4 4 4 5 5 6 6 7 11 ...
  .. ..$ Resample03: int [1:26180] 4 5 8 10 10 10 11 11 12 12 ...
  .. ..$ Resample04: int [1:26180] 1 3 3 6 7 8 8 8 9 10 ...
  .. ..$ Resample05: int [1:26180] 1 1 6 8 9 10 12 12 13 13 ...
  .. ..$ Resample06: int [1:26180] 1 1 1 1 5 7 7 9 9 9 ...
  .. ..$ Resample07: int [1:26180] 3 4 4 6 7 8 9 9 10 10 ...
  .. ..$ Resample08: int [1:26180] 1 2 4 4 4 5 7 8 11 11 ...
  .. ..$ Resample09: int [1:26180] 1 2 4 6 7 8 8 9 9 10 ...
  .. ..$ Resample10: int [1:26180] 1 3 4 8 9 9 9 11 15 17 ...
  .. ..$ Resample11: int [1:26180] 2 2 2 3 4 4 6 6 6 8 ...
  .. ..$ Resample12: int [1:26180] 1 2 4 7 7 7 9 10 11 11 ...
  .. ..$ Resample13: int [1:26180] 2 2 2 3 3 4 7 8 10 11 ...
  .. ..$ Resample14: int [1:26180] 1 3 4 5 5 5 6 6 6 7 ...
  .. ..$ Resample15: int [1:26180] 1 2 3 5 6 10 12 13 13 14 ...
  .. ..$ Resample16: int [1:26180] 1 2 4 5 5 6 9 9 9 11 ...
  .. ..$ Resample17: int [1:26180] 1 1 3 3 5 7 9 10 10 10 ...
  .. ..$ Resample18: int [1:26180] 2 3 4 5 6 7 7 7 8 8 ...
  .. ..$ Resample19: int [1:26180] 1 1 2 3 4 5 5 6 6 7 ...
  .. ..$ Resample20: int [1:26180] 1 3 4 6 6 7 7 8 10 11 ...
  .. ..$ Resample21: int [1:26180] 1 2 3 4 4 6 7 8 8 8 ...
  .. ..$ Resample22: int [1:26180] 1 1 2 2 2 3 3 4 6 7 ...
  .. ..$ Resample23: int [1:26180] 3 3 3 3 4 5 6 8 8 9 ...
  .. ..$ Resample24: int [1:26180] 1 1 5 5 6 7 7 8 10 11 ...
  .. ..$ Resample25: int [1:26180] 1 1 3 3 3 4 4 5 6 7 ...
  ..$ indexOut         :List of 25
  .. ..$ Resample01: int [1:9680] 4 5 10 11 15 18 22 23 25 27 ...
  .. ..$ Resample02: int [1:9583] 1 3 8 9 10 12 17 19 23 24 ...
  .. ..$ Resample03: int [1:9616] 1 2 3 6 7 9 14 22 23 24 ...
  .. ..$ Resample04: int [1:9634] 2 4 5 16 21 22 23 26 28 29 ...
  .. ..$ Resample05: int [1:9637] 2 3 4 5 7 11 14 15 16 18 ...
  .. ..$ Resample06: int [1:9644] 2 3 4 6 8 11 14 15 17 18 ...
  .. ..$ Resample07: int [1:9599] 1 2 5 13 14 18 20 27 28 31 ...
  .. ..$ Resample08: int [1:9611] 3 6 9 10 15 19 23 24 26 27 ...
  .. ..$ Resample09: int [1:9592] 3 5 12 14 15 16 17 22 23 27 ...
  .. ..$ Resample10: int [1:9664] 2 5 6 7 10 12 13 14 16 20 ...
  .. ..$ Resample11: int [1:9591] 1 5 7 11 14 16 18 21 25 27 ...
  .. ..$ Resample12: int [1:9661] 3 5 6 8 18 19 20 24 29 32 ...
  .. ..$ Resample13: int [1:9636] 1 5 6 9 13 21 22 23 25 29 ...
  .. ..$ Resample14: int [1:9640] 2 10 11 12 18 19 20 22 24 25 ...
  .. ..$ Resample15: int [1:9545] 4 7 8 9 11 17 18 19 26 29 ...
  .. ..$ Resample16: int [1:9756] 3 7 8 10 12 13 14 19 22 23 ...
  .. ..$ Resample17: int [1:9713] 2 4 6 8 12 13 16 19 24 28 ...
  .. ..$ Resample18: int [1:9635] 1 10 11 12 13 15 20 27 30 32 ...
  .. ..$ Resample19: int [1:9724] 12 15 17 20 23 24 26 27 28 33 ...
  .. ..$ Resample20: int [1:9656] 2 5 9 18 22 25 31 32 34 35 ...
  .. ..$ Resample21: int [1:9646] 5 9 13 14 15 16 19 20 25 27 ...
  .. ..$ Resample22: int [1:9667] 5 8 10 14 24 25 32 33 35 36 ...
  .. ..$ Resample23: int [1:9573] 1 2 7 15 18 19 23 24 27 28 ...
  .. ..$ Resample24: int [1:9671] 2 3 4 9 13 14 15 16 17 18 ...
  .. ..$ Resample25: int [1:9599] 2 8 14 15 22 24 28 30 32 35 ...
  ..$ timingSamps      : num 0
  ..$ predictionBounds : logi [1:2] FALSE FALSE
  ..$ seeds            : logi NA
  ..$ adaptive         :List of 4
  .. ..$ min     : num 5
  .. ..$ alpha   : num 0.05
  .. ..$ method  : chr "gls"
  .. ..$ complete: logi TRUE
  ..$ allowParallel    : logi TRUE
  ..$ yLimits          : num [1:2] -19.5 460.8
 $ finalModel  :List of 12
  ..$ model      :List of 5
  .. ..$ :List of 15
  .. .. ..$ n            : num [1:3] 22 2 1
  .. .. ..$ nunits       : int 26
  .. .. ..$ nconn        : num [1:27] 0 0 0 0 0 0 0 0 0 0 ...
  .. .. ..$ conn         : num [1:49] 0 1 2 3 4 5 6 7 8 9 ...
  .. .. ..$ nsunits      : num 25
  .. .. ..$ decay        : num 0.01
  .. .. ..$ entropy      : logi FALSE
  .. .. ..$ softmax      : logi FALSE
  .. .. ..$ censored     : logi FALSE
  .. .. ..$ value        : num 815895
  .. .. ..$ wts          : num [1:49] 1.72833 -0.036 0.03543 -0.00948 0.02307 ...
  .. .. ..$ convergence  : int 1
  .. .. ..$ fitted.values: num [1:26180, 1] 191 193 199 168 133 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ residuals    : num [1:26180, 1] -0.427 -2.121 -1.062 0.472 2.876 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ call         : language nnet.default(x = x[ind, , drop = FALSE], y = y[ind], weights = ..1, size = ..2, linout = TRUE, decay = ..3,      maxit = 500, trace = FALSE)
  .. .. ..- attr(*, "class")= chr "nnet"
  .. ..$ :List of 15
  .. .. ..$ n            : num [1:3] 22 2 1
  .. .. ..$ nunits       : int 26
  .. .. ..$ nconn        : num [1:27] 0 0 0 0 0 0 0 0 0 0 ...
  .. .. ..$ conn         : num [1:49] 0 1 2 3 4 5 6 7 8 9 ...
  .. .. ..$ nsunits      : num 25
  .. .. ..$ decay        : num 0.01
  .. .. ..$ entropy      : logi FALSE
  .. .. ..$ softmax      : logi FALSE
  .. .. ..$ censored     : logi FALSE
  .. .. ..$ value        : num 940886
  .. .. ..$ wts          : num [1:49] 0.16273 -0.02935 0.01564 -0.00909 0.00617 ...
  .. .. ..$ convergence  : int 1
  .. .. ..$ fitted.values: num [1:26180, 1] 191 194 200 169 133 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ residuals    : num [1:26180, 1] -0.54 -3.635 -1.918 -0.413 2.509 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ call         : language nnet.default(x = x[ind, , drop = FALSE], y = y[ind], weights = ..1, size = ..2, linout = TRUE, decay = ..3,      maxit = 500, trace = FALSE)
  .. .. ..- attr(*, "class")= chr "nnet"
  .. ..$ :List of 15
  .. .. ..$ n            : num [1:3] 22 2 1
  .. .. ..$ nunits       : int 26
  .. .. ..$ nconn        : num [1:27] 0 0 0 0 0 0 0 0 0 0 ...
  .. .. ..$ conn         : num [1:49] 0 1 2 3 4 5 6 7 8 9 ...
  .. .. ..$ nsunits      : num 25
  .. .. ..$ decay        : num 0.01
  .. .. ..$ entropy      : logi FALSE
  .. .. ..$ softmax      : logi FALSE
  .. .. ..$ censored     : logi FALSE
  .. .. ..$ value        : num 957342
  .. .. ..$ wts          : num [1:49] -0.02359 -0.02901 0.01685 -0.00894 0.00563 ...
  .. .. ..$ convergence  : int 1
  .. .. ..$ fitted.values: num [1:26180, 1] 191 194 200 169 133 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ residuals    : num [1:26180, 1] -0.639 -3.779 -2.025 -0.503 2.489 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ call         : language nnet.default(x = x[ind, , drop = FALSE], y = y[ind], weights = ..1, size = ..2, linout = TRUE, decay = ..3,      maxit = 500, trace = FALSE)
  .. .. ..- attr(*, "class")= chr "nnet"
  .. ..$ :List of 15
  .. .. ..$ n            : num [1:3] 22 2 1
  .. .. ..$ nunits       : int 26
  .. .. ..$ nconn        : num [1:27] 0 0 0 0 0 0 0 0 0 0 ...
  .. .. ..$ conn         : num [1:49] 0 1 2 3 4 5 6 7 8 9 ...
  .. .. ..$ nsunits      : num 25
  .. .. ..$ decay        : num 0.01
  .. .. ..$ entropy      : logi FALSE
  .. .. ..$ softmax      : logi FALSE
  .. .. ..$ censored     : logi FALSE
  .. .. ..$ value        : num 970311
  .. .. ..$ wts          : num [1:49] 77.994 0.835 7.987 -5.146 4.102 ...
  .. .. ..$ convergence  : int 1
  .. .. ..$ fitted.values: num [1:26180, 1] 191 189 197 167 133 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ residuals    : num [1:26180, 1] 0.221 1.7 0.322 1.663 2.74 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ call         : language nnet.default(x = x[ind, , drop = FALSE], y = y[ind], weights = ..1, size = ..2, linout = TRUE, decay = ..3,      maxit = 500, trace = FALSE)
  .. .. ..- attr(*, "class")= chr "nnet"
  .. ..$ :List of 15
  .. .. ..$ n            : num [1:3] 22 2 1
  .. .. ..$ nunits       : int 26
  .. .. ..$ nconn        : num [1:27] 0 0 0 0 0 0 0 0 0 0 ...
  .. .. ..$ conn         : num [1:49] 0 1 2 3 4 5 6 7 8 9 ...
  .. .. ..$ nsunits      : num 25
  .. .. ..$ decay        : num 0.01
  .. .. ..$ entropy      : logi FALSE
  .. .. ..$ softmax      : logi FALSE
  .. .. ..$ censored     : logi FALSE
  .. .. ..$ value        : num 924372
  .. .. ..$ wts          : num [1:49] -0.954 0.0898 -0.069 0.0323 -0.0166 ...
  .. .. ..$ convergence  : int 1
  .. .. ..$ fitted.values: num [1:26180, 1] 191 194 199 169 133 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ residuals    : num [1:26180, 1] -0.2539 -3.1718 -1.5553 -0.0262 2.7325 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:26180] "1" "2" "3" "4" ...
  .. .. .. .. ..$ : NULL
  .. .. ..$ call         : language nnet.default(x = x[ind, , drop = FALSE], y = y[ind], weights = ..1, size = ..2, linout = TRUE, decay = ..3,      maxit = 500, trace = FALSE)
  .. .. ..- attr(*, "class")= chr "nnet"
  ..$ repeats    : num 5
  ..$ bag        : logi FALSE
  ..$ names      : chr [1:22] "PhValue" "Power" "DensityAbsorber" "HeightAbsorber" ...
  ..$ terms      :Classes 'terms', 'formula' length 3 .outcome ~ PhValue + Power + DensityAbsorber + HeightAbsorber + ZusatzwasserTropfab + GypsumSlurry +      MengeKaSu + MengeKaSu2 + Pumps + L.G + SAG + Sulfur + Calcium + Magnesium + Iron + Ash + Hu +  ...
  .. .. ..- attr(*, "variables")= language list(.outcome, PhValue, Power, DensityAbsorber, HeightAbsorber, ZusatzwasserTropfab, GypsumSlurry,      MengeKaSu, MengeKaSu2, Pumps, L.G, SAG, Sulfur, Calcium, Magnesium, Iron, Ash, Hu, Sodium, DustAfterAbsorber,  ...
  .. .. ..- attr(*, "factors")= int [1:23, 1:22] 0 1 0 0 0 0 0 0 0 0 ...
  .. .. .. ..- attr(*, "dimnames")=List of 2
  .. .. .. .. ..$ : chr [1:23] ".outcome" "PhValue" "Power" "DensityAbsorber" ...
  .. .. .. .. ..$ : chr [1:22] "PhValue" "Power" "DensityAbsorber" "HeightAbsorber" ...
  .. .. ..- attr(*, "term.labels")= chr [1:22] "PhValue" "Power" "DensityAbsorber" "HeightAbsorber" ...
  .. .. ..- attr(*, "order")= int [1:22] 1 1 1 1 1 1 1 1 1 1 ...
  .. .. ..- attr(*, "intercept")= int 1
  .. .. ..- attr(*, "response")= int 1
  .. .. ..- attr(*, ".Environment")=<environment: 0x127d67600> 
  .. .. ..- attr(*, "predvars")= language list(.outcome, PhValue, Power, DensityAbsorber, HeightAbsorber, ZusatzwasserTropfab, GypsumSlurry,      MengeKaSu, MengeKaSu2, Pumps, L.G, SAG, Sulfur, Calcium, Magnesium, Iron, Ash, Hu, Sodium, DustAfterAbsorber,  ...
  .. .. ..- attr(*, "dataClasses")= Named chr [1:23] "numeric" "numeric" "numeric" "numeric" ...
  .. .. .. ..- attr(*, "names")= chr [1:23] ".outcome" "PhValue" "Power" "DensityAbsorber" ...
  ..$ coefnames  : chr [1:22] "PhValue" "Power" "DensityAbsorber" "HeightAbsorber" ...
  ..$ call       : language avNNet.formula(formula = .outcome ~ ., data = dat, size = param$size, decay = param$decay, maxit = 500,      linout = TRUE, trace = FALSE, bag = param$bag)
  ..$ xlevels    : Named list()
  ..$ xNames     : chr [1:22] "PhValue" "Power" "DensityAbsorber" "HeightAbsorber" ...
  ..$ problemType: chr "Regression"
  ..$ tuneValue  :'data.frame': 1 obs. of  3 variables:
  .. ..$ size : num 2
  .. ..$ decay: num 0.01
  .. ..$ bag  : logi FALSE
  ..$ obsLevels  : logi NA
  ..- attr(*, "class")= chr [1:2] "avNNet.formula" "avNNet"
 $ preProcess  :List of 19
  ..$ call      : chr "scrubed"
  ..$ dim       : int [1:2] 26180 22
  ..$ bc        : NULL
  ..$ yj        : NULL
  ..$ et        : NULL
  ..$ mean      : Named num [1:22] 5.21 634.83 1.08 10.04 62.84 ...
  .. ..- attr(*, "names")= chr [1:22] "PhValue" "Power" "DensityAbsorber" "HeightAbsorber" ...
  ..$ std       : Named num [1:22] 0.3195 70.8779 0.0137 0.3885 33.636 ...
  .. ..- attr(*, "names")= chr [1:22] "PhValue" "Power" "DensityAbsorber" "HeightAbsorber" ...
  ..$ ranges    : NULL
  ..$ rotation  : NULL
  ..$ method    : chr [1:2] "center" "scale"
  ..$ thresh    : num 0.95
  ..$ pcaComp   : NULL
  ..$ numComp   : NULL
  ..$ ica       : NULL
  ..$ k         : num 5
  ..$ knnSummary:function (x, ...)  
  ..$ bagImp    : NULL
  ..$ median    : NULL
  ..$ data      : NULL
  ..- attr(*, "class")= chr "preProcess"
 $ trainingData:'data.frame':   26180 obs. of  23 variables:
  ..$ PhValue              : num [1:26180] 5.07 5.87 5.05 5 5.08 ...
  ..$ Power                : num [1:26180] 665 666 670 669 669 ...
  ..$ DensityAbsorber      : num [1:26180] 1.07 1.05 1.08 1.08 1.08 ...
  ..$ HeightAbsorber       : num [1:26180] 9.88 9.84 9.81 9.8 9.8 ...
  ..$ ZusatzwasserTropfab  : num [1:26180] 0.275 0.277 0.279 0.282 0.284 ...
  ..$ GypsumSlurry         : num [1:26180] 129 129 128 128 127 ...
  ..$ MengeKaSu            : num [1:26180] 9.37 9.64 25.87 35.19 35.18 ...
  ..$ MengeKaSu2           : num [1:26180] 9.49 9.7 25.48 32.33 32.29 ...
  ..$ Pumps                : int [1:26180] 4 4 4 4 4 4 4 4 4 4 ...
  ..$ L.G                  : num [1:26180] 12.4 12.4 12.2 12.2 12.2 ...
  ..$ SAG                  : num [1:26180] 0.913 0.913 0.911 0.921 0.936 ...
  ..$ Sulfur               : num [1:26180] 0.416 0.399 0.399 0.399 0.389 ...
  ..$ Calcium              : num [1:26180] 0.764 0.786 0.786 0.786 0.796 ...
  ..$ Magnesium            : num [1:26180] 0.143 0.141 0.141 0.141 0.14 ...
  ..$ Iron                 : num [1:26180] 0.336 0.334 0.334 0.334 0.33 ...
  ..$ Ash                  : num [1:26180] 6.09 6.12 6.12 6.12 6.1 ...
  ..$ Hu                   : num [1:26180] 8658 8557 8557 8557 8507 ...
  ..$ Sodium               : num [1:26180] 0.0422 0.0412 0.0412 0.0412 0.0404 ...
  ..$ DustAfterAbsorber    : num [1:26180] 6.11 7.2 6.25 7.13 6.53 ...
  ..$ DustBeforeAbsorber   : num [1:26180] 18.2 21.5 18.2 21 18.3 ...
  ..$ SulfurStreamBeforeFGD: num [1:26180] 1176299 1184163 1210327 1168576 1152878 ...
  ..$ Potassium            : num [1:26180] 0.027 0.0251 0.0251 0.0251 0.0241 ...
  ..$ .outcome             : num [1:26180] 191 190 198 169 136 ...
 $ resample    :'data.frame':   25 obs. of  3 variables:
  ..$ RMSE    : num [1:25] 6.91 10.26 5.22 7.5 9.7 ...
  ..$ Rsquared: num [1:25] 0.985 0.966 0.992 0.982 0.97 ...
  ..$ Resample: chr [1:25] "Resample01" "Resample02" "Resample03" "Resample04" ...
 $ resampledCM : NULL
 $ perfNames   : chr [1:2] "RMSE" "Rsquared"
 $ maximize    : logi FALSE
 $ yLimits     : num [1:2] -19.5 460.8
 $ times       :List of 3
  ..$ everything:Class 'proc_time'  Named num [1:5] 1737.9 15.1 1773.8 0 0
  .. .. ..- attr(*, "names")= chr [1:5] "user.self" "sys.self" "elapsed" "user.child" ...
  ..$ final     :Class 'proc_time'  Named num [1:5] 74.625 0.335 75.231 0 0
  .. .. ..- attr(*, "names")= chr [1:5] "user.self" "sys.self" "elapsed" "user.child" ...
  ..$ prediction: logi [1:3] NA NA NA
 - attr(*, "class")= chr "train"
fawda123 commented 9 years ago

Hi Amara,

You're using a model-averaging method from the train function that averages multiple nnet models to create the final output. As far as I can tell, the output doesn't include enough information to use the plotnet, garson, or olden functions from my package, nor would it be appropriate to do so, since it's a combined model. I would suggest recreating the individual models that went into the average and looking at each one separately; I would imagine they are not very different. The output from the train function does include these models, but unfortunately some of the variables needed by the NeuralNetTools functions are tied up in a temporary environment that I couldn't access. Try this code instead: it isolates the individual models from the output, recreates them, and then runs the NeuralNetTools functions on them. Hope that helps...

library(NeuralNetTools)
library(caret)

mod <- train(Y1 ~ X1 + X2 + X3, method = 'avNNet', data = neuraldat,
  linout = TRUE)

allmods <- mod$finalModel$model

mod <- allmods[[1]] # first model, change this number to look at the other models
wts <- mod$wts
decay <- mod$decay
struct <- mod$n

# recreate the component model; maxit = 0 keeps the supplied weights unchanged,
# and linout = TRUE matches the original train call
recmod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, Wts = wts, decay = decay, 
  size = struct[2], maxit = 0, linout = TRUE)

# use the functions to look at the individual model
plotnet(recmod)
garson(recmod)
olden(recmod, 'Y1')
lekprofile(recmod)
amaraabara commented 9 years ago

Hey Marcus,

Thanks for your response; the code you posted worked fine. I do seem to have a problem with lekprofile(), though: the plot is pretty much blank apart from one variable (see the screenshots below). Do you think this might have something to do with the data, or do I need to include additional arguments in the function?

Also, I noticed that lek.fun() lets you select specific variables to plot using var.sens. Does lekprofile() allow you to do this? Based on the manual it doesn't seem so, but I just wanted to check.

[Screenshots attached: plotnet, olden, and lekprofile output]

fawda123 commented 9 years ago

Hi Amara,

Could you please provide a reproducible example that illustrates the problem? It's hard to solve without being able to reproduce it on my end.

Also, you are right about the difference between lekprofile and lek.fun. I don't remember why I removed that feature... However, you can return the actual values used in the plot by setting the val_out argument to TRUE. The values come back in long format, which you can plot yourself with ggplot2. Something like this:

library(NeuralNetTools)
library(nnet)
library(dplyr)
library(ggplot2)

set.seed(123)

mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5)

vals <- lekprofile(mod, val_out = TRUE)

toplo <- filter(vals, exp_name %in% c('X1', 'X3'))

ggplot(toplo, aes(x = Explanatory, y = Response, group = Splits, colour = Splits)) + 
  geom_line() + 
  facet_grid(resp_name ~ exp_name)

This is tedious, though. You could open another issue if you want this feature back in the function.

Best,

Marcus

amaraabara commented 9 years ago

Hi Marcus,

Apologies for the delay in sending this across to you. Please find below a sample of the code I used to produce the charts above. Here's a link to the dataset I used as well: https://www.dropbox.com/s/l6it7mq5bo0aa6l/FGD_Data_FY_TAvg.csv?dl=0. Any assistance in figuring out what the issue might be would be much appreciated.

I think it would be really cool if lekprofile had the lek.fun feature that lets you plot the sensitivity analysis for selected variables. I'll go ahead and raise a new issue as suggested; hopefully it isn't too much of a hassle to have it included.

Thanks, Amara.

fgdDataTAvg <- read.csv("FGD_Data_FY_TAvg.csv", stringsAsFactors=FALSE)

fgdDataTAvg1 <- na.omit(fgdDataTAvg)

library(lubridate)

# Converting the DateTime variable into POSIXct format
fgdDataTAvg1$DateTime <- dmy_hms(fgdDataTAvg1$DateTime, truncated = 3)

# Extracting the training set from the data (filter() is from dplyr;
# fgdDataTAvg1 is the cleaned dataset created above)
library(dplyr)
fgdDataTAvg2Train <- filter(fgdDataTAvg1,
                            DateTime > "2014-03-01 00:00:00" & DateTime < "2014-07-01 00:00:00")

# Removing AdipicAcid, SAG, SO2AfterFGD and DateTime from the training set
fgdDataTAvg2TrainY <- fgdDataTAvg2Train$SO2AfterFGD
fgdDataTAvg2TrainX <- fgdDataTAvg2Train[,-17]
fgdDataTAvg2TrainX <- fgdDataTAvg2TrainX[,-16]
fgdDataTAvg2TrainX <- fgdDataTAvg2TrainX[,-8]
fgdDataTAvg2TrainX <- fgdDataTAvg2TrainX[,-1]

# Checking for in-between predictor correlations
library(caret)
corThresh <- 0.75
tooHigh <- findCorrelation(cor(fgdDataTAvg2TrainX), corThresh)
names(fgdDataTAvg2TrainX)[tooHigh]

drops <- c("Suspension","SO2BeforeFGD","FlueGasFlowBeforeFGD","Carbon","Silicon","Aluminium")
fgdDataTAvg2TrainXD <- fgdDataTAvg2TrainX[, !(names(fgdDataTAvg2TrainX) %in% drops)]

nnetGrid <- expand.grid(.decay = c(0.01),
                        .bag = FALSE,
                        .size = c(2))

set.seed(2)

library(nnet)
system.time(NNmod <- train(fgdDataTAvg2TrainXD, fgdDataTAvg2TrainY,
                            method = "avNNet",
                            tuneGrid = nnetGrid,
                            preProc = c("center", "scale"),
                            maxit = 500,
                            linout = TRUE,
                            trace = FALSE))

# Model extraction to enable neural plots
mod <- NNmod
allmods <- mod$finalModel$model

mod <- allmods[[1]] # first model, change this number to look at the other models
wts <- mod$wts
decay <- mod$decay
struct <- mod$n

# recreate
Y <- as.data.frame(fgdDataTAvg2TrainY)
recmod <- nnet(fgdDataTAvg2TrainXD, Y, Wts = wts, decay = decay, size = struct[2], maxit = 0)
names(fgdDataTAvg2TrainXD) 

# use the functions to look at the individual model
library(NeuralNetTools)
plotnet(recmod)
garson(recmod)    
olden(recmod, 'Y')
lekprofile(recmod)
fawda123 commented 9 years ago

Hi Amara,

You're missing some steps when you recreate a single model from the full model list. Pay attention to the arguments you used in your initial call to train and make sure you use the same ones when you recreate the model with nnet. Specifically, you forgot to set a linear output and to center/scale the input variables. Try it again with these lines for the model recreation:

# center and scale the inputs, matching preProc = c("center", "scale") in train
X <- scale(fgdDataTAvg2TrainXD)
X <- as.data.frame(X)

# restore the linear output used in the original call
recmod <- nnet(X, Y, Wts = wts, decay = decay, size = struct[2], 
               maxit = 0, linout = TRUE)
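
With the inputs centered/scaled and the linear output restored, the plotting functions from earlier in the thread should run on the recreated model as before:

plotnet(recmod)
garson(recmod)
lekprofile(recmod)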
amaraabara commented 9 years ago

Thanks for your help, Marcus; it worked!