Closed: fernandafalves closed this issue 3 years ago
Hello, did you find a solution to this problem or a way to work around it? I am currently encountering the error message when using train() with method = "AdaBoost.M1". Thanks in advance for your answer.
In these cases, there is no error, just a warning:

There were missing values in resampled performance measures

This is almost always because some tuning parameter combination produced predictions that are constant for all samples. train() tries to compute the R^2 and, since that statistic requires non-zero variance in the predictions, it produces an NA.
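A quick way to see this outside of train() (a minimal sketch using caret's postResample(), which computes the same regression summary statistics): a constant prediction vector yields an NA R^2, while RMSE and MAE are still returned.

```r
library(caret)

set.seed(1)
obs  <- rnorm(20)      # observed outcomes
pred <- rep(3.5, 20)   # constant predictions, i.e. zero variance

# R^2 is based on the correlation between obs and pred; with a
# zero-variance prediction vector the correlation is undefined,
# so Rsquared comes back NA (RMSE and MAE are still computed).
postResample(pred = pred, obs = obs)
```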
I am also having the same problem when doing regression with XGBoost. I am trying to compare its accuracy with other models such as RF and SVM. RF and SVM produce R^2 values while XGBoost does not. Is it considered scientific to leave the R^2 as NA, or is there a way around it? In addition, I am somewhat confused that some models work while others don't. Thank you in advance.
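If it helps, one way to see which part of the tuning grid is responsible (a sketch, assuming a fitted train() object here called fit; the name is hypothetical) is to look at the per-combination resampling results:

```r
# fit is assumed to be a caret::train object from an xgboost regression, e.g.
# fit <- train(y ~ ., data = dat, method = "xgbTree",
#              trControl = trainControl(method = "cv", number = 10))

head(fit$results)                     # resampled RMSE / Rsquared / MAE per tuning combination
subset(fit$results, is.na(Rsquared))  # combinations whose predictions were constant
fit$bestTune                          # the combination train() actually kept (chosen by RMSE)
```

The final model is usually unaffected; the NAs tend to come from degenerate corners of the tuning grid, which can be trimmed with a custom tuneGrid.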
I'm having a problem using k-fold cross-validation with the random forest method in the caret package. Initially, one of the outputs was the error "Error in randomForest.default(x, y, mtry = param$mtry, ...) : Need at least two classes to do classification.", even though I do have two classes for the classification, "Normal" and "Failure". When I posted this question at https://datascience.stackexchange.com/questions/69660/recommendations-for-statistical-models-given-my-dataset/69686#69686, one of the recommendations was to use stratified k-fold cross-validation, given that my dataset has far more "Normal" cases than "Failure" cases. However, after implementing that approach, the message "Warning message: In nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, : There were missing values in resampled performance measures." appears.
Could someone help me?
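For reference, a minimal sketch of the stratified-folds idea (assuming a data frame named dat with a two-class factor outcome named Class; both names are hypothetical): createFolds() samples within each outcome level, so passing its training indices to trainControl() keeps "Failure" rows in every resample.

```r
library(caret)

set.seed(42)
# Stratified 10-fold CV: createFolds() splits within each class of the outcome,
# so every fold contains both "Normal" and "Failure" rows.
folds <- createFolds(dat$Class, k = 10, returnTrain = TRUE)

ctrl <- trainControl(method = "cv",
                     index = folds,
                     classProbs = TRUE,
                     summaryFunction = twoClassSummary)

rf_fit <- train(Class ~ ., data = dat,
                method = "rf",
                metric = "ROC",
                trControl = ctrl)
```

If "Failure" is very rare, some resamples may still contain too few of those cases to compute every statistic, which is another way the "missing values in resampled performance measures" warning can appear.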
The R script:
The output:
A sample of the data: