Open nettoyoussef opened 4 years ago
We have a workaround for the problem by passing the feature names explicitly, as shown here:
```r
importance <- xgb.importance(model = model$models[[1]], feature_names = colnames(features))
```
But I still think it would be advisable to fix the underlying problem.
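For context, a minimal sketch of the workaround in use. This assumes a feature matrix `features` with named columns and a label vector `label`; the variable names and parameter values here are illustrative, not taken from the original report:

```r
library(xgboost)

# Illustrative data: a numeric feature matrix with named columns
# and a binary label vector.
dtrain <- xgb.DMatrix(data = as.matrix(features), label = label)

# Cross-validate and keep the per-fold models via the callback.
cv <- xgb.cv(
  params = list(objective = "binary:logistic"),
  data = dtrain,
  nrounds = 10,
  nfold = 5,
  callbacks = list(cb.cv.predict(save_models = TRUE))
)

# Workaround: pass the column names explicitly, since the saved
# fold models do not carry the feature names themselves.
importance <- xgb.importance(
  model = cv$models[[1]],
  feature_names = colnames(features)
)
xgb.plot.importance(importance)
```
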
Sorry for the delay.
Note to self: need to store the feature names in the booster after UBJSON support is merged. This line sets the feature names for the booster: https://github.com/dmlc/xgboost/blob/e94b76631035cd8b3a5cdd0c883225f069e74686/R-package/R/xgb.train.R#L380
Hi all,
Long-time fan of your work on the XGBoost algorithm/implementation. It is super fast and memory-friendly.
I found a problem when trying to inspect feature importance with the `xgb.cv` function: it doesn't return the feature names when using the callback `cb.cv.predict(save_models = TRUE)`. I noticed this while trying to plot the model importance using `xgb.plot.importance`. Do the numbers refer to the Python way of counting columns (i.e., starting from 0)? I made an MRE below:
Xgboost version: xgboost_0.90.0.2 (R package)
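(The original MRE is not included in this excerpt. A hedged sketch of the kind of reproduction described, using the `agaricus` demo data bundled with the package rather than the reporter's actual data, might look like:)

```r
library(xgboost)
data(agaricus.train, package = "xgboost")

# Cross-validate, saving the per-fold models via the callback.
cv <- xgb.cv(
  params = list(objective = "binary:logistic"),
  data = xgb.DMatrix(agaricus.train$data, label = agaricus.train$label),
  nrounds = 5,
  nfold = 3,
  callbacks = list(cb.cv.predict(save_models = TRUE))
)

# Reported problem: without passing feature_names explicitly, the
# importance table shows numeric feature indices rather than the
# original column names.
importance <- xgb.importance(model = cv$models[[1]])
xgb.plot.importance(importance)
```
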