Closed Keniajin closed 3 years ago
Additionally, how can I access the feature importances as a list or dataframe after fitting?
```python
import gpboost as gpb

# Define the random effects model
gp_model = gpb.GPModel(group_data=group)
# Create dataset for gpb.train
data_train = gpb.Dataset(X_train, y_train)
# params = {'objective': 'binary', 'learning_rate': 0.1,
#           'max_depth': 6, 'min_data_in_leaf': 5, 'verbose': 0}
# Other parameters not contained in the grid of tuning parameters
params = {'objective': 'binary', 'verbose': 0, 'num_leaves': 2**10}
# Train the model
bst = gpb.train(params=params, train_set=data_train, gp_model=gp_model, num_boost_round=32)
gp_model.summary()  # estimated covariance parameters
ax = gpb.plot_importance(bst)
```
Thanks a lot for reporting this! I have fixed it. Starting from gpboost version 0.5.1 (on PyPI), pandas DataFrames no longer cause problems in the `grid_search_tune_parameters` function.
Feature importances can be obtained as follows:
```python
feature_importances = bst.feature_importance(importance_type='gain')
```
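To get the importances as a ranked list paired with feature names, one possible sketch — the `feature_names` and importance values below are hypothetical stand-ins for what `bst.feature_name()` and `bst.feature_importance(importance_type='gain')` would return:

```python
# Hypothetical stand-ins for bst.feature_name() and
# bst.feature_importance(importance_type='gain')
feature_names = ["age", "income", "region"]
importances = [3.1, 12.5, 0.0]

# Pair each feature with its importance and sort, most important first
ranked = sorted(zip(feature_names, importances), key=lambda t: t[1], reverse=True)
print(ranked)  # [('income', 12.5), ('age', 3.1), ('region', 0.0)]
```

The same pairs can be passed to `pandas.DataFrame` if a dataframe is preferred over a list.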
I am trying to perform tuning of parameters with the following approach, but I get the error `take() does not accept boolean indexers`. Is there a problem with using the pandas data frame, or what am I missing?