szilard / GBM-perf

Performance of various open source GBM implementations
MIT License

lightgbm with categorical_feature encoding (instead of OHE) #38

Closed (szilard closed this issue 4 years ago)

szilard commented 4 years ago

Instead of OHE (with sparse.model.matrix) we can use lightgbm's built-in categorical encoding, in which the data is stored as integers but is treated as categorical.
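To make this concrete, here is a toy illustration (not from the benchmark; the column names and values are made up, and the exact integer codes may vary by lightgbm version):

library(data.table)
library(lightgbm)

dx <- data.table(carrier = c("AA","DL","AA","UA"), dist = c(300, 1000, 450, 900))
conv <- lgb.convert_with_rules(dx)
conv$data          # carrier is now stored as integer codes; dist is unchanged
conv$rules         # named list with the level -> integer mapping for carrier
names(conv$rules)  # "carrier", i.e. exactly the categorical columns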

OHE:

library(Matrix)      # for sparse.model.matrix
library(lightgbm)

## one-hot encode train and test together so the factor levels line up
X_train_test <- sparse.model.matrix(dep_delayed_15min ~ .-1, data = rbind(d_train, d_test))
n1 <- nrow(d_train)
n2 <- nrow(d_test)
X_train <- X_train_test[1:n1,]
X_test <- X_train_test[(n1+1):(n1+n2),]

dlgb_train <- lgb.Dataset(data = X_train, label = ifelse(d_train$dep_delayed_15min=='Y',1,0))

## train and print the elapsed time
cat(system.time({
  md <- lgb.train(data = dlgb_train, 
            objective = "binary", 
            nrounds = 100, num_leaves = 512, learning_rate = 0.1, 
            verbose = 0)
})[[3]]," ",sep="")
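For reference, the AUC values reported below can be computed along these lines (a sketch, assuming the ROCR package is used for scoring):

library(ROCR)

phat <- predict(md, X_test)
rocr_pred <- prediction(phat, ifelse(d_test$dep_delayed_15min=='Y',1,0))
performance(rocr_pred, "auc")@y.values[[1]]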

cat.enc:

library(data.table)
library(lightgbm)

d_all <- rbind(d_train, d_test)
d_all$dep_delayed_15min <- ifelse(d_all$dep_delayed_15min=="Y",1,0)

## convert the character/factor columns to integer codes;
## $rules records the level -> integer mapping, so names($rules)
## is exactly the set of categorical columns
d_all_wrules <- lgb.convert_with_rules(d_all)
d_all <- d_all_wrules$data
cols_cats <- names(d_all_wrules$rules)

## split back into train/test (data.table row indexing)
d_train <- d_all[1:nrow(d_train)]
d_test <- d_all[(nrow(d_train)+1):(nrow(d_train)+nrow(d_test))]

p <- ncol(d_all)-1
dlgb_train <- lgb.Dataset(data = as.matrix(d_train[,1:p]), label = d_train$dep_delayed_15min)

## train and print the elapsed time; categorical_feature tells lightgbm
## to treat the integer-coded columns as categorical rather than ordinal
cat(system.time({
  md <- lgb.train(data = dlgb_train, 
            objective = "binary", 
            nrounds = 100, num_leaves = 512, learning_rate = 0.1, 
            categorical_feature = cols_cats,
            verbose = 0)
})[[3]]," ",sep="")
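Scoring is analogous, except the test features go through as.matrix() and the label is already 0/1 after the conversion above (again a sketch with ROCR):

phat <- predict(md, as.matrix(d_test[,1:p]))
rocr_pred <- prediction(phat, d_test$dep_delayed_15min)
performance(rocr_pred, "auc")@y.values[[1]]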

The main difference between the two (excerpts from above):

OHE:

X_train_test <- sparse.model.matrix(dep_delayed_15min ~ .-1, data = rbind(d_train, d_test))

  md <- lgb.train(data = dlgb_train, 
            nrounds = 100, num_leaves = 512, learning_rate = 0.1, 

cat.enc:

d_all_wrules <- lgb.convert_with_rules(d_all)       
d_all <- d_all_wrules$data
cols_cats <- names(d_all_wrules$rules) 

cat(system.time({
  md <- lgb.train(data = dlgb_train, 
            nrounds = 100, num_leaves = 512, learning_rate = 0.1, 
            categorical_feature = cols_cats,

Full code here: https://github.com/szilard/GBM-perf/tree/master/wip-testing/lightgbm-catenc/cpu/run

Timings [sec] and AUC:

CPU r4.8xlarge:

size   tool              time [s]   AUC
0.1m   lightgbm OHE        2.107    0.7301411
0.1m   lightgbm catenc     2.137    0.7174663
1m     lightgbm OHE        3.998    0.7655526
1m     lightgbm catenc     4.058    0.7650181
10m    lightgbm OHE       20.749    0.7745457
10m    lightgbm catenc    20.845    0.792273
szilard commented 4 years ago

GPU p3.2xlarge (Tesla V100):

Same code, just add device = "gpu" to the training function:

  md <- lgb.train(data = dlgb_train, 
            nrounds = 100, num_leaves = 512, learning_rate = 0.1, 
            device = "gpu",

Full code here: https://github.com/szilard/GBM-perf/tree/master/wip-testing/lightgbm-catenc/gpu

Timings [sec] and AUC:

size   tool              time [s]   AUC
0.1m   lightgbm OHE        8.565    0.7301211
0.1m   lightgbm catenc     8.665    0.7155734
1m     lightgbm OHE       14.461    0.766018
1m     lightgbm catenc    11.929    0.7676921
10m    lightgbm OHE       68.609    0.7749303
10m    lightgbm catenc    50.024    0.7926504

another run:

size   tool              time [s]   AUC
0.1m   lightgbm OHE        8.565    0.7301271
0.1m   lightgbm catenc     8.285    0.7155633
1m     lightgbm OHE       14.855    0.7659496
1m     lightgbm catenc    12.595    0.7632631
10m    lightgbm OHE       67.979    0.7748983
10m    lightgbm catenc    49.798    0.7925457
szilard commented 4 years ago

Changed the benchmark to use cat. encoding instead of OHE. (This is in line with the other tools: h2o and catboost use their own categorical encodings, while xgboost falls back to OHE.)

Commit here: https://github.com/szilard/GBM-perf/commit/e774fcceec44f16c58003ec09a2e56cc832dd113

Last README before the commit:

https://github.com/szilard/GBM-perf/blob/c5d3bf7b8433ea62c09548dbd810998336097a8f/README.md

Tables with the results for both, for easy comparison:

(screenshots of the two README result tables, taken 2020-09-13)