microsoft / LightGBM

A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
https://lightgbm.readthedocs.io/en/latest/
MIT License

[R-package] learning-to-rank tests are broken on Solaris 10 and 32-bit Windows #3513

Open jameslamb opened 3 years ago

jameslamb commented 3 years ago

I ran the R tests on Solaris using R Hub tonight, and found that they're broken in one of the two Solaris environments that service supports:

* Oracle Solaris 10, x86, 32 bit, R-release, Oracle Developer Studio 12.6
* Oracle Solaris 10, x86, 32 bit, R-release

I don't THINK this will block our next attempt at CRAN in #3484. It looks like CRAN's Solaris environment is the "Oracle Developer Studio" one, based on https://cran.r-project.org/web/checks/check_flavors.html#r-patched-solaris-x86.

(screenshot: Screen Shot 2020-10-31 at 10 18 19 PM)

The tests that are failing are both learning-to-rank tests checking the values of the NDCG at different positions, so I'm guessing the failures are related to the changes in #3425.
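For context, NDCG@k is computed roughly as below. This is a minimal Python sketch of the standard definition, using the `(2^rel - 1)` gain common in LambdaMART-style rankers; it is an illustration, not LightGBM's actual C++ implementation.

```python
import math

def dcg_at_k(relevance, k):
    """Discounted cumulative gain over the top-k results of a ranking."""
    return sum(
        (2 ** rel - 1) / math.log2(i + 2)  # position i is discounted by log2(i + 2)
        for i, rel in enumerate(relevance[:k])
    )

def ndcg_at_k(relevance, k):
    """DCG normalized by the DCG of the ideal (relevance-sorted) ranking."""
    ideal = dcg_at_k(sorted(relevance, reverse=True), k)
    return dcg_at_k(relevance, k) / ideal if ideal > 0 else 0.0

# relevance labels for one query, ordered by the model's predicted ranking
print(ndcg_at_k([3, 2, 3, 0, 1], 3))
```

The tests check these scores at several cutoffs (ndcg@1, ndcg@2, ndcg@3), so small numeric differences in the underlying gains can shift the reported values.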

logs from the failing tests

[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[1] "[1]:  valid's ndcg@1:0.675+0.0829156  valid's ndcg@2:0.655657+0.0625302  valid's ndcg@3:0.648464+0.0613335"
[1] "[2]:  valid's ndcg@1:0.725+0.108972  valid's ndcg@2:0.666972+0.131409  valid's ndcg@3:0.657124+0.130448"
[1] "[3]:  valid's ndcg@1:0.65+0.111803  valid's ndcg@2:0.630657+0.125965  valid's ndcg@3:0.646928+0.15518"
[1] "[4]:  valid's ndcg@1:0.725+0.0829156  valid's ndcg@2:0.647629+0.120353  valid's ndcg@3:0.654052+0.129471"
[1] "[5]:  valid's ndcg@1:0.75+0.165831  valid's ndcg@2:0.662958+0.142544  valid's ndcg@3:0.648186+0.130213"
[1] "[6]:  valid's ndcg@1:0.725+0.129904  valid's ndcg@2:0.647629+0.108136  valid's ndcg@3:0.648186+0.106655"
[1] "[7]:  valid's ndcg@1:0.75+0.165831  valid's ndcg@2:0.653287+0.14255  valid's ndcg@3:0.64665+0.119557"
[1] "[8]:  valid's ndcg@1:0.725+0.129904  valid's ndcg@2:0.637958+0.123045  valid's ndcg@3:0.64665+0.119557"
[1] "[9]:  valid's ndcg@1:0.75+0.15  valid's ndcg@2:0.711315+0.101634  valid's ndcg@3:0.702794+0.100252"
[1] "[10]:  valid's ndcg@1:0.75+0.165831  valid's ndcg@2:0.682301+0.117876  valid's ndcg@3:0.66299+0.121243"
── FAILURE (test_learning_to_rank.R:125:5): learning-to-rank with lgb.cv() works
all(...) is not TRUE

`actual`:   FALSE
`expected`: TRUE 

── FAILURE (test_learning_to_rank.R:131:5): learning-to-rank with lgb.cv() works
all(...) is not TRUE

`actual`:   FALSE
`expected`: TRUE 

The test this comes from: https://github.com/microsoft/LightGBM/blob/13194d2bea49bebec0d892f9bd388864f2106c94/R-package/tests/testthat/test_learning_to_rank.R#L131
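One plausible mechanism (my assumption, not confirmed): on the 32-bit build, floating-point arithmetic can round differently than on the 64-bit platforms where the expected values were recorded, so a strict `all(...)` comparison against hard-coded doubles can fail even when the ranking itself is fine. A stdlib-only Python sketch of the effect (`to_float32` is a helper invented for illustration):

```python
import struct

def to_float32(x):
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack("f", struct.pack("f", x))[0]

expected = 0.675              # a hard-coded double, like the NDCG values in the test
computed = to_float32(0.675)  # the "same" value after one 32-bit rounding step

print(computed == expected)             # exact comparison: False
print(abs(computed - expected) < 1e-6)  # tolerance-based comparison: True
```

If that is the cause, comparing with an explicit tolerance instead of exact equality would make the test robust across platforms.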

full test results

```text
R version 4.0.3 (2020-10-10) -- "Bunny-Wunnies Freak Out"
Copyright (C) 2020 The R Foundation for Statistical Computing
Platform: i386-pc-solaris2.10 (32-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> library(testthat)
> library(lightgbm)
Loading required package: R6
>
> test_check(
+     package = "lightgbm"
+     , stop_on_failure = TRUE
+     , stop_on_warning = FALSE
+ )
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001250 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.314167 test's binary_logloss:0.317777"
[1] "[2]: train's binary_logloss:0.187654 test's binary_logloss:0.187981"
[1] "[3]: train's binary_logloss:0.109209 test's binary_logloss:0.109949"
[1] "[4]: train's binary_logloss:0.0755423 test's binary_logloss:0.0772008"
[1] "[5]: train's binary_logloss:0.0528045 test's binary_logloss:0.0533291"
[1] "[6]: train's binary_logloss:0.0395797 test's binary_logloss:0.0380824"
[1] "[7]: train's binary_logloss:0.0287269 test's binary_logloss:0.0255364"
[1] "[8]: train's binary_logloss:0.0224443 test's binary_logloss:0.0195616"
[1] "[9]: train's binary_logloss:0.016621 test's binary_logloss:0.017834"
[1] "[10]: train's binary_logloss:0.0112055 test's binary_logloss:0.0125538"
[1] "[11]: train's binary_logloss:0.00759638 test's binary_logloss:0.00842372"
[1] "[12]: train's binary_logloss:0.0054887 test's binary_logloss:0.00631812"
[1] "[13]: train's binary_logloss:0.00399548 test's binary_logloss:0.00454944"
[1] "[14]: train's binary_logloss:0.00283135 test's binary_logloss:0.00323724"
[1] "[15]: train's binary_logloss:0.00215378 test's binary_logloss:0.00256697"
[1] "[16]: train's binary_logloss:0.00156723 test's binary_logloss:0.00181753"
[1] "[17]: train's binary_logloss:0.00120077 test's binary_logloss:0.00144437"
[1] "[18]: train's binary_logloss:0.000934889 test's binary_logloss:0.00111807"
[1] "[19]: train's binary_logloss:0.000719878 test's binary_logloss:0.000878304"
[1] "[20]: train's binary_logloss:0.000558692 test's binary_logloss:0.000712272"
[1] "[21]: train's binary_logloss:0.000400916 test's binary_logloss:0.000492223"
[1] "[22]: train's binary_logloss:0.000315938 test's binary_logloss:0.000402804"
[1] "[23]: train's binary_logloss:0.000238113 test's binary_logloss:0.000288682"
[1] "[24]: train's binary_logloss:0.000190248 test's binary_logloss:0.000237835"
[1] "[25]: train's binary_logloss:0.000148322 test's binary_logloss:0.000174674"
[1] "[26]: train's binary_logloss:0.000120581 test's binary_logloss:0.000139513"
[1] "[27]: train's binary_logloss:0.000102756 test's binary_logloss:0.000118804"
[1] "[28]: train's binary_logloss:7.83011e-05 test's binary_logloss:8.40978e-05"
[1] "[29]: train's binary_logloss:6.29191e-05 test's binary_logloss:6.8803e-05"
[1] "[30]: train's binary_logloss:5.28039e-05 test's binary_logloss:5.89864e-05"
[1] "[31]: train's binary_logloss:4.51561e-05 test's binary_logloss:4.91874e-05"
[1] "[32]: train's binary_logloss:3.89402e-05 test's binary_logloss:4.13015e-05"
[1] "[33]: train's binary_logloss:3.24434e-05 test's binary_logloss:3.52605e-05"
[1] "[34]: train's binary_logloss:2.65255e-05 test's binary_logloss:2.86338e-05"
[1] "[35]: train's binary_logloss:2.19277e-05 test's binary_logloss:2.3937e-05"
[1] "[36]: train's binary_logloss:1.86469e-05 test's binary_logloss:2.05375e-05"
[1] "[37]: train's binary_logloss:1.49881e-05 test's binary_logloss:1.53852e-05"
[1] "[38]: train's binary_logloss:1.2103e-05 test's binary_logloss:1.20722e-05"
[1] "[39]: train's binary_logloss:1.02027e-05 test's binary_logloss:1.0578e-05"
[1] "[40]: train's binary_logloss:8.91561e-06 test's binary_logloss:8.8323e-06"
[1] "[41]: train's binary_logloss:7.4855e-06 test's binary_logloss:7.58441e-06"
[1] "[42]: train's binary_logloss:6.21179e-06 test's binary_logloss:6.14299e-06"
[1] "[43]: train's binary_logloss:5.06413e-06 test's binary_logloss:5.13576e-06"
[1] "[44]: train's binary_logloss:4.2029e-06 test's binary_logloss:4.53605e-06"
[1] "[45]: train's binary_logloss:3.47042e-06 test's binary_logloss:3.73234e-06"
[1] "[46]: train's binary_logloss:2.78181e-06 test's binary_logloss:3.02556e-06"
[1] "[47]: train's binary_logloss:2.19819e-06 test's binary_logloss:2.3666e-06"
[1] "[48]: train's binary_logloss:1.80519e-06 test's binary_logloss:1.92932e-06"
[1] "[49]: train's binary_logloss:1.50192e-06 test's binary_logloss:1.64658e-06"
[1] "[50]: train's binary_logloss:1.20212e-06 test's binary_logloss:1.33316e-06"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001232 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_error:0.0222632"
[1] "[2]: train's binary_error:0.0222632"
[1] "[3]: train's binary_error:0.0222632"
[1] "[4]: train's binary_error:0.0109013"
[1] "[5]: train's binary_error:0.0141256"
[1] "[6]: train's binary_error:0.0141256"
[1] "[7]: train's binary_error:0.0141256"
[1] "[8]: train's binary_error:0.0141256"
[1] "[9]: train's binary_error:0.00598802"
[1] "[10]: train's binary_error:0.00598802"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000023 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 98
[LightGBM] [Info] Number of data points in the train set: 150, number of used features: 4
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: train's multi_error:0.0466667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[11]: train's multi_error:0.0333333"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[12]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[13]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[14]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[15]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[16]: train's multi_error:0.0333333"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[17]: train's multi_error:0.0266667"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[18]: train's multi_error:0.0333333"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[19]: train's multi_error:0.0333333"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[20]: train's multi_error:0.0333333"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001289 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_error:0.0304007 train's auc:0.972508 train's binary_logloss:0.198597"
[1] "[2]: train's binary_error:0.0222632 train's auc:0.995075 train's binary_logloss:0.111535"
[1] "[3]: train's binary_error:0.00598802 train's auc:0.997845 train's binary_logloss:0.0480659"
[1] "[4]: train's binary_error:0.00122831 train's auc:0.998433 train's binary_logloss:0.0279151"
[1] "[5]: train's binary_error:0.00122831 train's auc:0.999354 train's binary_logloss:0.0190479"
[1] "[6]: train's binary_error:0.00537387 train's auc:0.98965 train's binary_logloss:0.167059"
[1] "[7]: train's binary_error:0 train's auc:1 train's binary_logloss:0.0128449"
[1] "[8]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00774702"
[1] "[9]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00472108"
[1] "[10]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00208929"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001259 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_error:0.0222632"
[1] "[2]: train's binary_error:0.0222632"
[1] "[3]: train's binary_error:0.0222632"
[1] "[4]: train's binary_error:0.0109013"
[1] "[5]: train's binary_error:0.0141256"
[1] "[6]: train's binary_error:0.0141256"
[1] "[7]: train's binary_error:0.0141256"
[1] "[8]: train's binary_error:0.0141256"
[1] "[9]: train's binary_error:0.00598802"
[1] "[10]: train's binary_error:0.00598802"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001568 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[1] "[1]: train's l2:0.206337"
[1] "[2]: train's l2:0.171229"
[1] "[3]: train's l2:0.140871"
[1] "[4]: train's l2:0.116282"
[1] "[5]: train's l2:0.096364"
[1] "[6]: train's l2:0.0802308"
[1] "[7]: train's l2:0.0675595"
[1] "[8]: train's l2:0.0567154"
[1] "[9]: train's l2:0.0482086"
[1] "[10]: train's l2:0.0402694"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001231 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_error:0.0222632 train's auc:0.981784 valid1's binary_error:0.0222632 valid1's auc:0.981784 valid2's binary_error:0.0222632 valid2's auc:0.981784"
[1] "[2]: train's binary_error:0.0222632 train's auc:0.981784 valid1's binary_error:0.0222632 valid1's auc:0.981784 valid2's binary_error:0.0222632 valid2's auc:0.981784"
[1] "[3]: train's binary_error:0.0222632 train's auc:0.992951 valid1's binary_error:0.0222632 valid1's auc:0.992951 valid2's binary_error:0.0222632 valid2's auc:0.992951"
[1] "[4]: train's binary_error:0.0109013 train's auc:0.992951 valid1's binary_error:0.0109013 valid1's auc:0.992951 valid2's binary_error:0.0109013 valid2's auc:0.992951"
[1] "[5]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
[1] "[6]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
[1] "[7]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
[1] "[8]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
[1] "[9]: train's binary_error:0.00598802 train's auc:0.993175 valid1's binary_error:0.00598802 valid1's auc:0.993175 valid2's binary_error:0.00598802 valid2's auc:0.993175"
[1] "[10]: train's binary_error:0.00598802 train's auc:0.998242 valid1's binary_error:0.00598802 valid1's auc:0.998242 valid2's binary_error:0.00598802 valid2's auc:0.998242"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001213 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.179606"
[1] "[2]: train's binary_logloss:0.0975448"
[1] "[3]: train's binary_logloss:0.0384292"
[1] "[4]: train's binary_logloss:0.0582241"
[1] "[5]: train's binary_logloss:0.0595215"
[1] "[6]: train's binary_logloss:0.0609174"
[1] "[7]: train's binary_logloss:0.317567"
[1] "[8]: train's binary_logloss:0.0104223"
[1] "[9]: train's binary_logloss:0.00497498"
[1] "[10]: train's binary_logloss:0.00283557"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001231 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.179606"
[1] "[2]: train's binary_logloss:0.0975448"
[1] "[3]: train's binary_logloss:0.0384292"
[1] "[4]: train's binary_logloss:0.0582241"
[1] "[5]: train's binary_logloss:0.0595215"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001244 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[1] "[6]: train's binary_logloss:0.0609174"
[1] "[7]: train's binary_logloss:0.317567"
[1] "[8]: train's binary_logloss:0.0104223"
[1] "[9]: train's binary_logloss:0.00497498"
[1] "[10]: train's binary_logloss:0.00283557"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001075 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5211, number of used features: 116
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001065 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5211, number of used features: 116
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001059 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001067 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001069 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
[LightGBM] [Info] Start training from score 0.483976
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Start training from score 0.480906
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Start training from score 0.481574
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Start training from score 0.482342
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Start training from score 0.481766
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[1]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306994+0.00061397"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[2]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[3]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[4]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[5]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[6]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[7]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[8]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[9]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[10]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306986+0.000613967"
[LightGBM] [Info] Number of positive: 198, number of negative: 202
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000017 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] Number of positive: 196, number of negative: 204
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000015 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] Number of positive: 207, number of negative: 193
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000015 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] Number of positive: 207, number of negative: 193
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000015 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] Number of positive: 192, number of negative: 208
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000015 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 167
[LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.495000 -> initscore=-0.020001
[LightGBM] [Info] Start training from score -0.020001
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.490000 -> initscore=-0.040005
[LightGBM] [Info] Start training from score -0.040005
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.517500 -> initscore=0.070029
[LightGBM] [Info] Start training from score 0.070029
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.517500 -> initscore=0.070029
[LightGBM] [Info] Start training from score 0.070029
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.480000 -> initscore=-0.080043
[LightGBM] [Info] Start training from score -0.080043
[1] "[1]: valid's auc:0.476662+0.0622898 valid's binary_error:0.5+0.0593296"
[1] "[2]: valid's auc:0.477476+0.0393392 valid's binary_error:0.554+0.0372022"
[1] "[3]: valid's auc:0.456927+0.042898 valid's binary_error:0.526+0.0361109"
[1] "[4]: valid's auc:0.419531+0.0344972 valid's binary_error:0.54+0.0289828"
[1] "[5]: valid's auc:0.459109+0.0862237 valid's binary_error:0.52+0.0489898"
[1] "[6]: valid's auc:0.460522+0.0911246 valid's binary_error:0.528+0.0231517"
[1] "[7]: valid's auc:0.456328+0.0540445 valid's binary_error:0.532+0.0386782"
[1] "[8]: valid's auc:0.463653+0.0660907 valid's binary_error:0.514+0.0488262"
[1] "[9]: valid's auc:0.443017+0.0549965 valid's binary_error:0.55+0.0303315"
[1] "[10]: valid's auc:0.477483+0.0763283 valid's binary_error:0.488+0.0549181"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001221 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[1]: train's binary_error:0.00307078 train's auc:0.99996 train's binary_logloss:0.132074"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[2]: train's binary_error:0.00153539 train's auc:1 train's binary_logloss:0.0444372"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[3]: train's binary_error:0 train's auc:1 train's binary_logloss:0.0159408"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[4]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00590065"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[5]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00230167"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[6]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00084253"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[7]: train's binary_error:0 train's auc:1 train's binary_logloss:0.000309409"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[8]: train's binary_error:0 train's auc:1 train's binary_logloss:0.000113754"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[9]: train's binary_error:0 train's auc:1 train's binary_logloss:4.1838e-05"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[10]: train's binary_error:0 train's auc:1 train's binary_logloss:1.539e-05"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Number of positive: 35110, number of negative: 34890
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000358 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 12
[LightGBM] [Info] Number of data points in the train set: 70000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501571 -> initscore=0.006286
[LightGBM] [Info] Start training from score 0.006286
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000029 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000031 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000028 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000031 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000031 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000030 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_error:0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001219 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's auc:0.987036"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's auc:0.987036"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's auc:0.998699"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[4]: valid1's auc:0.998699"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's auc:0.998699"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[6]: valid1's auc:0.999667"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[7]: valid1's auc:0.999806"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's auc:0.999978"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[9]: valid1's auc:0.999997"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[10]: valid1's auc:0.999997"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001229 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[4]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0.016139"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[6]: valid1's binary_error:0.016139"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000031 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's rmse:55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's rmse:59.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's rmse:63.55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's rmse:67.195"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's rmse:70.4755"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's rmse:73.428"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's rmse:76.0852"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's rmse:78.4766"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's rmse:80.629"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's rmse:82.5661"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000031 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's rmse:55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's rmse:59.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's rmse:63.55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's rmse:67.195"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's rmse:70.4755"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's rmse:73.428"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000013 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's constant_metric:0.2 valid1's increasing_metric:0.1"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's constant_metric:0.2 valid1's increasing_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's constant_metric:0.2 valid1's increasing_metric:0.3"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's constant_metric:0.2 valid1's increasing_metric:0.4"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's constant_metric:0.2 valid1's increasing_metric:0.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's constant_metric:0.2 valid1's increasing_metric:0.6"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's constant_metric:0.2 valid1's increasing_metric:0.7"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's constant_metric:0.2 valid1's increasing_metric:0.8"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's constant_metric:0.2 valid1's increasing_metric:0.9"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's constant_metric:0.2 valid1's increasing_metric:1"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000015 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's increasing_metric:1.1 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's increasing_metric:1.2 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's increasing_metric:1.3 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's increasing_metric:1.4 valid1's constant_metric:0.2"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000014 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's increasing_metric:1.5 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's increasing_metric:1.6 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's increasing_metric:1.7 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's increasing_metric:1.8 valid1's constant_metric:0.2"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000018 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's increasing_metric:1.9 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's increasing_metric:2 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's increasing_metric:2.1 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's increasing_metric:2.2 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's increasing_metric:2.3 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's increasing_metric:2.4 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's increasing_metric:2.5 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's increasing_metric:2.6 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's increasing_metric:2.7 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's increasing_metric:2.8 valid1's constant_metric:0.2"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000014 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
[LightGBM] [Info] Start training from score 0.045019
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's rmse:1.10501 valid1's l2:1.22105 valid1's increasing_metric:2.9 valid1's rmse:1.10501 valid1's l2:1.22105 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's rmse:1.10335 valid1's l2:1.21738 valid1's increasing_metric:3 valid1's rmse:1.10335 valid1's l2:1.21738 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's rmse:1.10199 valid1's l2:1.21438 valid1's increasing_metric:3.1 valid1's rmse:1.10199 valid1's l2:1.21438 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's rmse:1.10198 valid1's l2:1.21436 valid1's increasing_metric:3.2 valid1's rmse:1.10198 valid1's l2:1.21436 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's rmse:1.10128 valid1's l2:1.21282 valid1's increasing_metric:3.3 valid1's rmse:1.10128 valid1's l2:1.21282 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's rmse:1.10101 valid1's l2:1.21222 valid1's increasing_metric:3.4 valid1's rmse:1.10101 valid1's l2:1.21222 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's rmse:1.10065 valid1's l2:1.21143 valid1's increasing_metric:3.5 valid1's rmse:1.10065 valid1's l2:1.21143 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's rmse:1.10011 valid1's l2:1.21025 valid1's increasing_metric:3.6 valid1's rmse:1.10011 valid1's l2:1.21025 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's rmse:1.09999 valid1's l2:1.20997 valid1's increasing_metric:3.7 valid1's rmse:1.09999 valid1's l2:1.20997 valid1's constant_metric:0.2"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's rmse:1.09954 valid1's l2:1.20898 valid1's increasing_metric:3.8 valid1's rmse:1.09954 valid1's l2:1.20898 valid1's constant_metric:0.2"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000012 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000014 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000014 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000013 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
[LightGBM] [Info] Start training from score 0.200671
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's binary_logloss:0.693255"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's binary_logloss:0.691495"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's binary_logloss:0.69009"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's binary_logloss:0.688968"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's binary_logloss:0.688534"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's binary_logloss:0.689883"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's binary_logloss:0.689641"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's binary_logloss:0.689532"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's binary_logloss:0.691066"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's binary_logloss:0.690653"
[LightGBM] [Info] Number of positive: 66, number of negative: 54
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000015 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42 [LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1 [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671 [LightGBM] [Info] Start training from score 0.200671 [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653" [LightGBM] [Info] Number of positive: 66, number of negative: 54 [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000014 seconds. You can set `force_col_wise=true` to remove the overhead. 
[LightGBM] [Info] Total Bins 42 [LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1 [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671 [LightGBM] [Info] Start training from score 0.200671 [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[1]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[2]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[3]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[4]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[5]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[6]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[7]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[8]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[9]: valid1's constant_metric:0.2" [LightGBM] [Warning] No further splits with positive gain, best gain: -Inf [1] "[10]: valid1's constant_metric:0.2" [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000029 seconds. You can set `force_col_wise=true` to remove the overhead. 
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's mape:1.1 valid1's rmse:55 valid1's l1:55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's mape:1.19 valid1's rmse:59.5 valid1's l1:59.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's mape:1.271 valid1's rmse:63.55 valid1's l1:63.55"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's mape:1.3439 valid1's rmse:67.195 valid1's l1:67.195"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's mape:1.40951 valid1's rmse:70.4755 valid1's l1:70.4755"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's mape:1.46856 valid1's rmse:73.428 valid1's l1:73.428"
── Skip (test_basic.R:1171:3): lgb.train() supports non-ASCII feature names ────
Reason: UTF-8 feature names are not fully supported in the R package
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000029 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000030 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000032 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000032 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000032 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: something-random-we-would-not-hardcode's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: something-random-we-would-not-hardcode's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: something-random-we-would-not-hardcode's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: something-random-we-would-not-hardcode's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: something-random-we-would-not-hardcode's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: something-random-we-would-not-hardcode's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: something-random-we-would-not-hardcode's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: something-random-we-would-not-hardcode's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: something-random-we-would-not-hardcode's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: something-random-we-would-not-hardcode's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000031 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 3
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's rmse:25"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's rmse:12.5"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: train's rmse:6.25"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: train's rmse:3.125"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: train's rmse:1.5625"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: train's rmse:0.78125"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: train's rmse:0.390625"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: train's rmse:0.195312"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: train's rmse:0.0976562"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: train's rmse:0.0488281"
[LightGBM] [Info] Number of positive: 500, number of negative: 500
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000029 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 255
[LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[1] "[1]: something-random-we-would-not-hardcode's auc:0.58136 valid1's auc:0.429487"
[1] "[2]: something-random-we-would-not-hardcode's auc:0.599008 valid1's auc:0.266026"
[1] "[3]: something-random-we-would-not-hardcode's auc:0.6328 valid1's auc:0.349359"
[1] "[4]: something-random-we-would-not-hardcode's auc:0.655136 valid1's auc:0.394231"
[1] "[5]: something-random-we-would-not-hardcode's auc:0.655408 valid1's auc:0.419872"
[1] "[6]: something-random-we-would-not-hardcode's auc:0.678784 valid1's auc:0.336538"
[1] "[7]: something-random-we-would-not-hardcode's auc:0.682176 valid1's auc:0.416667"
[1] "[8]: something-random-we-would-not-hardcode's auc:0.698032 valid1's auc:0.394231"
[1] "[9]: something-random-we-would-not-hardcode's auc:0.712672 valid1's auc:0.445513"
[1] "[10]: something-random-we-would-not-hardcode's auc:0.723024 valid1's auc:0.471154"
[LightGBM] [Info] Number of positive: 50, number of negative: 39
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000011 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 89, number of used features: 1
[LightGBM] [Info] Number of positive: 49, number of negative: 41
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000008 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 90, number of used features: 1
[LightGBM] [Info] Number of positive: 53, number of negative: 38
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000008 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 91, number of used features: 1
[LightGBM] [Info] Number of positive: 46, number of negative: 44
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000008 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 90, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.561798 -> initscore=0.248461
[LightGBM] [Info] Start training from score 0.248461
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.544444 -> initscore=0.178248
[LightGBM] [Info] Start training from score 0.178248
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.582418 -> initscore=0.332706
[LightGBM] [Info] Start training from score 0.332706
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.511111 -> initscore=0.044452
[LightGBM] [Info] Start training from score 0.044452
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.701123+0.0155541"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.70447+0.0152787"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.706572+0.0162531"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.709214+0.0165672"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.710652+0.0172198"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.713091+0.0176604"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.714842+0.0184267"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.714719+0.0178927"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.717162+0.0181993"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.716577+0.0180201"
[LightGBM] [Info] Number of positive: 45, number of negative: 35
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000011 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] Number of positive: 40, number of negative: 40
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000008 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] Number of positive: 47, number of negative: 33
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000008 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 42
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.562500 -> initscore=0.251314
[LightGBM] [Info] Start training from score 0.251314
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.587500 -> initscore=0.353640
[LightGBM] [Info] Start training from score 0.353640
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid's constant_metric:0.2+0"
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000013 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000012 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000012 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000010 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000010 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] Start training from score 0.024388
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Start training from score 0.005573
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Start training from score 0.039723
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Start training from score 0.029700
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Start training from score 0.125712
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid's increasing_metric:4.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid's increasing_metric:4.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid's increasing_metric:5.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid's increasing_metric:5.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[5]: valid's increasing_metric:6.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[6]: valid's increasing_metric:6.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[7]: valid's increasing_metric:7.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[8]: valid's increasing_metric:7.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[9]: valid's increasing_metric:8.1+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[10]: valid's increasing_metric:8.6+0.141421 valid's constant_metric:0.2+0"
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000013 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000009 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000008 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000008 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Warning] Unknown parameter: c4154e8>
[LightGBM] [Warning] Unknown parameter: valids
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000009 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 35
[LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
[LightGBM] [Info] Start training from score 0.024388
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Start training from score 0.005573
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Start training from score 0.039723
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Start training from score 0.029700
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Start training from score 0.125712
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: valid's constant_metric:0.2+0 valid's increasing_metric:9.1+0.141421"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: valid's constant_metric:0.2+0 valid's increasing_metric:9.6+0.141421"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[3]: valid's constant_metric:0.2+0 valid's increasing_metric:10.1+0.141421"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[4]: valid's constant_metric:0.2+0 valid's increasing_metric:10.6+0.141421"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001272 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001295 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001334 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001281 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001278 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[1]: train's l2:0.24804"
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[1] "[2]: train's l2:0.246711"
[LightGBM] [Warning] Using self-defined objective function
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001209 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Warning] Using self-defined objective function
[1] "[1]: train's auc:0.994987 train's error:0.00598802 eval's auc:0.995243 eval's error:0.00558659"
[1] "[2]: train's auc:0.99512 train's error:0.00307078 eval's auc:0.995237 eval's error:0.00248293"
[1] "[3]: train's auc:0.99009 train's error:0.00598802 eval's auc:0.98843 eval's error:0.00558659"
[1] "[4]: train's auc:0.999889 train's error:0.00168893 eval's auc:1 eval's error:0.000620732"
[1] "[5]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[6]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[7]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[8]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[9]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[1] "[10]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
[LightGBM] [Warning] Using self-defined objective function
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001273 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Warning] Using self-defined objective function
[1] "[1]: train's error:0.00598802 eval's error:0.00558659"
[1] "[2]: train's error:0.00307078 eval's error:0.00248293"
[1] "[3]: train's error:0.00598802 eval's error:0.00558659"
[1] "[4]: train's error:0.00168893 eval's error:0.000620732"
[LightGBM] [Info] Saving data to binary file /export/home/X7hzECR/Rtemp/Rtmpd5KsmG/working_dir/Rtmpi3f75n/lgb.Dataset_7186408c4917
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000285 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 32
[LightGBM] [Info] Number of data points in the train set: 6000, number of used features: 16
── FAILURE (test_learning_to_rank.R:49:5): learning-to-rank with lgb.train() wor
abs(eval_results[[2L]][["value"]] - 0.745986) < TOLERANCE is not TRUE
`actual`: FALSE
`expected`: TRUE
── FAILURE (test_learning_to_rank.R:50:5): learning-to-rank with lgb.train() wor
abs(eval_results[[3L]][["value"]] - 0.7351959) < TOLERANCE is not TRUE
`actual`: FALSE
`expected`: TRUE
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000217 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000217 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000214 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000216 seconds.
[LightGBM] [Info] Total Bins 40
[LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
[1] "[1]: valid's ndcg@1:0.675+0.0829156 valid's ndcg@2:0.655657+0.0625302 valid's ndcg@3:0.648464+0.0613335"
[1] "[2]: valid's ndcg@1:0.725+0.108972 valid's ndcg@2:0.666972+0.131409 valid's ndcg@3:0.657124+0.130448"
[1] "[3]: valid's ndcg@1:0.65+0.111803 valid's ndcg@2:0.630657+0.125965 valid's ndcg@3:0.646928+0.15518"
[1] "[4]: valid's ndcg@1:0.725+0.0829156 valid's ndcg@2:0.647629+0.120353 valid's ndcg@3:0.654052+0.129471"
[1] "[5]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.662958+0.142544 valid's ndcg@3:0.648186+0.130213"
[1] "[6]: valid's ndcg@1:0.725+0.129904 valid's ndcg@2:0.647629+0.108136 valid's ndcg@3:0.648186+0.106655"
[1] "[7]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.653287+0.14255 valid's ndcg@3:0.64665+0.119557"
[1] "[8]: valid's ndcg@1:0.725+0.129904 valid's ndcg@2:0.637958+0.123045 valid's ndcg@3:0.64665+0.119557"
[1] "[9]: valid's ndcg@1:0.75+0.15 valid's ndcg@2:0.711315+0.101634 valid's ndcg@3:0.702794+0.100252"
[1] "[10]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.682301+0.117876 valid's ndcg@3:0.66299+0.121243"
── FAILURE (test_learning_to_rank.R:125:5): learning-to-rank with lgb.cv() works
all(...) is not TRUE
`actual`: FALSE
`expected`: TRUE
── FAILURE (test_learning_to_rank.R:131:5): learning-to-rank with lgb.cv() works
all(...) is not TRUE
`actual`: FALSE
`expected`: TRUE
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001347 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[1]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[2]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[3]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[4]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[5]: test's l2:6.44165e-17"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001320 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] Start training from score 0.482113
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[1] "[1]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[2]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[3]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[4]: test's l2:6.44165e-17"
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
[1] "[5]: test's l2:6.44165e-17"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001212 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001201 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001202 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001215 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001208 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001249 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000344 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 182
[LightGBM] [Info] Number of data points in the train set: 1611, number of used features: 91
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001219 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[1] "[3]: train's binary_logloss:0.0480659"
[1] "[4]: train's binary_logloss:0.0279151"
[1] "[5]: train's binary_logloss:0.0190479"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001217 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001214 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[1] "[3]: train's binary_logloss:0.0480659"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001214 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001221 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 214
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[1] "[1]: train's binary_logloss:0.198597"
[1] "[2]: train's binary_logloss:0.111535"
── Skip (test_lgb.Booster.R:445:5): Saving a model with unknown importance type
Reason: Skipping this test because it causes issues for valgrind
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000019 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000019 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000017 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000014 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000014 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001242 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000021 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 77
[LightGBM] [Info] Number of data points in the train set: 90, number of used features: 4
[LightGBM] [Info] Start training from score -1.504077
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -0.810930
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001242 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
[LightGBM] [Info] Start training from score -0.071580
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Info] Number of positive: 3140, number of negative: 3373
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001249 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232
[LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000020 seconds. You can set `force_col_wise=true` to remove the overhead.
[LightGBM] [Info] Total Bins 77
[LightGBM] [Info] Number of data points in the train set: 90, number of used features: 4
[LightGBM] [Info] Start training from score -1.504077
[LightGBM] [Info] Start training from score -1.098612
[LightGBM] [Info] Start training from score -0.810930
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: -Inf
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001313 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`.
[LightGBM] [Info] Total Bins 232 [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116 [LightGBM] [Info] Start training from score 0.482113 [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000 [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001317 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`. [LightGBM] [Info] Total Bins 232 [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116 [LightGBM] [Info] Start training from score 0.482113 [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000 [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001333 seconds. You can set `force_row_wise=true` to remove the overhead. And if memory is not enough, you can set `force_col_wise=true`. [LightGBM] [Info] Total Bins 232 [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116 [LightGBM] [Info] Start training from score 0.482113 [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000 ── Skip (test_utils.R:70:5): lgb.last_error() correctly returns errors from the Reason: Skipping this test because it causes valgrind to think there is a memory leak, and needs to be rethought ── Skipped tests ────────────────────────────────────────────────────────────── ● Skipping this test because it causes issues for valgrind (1) ● Skipping this test because it causes valgrind to think there is a memory leak, and needs to be rethought (1) ● UTF-8 feature names are not fully supported in the R package (1) ══ testthat results ═══════════════════════════════════════════════════════════ FAILURE (test_learning_to_rank.R:49:5): learning-to-rank with lgb.train() works as expected FAILURE (test_learning_to_rank.R:50:5): learning-to-rank with lgb.train() works 
as expected FAILURE (test_learning_to_rank.R:125:5): learning-to-rank with lgb.cv() works as expected FAILURE (test_learning_to_rank.R:131:5): learning-to-rank with lgb.cv() works as expected [ FAIL 4 | WARN 0 | SKIP 3 | PASS 597 ] Error: Test failures Execution halted ```
R CMD CHECK results

```text
* using log directory ‘/export/home/X7hzECR/lightgbm.Rcheck’
* using R version 4.0.3 (2020-10-10)
* using platform: i386-pc-solaris2.10 (32-bit)
* using session charset: UTF-8
* using option ‘--as-cran’
* checking for file ‘lightgbm/DESCRIPTION’ ... OK
* checking extension type ... Package
* this is package ‘lightgbm’ version ‘3.0.0.99’
* package encoding: UTF-8
* checking CRAN incoming feasibility ... NOTE
Maintainer: ‘Guolin Ke ’
New submission
Package was archived on CRAN
Possibly mis-spelled words in DESCRIPTION:
  Guolin (26:52)
  Ke (26:48)
  al (26:62)
  et (26:59)
CRAN repository db overrides:
  X-CRAN-Comment: Archived on 2020-10-02 for corrupting R's memory.
    See the valgrind report of out-of-bounds write.
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking for sufficient/correct file permissions ... OK
* checking whether package ‘lightgbm’ can be installed ... OK
* checking installed package size ... OK
* checking package directory ... OK
* checking for future file timestamps ... OK
* checking DESCRIPTION meta-information ... OK
* checking top-level files ... WARNING
Output from running autoreconf:
/opt/csw/share/aclocal/gtk.m4:7: warning: underquoted definition of AM_PATH_GTK
/opt/csw/share/aclocal/gtk.m4:7: run info Automake 'Extending aclocal'
/opt/csw/share/aclocal/gtk.m4:7: or see https://www.gnu.org/software/automake/manual/automake.html#Extending-aclocal
A complete check needs the 'checkbashisms' script.
See section ‘Configure and cleanup’ in the ‘Writing R Extensions’ manual.
Files ‘README.md’ or ‘NEWS.md’ cannot be checked without ‘pandoc’ being installed.
* checking for left-over files ... OK
* checking index information ... OK
* checking package subdirectories ... OK
* checking R files for non-ASCII characters ... OK
* checking R files for syntax errors ... OK
* checking whether the package can be loaded ... OK
* checking whether the package can be loaded with stated dependencies ... OK
* checking whether the package can be unloaded cleanly ... OK
* checking whether the namespace can be loaded with stated dependencies ... OK
* checking whether the namespace can be unloaded cleanly ... OK
* checking loading without being on the library search path ... OK
* checking use of S3 registration ... OK
* checking dependencies in R code ... OK
* checking S3 generic/method consistency ... OK
* checking replacement functions ... OK
* checking foreign function calls ... OK
* checking R code for possible problems ... OK
* checking Rd files ... OK
* checking Rd metadata ... OK
* checking Rd line widths ... OK
* checking Rd cross-references ... OK
* checking for missing documentation entries ... OK
* checking for code/documentation mismatches ... OK
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking contents of ‘data’ directory ... OK
* checking data for non-ASCII characters ... OK
* checking data for ASCII and uncompressed saves ... OK
* checking line endings in shell scripts ... OK
* checking line endings in C/C++/Fortran sources/headers ... OK
* checking line endings in Makefiles ... OK
* checking compilation flags in Makevars ... OK
* checking for GNU extensions in Makefiles ... OK
* checking for portable use of $(BLAS_LIBS) and $(LAPACK_LIBS) ... OK
* checking use of PKG_*FLAGS in Makefiles ... OK
* checking use of SHLIB_OPENMP_*FLAGS in Makefiles ... OK
* checking pragmas in C/C++ headers and code ... OK
* checking compilation flags used ... NOTE
Compilation used the following non-portable flag(s):
  ‘-march=pentiumpro’
* checking compiled code ... OK
* checking examples ... OK
* checking examples with --run-donttest ... OK
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ... ERROR
  Running ‘testthat.R’ [11s/12s]
Running the tests in ‘tests/testthat.R’ failed.
Last 13 lines of output:
  ── Skipped tests ──────────────────────────────────────────────────────────────
  ● Skipping this test because it causes issues for valgrind (1)
  ● Skipping this test because it causes valgrind to think there is a memory leak, and needs to be rethought (1)
  ● UTF-8 feature names are not fully supported in the R package (1)
  ══ testthat results ═══════════════════════════════════════════════════════════
  FAILURE (test_learning_to_rank.R:49:5): learning-to-rank with lgb.train() works as expected
  FAILURE (test_learning_to_rank.R:50:5): learning-to-rank with lgb.train() works as expected
  FAILURE (test_learning_to_rank.R:125:5): learning-to-rank with lgb.cv() works as expected
  FAILURE (test_learning_to_rank.R:131:5): learning-to-rank with lgb.cv() works as expected
  [ FAIL 4 | WARN 0 | SKIP 3 | PASS 597 ]
  Error: Test failures
  Execution halted
* checking PDF version of manual ... OK
* checking for non-standard things in the check directory ... OK
* checking for detritus in the temp directory ... OK
* DONE
Status: 1 ERROR, 1 WARNING, 2 NOTEs
```

How to test this

in a shell

```shell
sh build-cran-package.sh
```

in R

```r
result <- rhub::check(
    path = "lightgbm_3.0.0.99.tar.gz"
    , email = "jaylamb20@gmail.com"
    , check_args = c(
        "--as-cran"
    )
    , platform = c(
        "solaris-x86-patched"
        , "solaris-x86-patched-ods"
    )
    , env_vars = c(
        "R_COMPILE_AND_INSTALL_PACKAGES" = "always"
        , "_R_CHECK_FORCE_SUGGESTS_" = "true"
        , "_R_CHECK_CRAN_INCOMING_USE_ASPELL_" = "true"
    )
)
```
StrikerRUS commented 3 years ago

As a quick workaround, with the aim of trying to upload 3.1.0 to CRAN, we can just skip the problematic files/tests/asserts. https://github.com/dmlc/xgboost/blob/cf4f019ed63b94a8ba15d854a44feeaeeec385b0/R-package/tests/testthat/test_basic.R#L14 https://github.com/dmlc/xgboost/blob/cf4f019ed63b94a8ba15d854a44feeaeeec385b0/R-package/tests/testthat/test_basic.R#L239-L240
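A minimal sketch of that workaround, assuming the guard sits at the top of the affected `test_that()` blocks (the test name and skip message here are illustrative; `skip_on_os()` is testthat's built-in helper for this):

```r
library(testthat)

test_that("learning-to-rank NDCG assertions", {
    # skip the brittle exact-value comparisons on the failing platform
    skip_on_os("solaris")
    # ... exact NDCG comparisons would go here ...
    expect_true(TRUE)
})
```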

guolinke commented 3 years ago

@jameslamb , does Solaris generate different NDCG scores than the other platforms? It may be caused by accumulated floating-point summation error, which can differ between platforms. Can we show its values, and try smaller thresholds?
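A hypothetical R illustration of the accumulation effect @guolinke describes (none of this is LightGBM code): floating-point addition is not associative, so sums accumulated in a different order, e.g. by a different compiler or vectorization width, can disagree in the last digits.

```r
a <- (0.1 + 0.2) + 0.3  # summed left to right
b <- 0.1 + (0.2 + 0.3)  # same numbers, different grouping
print(identical(a, b))  # FALSE
print(a - b)            # a difference on the order of 1e-16
```

A metric like NDCG, built from many such sums, can therefore drift by far more than machine epsilon across platforms.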

jameslamb commented 3 years ago

> As a quick workaround, with the aim of trying to upload 3.1.0 to CRAN, we can just skip the problematic files/tests/asserts.

Yes if we can't fix it quickly enough, that's probably fine.

> Can we show its values, and try smaller thresholds?

Yeah I was thinking the same thing. I can change the tests so that the error messages show exact values.
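One possible shape for that change (a sketch, not the actual test file; the values and tolerance are illustrative, loosely based on the failures reported in this thread):

```r
library(testthat)

# illustrative values: an NDCG observed on Solaris vs. the hard-coded expectation
actual <- 0.647
expected <- 0.648

# with an explicit tolerance and info string, a failure prints the exact numbers
expect_equal(
    actual
    , expected
    , tolerance = 0.01
    , info = sprintf("actual: %.6f, expected: %.6f", actual, expected)
)
```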

jameslamb commented 3 years ago

Ok I was able to get better errors.

```text
── FAILURE (test_learning_to_rank.R:144:5): learning-to-rank with lgb.cv() works
eval_results[["ndcg@2"]][["eval"]][[7L]] not equal to ndcg2_values[[7L]].
1/1 mismatches
[1] 0.653 - 0.663 == -0.00967

── FAILURE (test_learning_to_rank.R:160:5): learning-to-rank with lgb.cv() works
eval_results[["ndcg@3"]][["eval"]][[7L]] not equal to ndcg3_values[[7L]].
1/1 mismatches
[1] 0.647 - 0.648 == -0.00154
```

This is from the following test code: https://github.com/microsoft/LightGBM/blob/cf69591b68c68a78ea4064e6941b8270b8a00eef/R-package/tests/testthat/test_learning_to_rank.R#L53

jameslamb commented 3 years ago

I checked Windows builds...this issue doesn't show up on 32-bit Windows. So it's not a problem like "we have some lost precision on 32-bit systems". I think the issue really might be specific to Solaris.

StrikerRUS commented 3 years ago

> this issue doesn't show up on 32-bit Windows.

Seems that it does.

https://github.com/microsoft/LightGBM/blob/c5d9d2436b92a686f76c1d77de2d22e766e0e599/R-package/tests/testthat/test_learning_to_rank.R#L58-L62
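For reference, one way such a platform guard can be written (an assumption, not the linked test code verbatim): R exposes the pointer size, which distinguishes 32-bit builds like the Windows one discussed here.

```r
# TRUE on 32-bit R builds (e.g. 32-bit Windows), FALSE on 64-bit builds
is_32_bit <- .Machine$sizeof.pointer == 4L

if (is_32_bit) {
    testthat::skip("skipping exact NDCG comparisons on 32-bit builds")
}
```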

jameslamb commented 3 years ago

Thanks for updating. I think that comment was written before we discovered that the 32-bit Windows jobs were silently not running in CI: https://github.com/microsoft/LightGBM/pull/3588