matsuken92 / molecular


Fitting Result #3

Open · matsuken92 opened this issue 5 years ago

matsuken92 commented 5 years ago

v001_005 LB: -0.504

start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 1
groups: (84747,)
start fitting...
[500]   cv_agg's l1: 1.92203 + 0.00556322
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 1
groups: (84747,)
start fitting...
[500]   cv_agg's l1: 1.92203 + 0.0055621
[1000]  cv_agg's l1: 1.77303 + 0.00494343
[1500]  cv_agg's l1: 1.70323 + 0.00446648
[2000]  cv_agg's l1: 1.66024 + 0.00490509
[2500]  cv_agg's l1: 1.63135 + 0.00454851
[3000]  cv_agg's l1: 1.61075 + 0.00379495
[3500]  cv_agg's l1: 1.59549 + 0.00354415
[4000]  cv_agg's l1: 1.58352 + 0.00347779
[4500]  cv_agg's l1: 1.57433 + 0.0033483
[5000]  cv_agg's l1: 1.5668 + 0.00295789
[5500]  cv_agg's l1: 1.56072 + 0.00314398
[6000]  cv_agg's l1: 1.55555 + 0.00311031
[6500]  cv_agg's l1: 1.55145 + 0.00316783
[7000]  cv_agg's l1: 1.54789 + 0.00313881
[7500]  cv_agg's l1: 1.54483 + 0.0031963
[8000]  cv_agg's l1: 1.54228 + 0.00320211
[8500]  cv_agg's l1: 1.54 + 0.00319525
[9000]  cv_agg's l1: 1.53815 + 0.00319906
[9500]  cv_agg's l1: 1.53662 + 0.00322345
[10000] cv_agg's l1: 1.5352 + 0.00320117
[10500] cv_agg's l1: 1.534 + 0.00314227
[11000] cv_agg's l1: 1.53295 + 0.00318353
[11500] cv_agg's l1: 1.53198 + 0.00313367
[12000] cv_agg's l1: 1.53115 + 0.0031667
[12500] cv_agg's l1: 1.53044 + 0.00316765
[13000] cv_agg's l1: 1.5298 + 0.00312255
[13500] cv_agg's l1: 1.52926 + 0.00314636
[14000] cv_agg's l1: 1.52875 + 0.00315583
[14500] cv_agg's l1: 1.52831 + 0.00315941
[15000] cv_agg's l1: 1.52792 + 0.00314329
[15500] cv_agg's l1: 1.52756 + 0.00313612
[16000] cv_agg's l1: 1.52727 + 0.00312674
[16500] cv_agg's l1: 1.52696 + 0.00312702
[17000] cv_agg's l1: 1.52669 + 0.00312154
[17500] cv_agg's l1: 1.52646 + 0.00311381
[18000] cv_agg's l1: 1.52624 + 0.00310595
[18500] cv_agg's l1: 1.52607 + 0.00310938
[19000] cv_agg's l1: 1.5259 + 0.00310978
[19500] cv_agg's l1: 1.52575 + 0.0031102
[20000] cv_agg's l1: 1.52561 + 0.00311685
[20500] cv_agg's l1: 1.52549 + 0.00312579
[21000] cv_agg's l1: 1.52538 + 0.00312646
[21500] cv_agg's l1: 1.52528 + 0.00312451
[22000] cv_agg's l1: 1.5252 + 0.00312526
[22500] cv_agg's l1: 1.52512 + 0.0031246
[23000] cv_agg's l1: 1.52504 + 0.00312844
[23500] cv_agg's l1: 1.52498 + 0.00312956
[24000] cv_agg's l1: 1.52492 + 0.00313408
[24500] cv_agg's l1: 1.52487 + 0.00313249
[25000] cv_agg's l1: 1.52482 + 0.00313076
[25500] cv_agg's l1: 1.52478 + 0.00312625
[26000] cv_agg's l1: 1.52473 + 0.00312501
[26500] cv_agg's l1: 1.5247 + 0.00312309
[27000] cv_agg's l1: 1.52466 + 0.0031246
[27500] cv_agg's l1: 1.52463 + 0.00312294
[28000] cv_agg's l1: 1.5246 + 0.00312347
[28500] cv_agg's l1: 1.52458 + 0.00312286
[29000] cv_agg's l1: 1.52455 + 0.00312445
[29500] cv_agg's l1: 1.52453 + 0.00312487
[30000] cv_agg's l1: 1.52451 + 0.0031225
finish fitting.
Traceback (most recent call last):
  File "train_v001_005.py", line 241, in <module>
    proxy = extraction_cb.boosters_proxy
  File "train_v001_005.py", line 111, in boosters_proxy
    self._assert_called_cb()
  File "train_v001_005.py", line 107, in _assert_called_cb
    raise RuntimeError('callback has not called yet')
RuntimeError: callback has not called yet
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 2
groups: (28560,)
start fitting...
[500]   cv_agg's l1: 0.90471 + 0.0107628
[1000]  cv_agg's l1: 0.87292 + 0.0117888
[1500]  cv_agg's l1: 0.863256 + 0.0118691
[2000]  cv_agg's l1: 0.859587 + 0.0119305
[2500]  cv_agg's l1: 0.858236 + 0.0119803
[3000]  cv_agg's l1: 0.857624 + 0.0119358
[3500]  cv_agg's l1: 0.857381 + 0.0119412
[4000]  cv_agg's l1: 0.857276 + 0.0119352
[4500]  cv_agg's l1: 0.857243 + 0.0119299
[5000]  cv_agg's l1: 0.857227 + 0.0119295
[5500]  cv_agg's l1: 0.857217 + 0.0119297
[6000]  cv_agg's l1: 0.857211 + 0.0119302
[6500]  cv_agg's l1: 0.857207 + 0.0119304
[7000]  cv_agg's l1: 0.857205 + 0.0119311
[7500]  cv_agg's l1: 0.857201 + 0.0119313
finish fitting.
Traceback (most recent call last):
  File "train_v001_005.py", line 241, in <module>
    proxy = extraction_cb.boosters_proxy
  File "train_v001_005.py", line 111, in boosters_proxy
    self._assert_called_cb()
  File "train_v001_005.py", line 107, in _assert_called_cb
    raise RuntimeError('callback has not called yet')
RuntimeError: callback has not called yet
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 3
groups: (85003,)
start fitting...
[500]   cv_agg's l1: 0.980183 + 0.00191534
[1000]  cv_agg's l1: 0.89416 + 0.00137211
[1500]  cv_agg's l1: 0.847612 + 0.00108302
[2000]  cv_agg's l1: 0.817313 + 0.000900473
[2500]  cv_agg's l1: 0.793895 + 0.00124192
[3000]  cv_agg's l1: 0.775711 + 0.00105494
[3500]  cv_agg's l1: 0.761246 + 0.000843976
[4000]  cv_agg's l1: 0.74897 + 0.000785753
[4500]  cv_agg's l1: 0.738812 + 0.000992327
[5000]  cv_agg's l1: 0.730016 + 0.000934236
[5500]  cv_agg's l1: 0.722204 + 0.000798588
[6000]  cv_agg's l1: 0.715355 + 0.000837452
[6500]  cv_agg's l1: 0.709199 + 0.000943208
[7000]  cv_agg's l1: 0.703676 + 0.00114647
[7500]  cv_agg's l1: 0.698675 + 0.00115195
[8000]  cv_agg's l1: 0.694235 + 0.0011719
[8500]  cv_agg's l1: 0.690182 + 0.00105427
[9000]  cv_agg's l1: 0.68636 + 0.00101022
[9500]  cv_agg's l1: 0.682935 + 0.00106258
[10000] cv_agg's l1: 0.67974 + 0.000971091
[10500] cv_agg's l1: 0.676754 + 0.000872842
[11000] cv_agg's l1: 0.674093 + 0.000877686
[11500] cv_agg's l1: 0.671589 + 0.000924221
[12000] cv_agg's l1: 0.669161 + 0.00086791
[12500] cv_agg's l1: 0.666923 + 0.000927197
[13000] cv_agg's l1: 0.664828 + 0.000969258
[13500] cv_agg's l1: 0.662811 + 0.000991517
[14000] cv_agg's l1: 0.660938 + 0.000944303
[14500] cv_agg's l1: 0.659165 + 0.000976129
[15000] cv_agg's l1: 0.657531 + 0.000988066
[15500] cv_agg's l1: 0.655962 + 0.000984678
[16000] cv_agg's l1: 0.654516 + 0.000976414
[16500] cv_agg's l1: 0.653118 + 0.000973631
[17000] cv_agg's l1: 0.651768 + 0.000947877
[17500] cv_agg's l1: 0.650506 + 0.000896307
[18000] cv_agg's l1: 0.6493 + 0.000885184
[18500] cv_agg's l1: 0.648142 + 0.000867388
[19000] cv_agg's l1: 0.647072 + 0.000874536
[19500] cv_agg's l1: 0.646042 + 0.000877989
[20000] cv_agg's l1: 0.645054 + 0.000840068
[20500] cv_agg's l1: 0.644119 + 0.000833014
[21000] cv_agg's l1: 0.643205 + 0.000830744
[21500] cv_agg's l1: 0.642304 + 0.000846068
[22000] cv_agg's l1: 0.641474 + 0.000831231
[22500] cv_agg's l1: 0.64068 + 0.000862746
[23000] cv_agg's l1: 0.639896 + 0.000853514
[23500] cv_agg's l1: 0.639147 + 0.000843153
[24000] cv_agg's l1: 0.638437 + 0.000834281
[24500] cv_agg's l1: 0.637721 + 0.000832146
[25000] cv_agg's l1: 0.637069 + 0.000816222
[25500] cv_agg's l1: 0.636432 + 0.000811494
[26000] cv_agg's l1: 0.635824 + 0.000821378
[26500] cv_agg's l1: 0.635246 + 0.000822836
[27000] cv_agg's l1: 0.634692 + 0.00082193
[27500] cv_agg's l1: 0.634157 + 0.000837206
[28000] cv_agg's l1: 0.633646 + 0.000853549
[28500] cv_agg's l1: 0.633155 + 0.000846181
[29000] cv_agg's l1: 0.632674 + 0.000845911
[29500] cv_agg's l1: 0.632173 + 0.000831542
[30000] cv_agg's l1: 0.631712 + 0.00083728
finish fitting.
Traceback (most recent call last):
  File "train_v001_005.py", line 241, in <module>
    proxy = extraction_cb.boosters_proxy
  File "train_v001_005.py", line 111, in boosters_proxy
    self._assert_called_cb()
  File "train_v001_005.py", line 107, in _assert_called_cb
    raise RuntimeError('callback has not called yet')
RuntimeError: callback has not called yet
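Note on the failure mode above: each v001_005 fit completed, then `train_v001_005.py` crashed while reading `extraction_cb.boosters_proxy`, whose guard raises `RuntimeError('callback has not called yet')`, which suggests the booster-extraction callback was never invoked during `lgb.cv`. The snippet below is a minimal sketch of that guard pattern, not the actual script: the class name, variable names, and synthetic data are assumptions. The point is that `boosters_proxy` is only safe to read after the callback object has been passed via `lightgbm.cv(..., callbacks=[...])` and has fired at least once.

```python
import numpy as np
import lightgbm as lgb


class ModelExtractionCallback:
    """Hypothetical sketch of a booster-extraction callback for lgb.cv.

    lgb.cv() builds a CVBooster internally; a callback can capture it via
    env.model. Reading boosters_proxy before the callback has fired raises
    the same RuntimeError seen in the log above.
    """

    def __init__(self):
        self._model = None

    def __call__(self, env):
        # env is a lightgbm CallbackEnv; env.model is the CVBooster.
        self._model = env.model

    def _assert_called_cb(self):
        if self._model is None:
            raise RuntimeError('callback has not called yet')

    @property
    def boosters_proxy(self):
        self._assert_called_cb()
        return self._model


# Tiny synthetic example (assumed data, not the competition features).
X = np.random.rand(200, 5)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + np.random.randn(200) * 0.1
dtrain = lgb.Dataset(X, label=y)

extraction_cb = ModelExtractionCallback()
lgb.cv(
    {'objective': 'regression_l1', 'metric': 'mae', 'verbose': -1},
    dtrain,
    num_boost_round=50,
    nfold=5,
    callbacks=[extraction_cb],   # omitting this reproduces the RuntimeError
)
proxy = extraction_cb.boosters_proxy   # safe only after cv() has run the callback
```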
matsuken92 commented 5 years ago

v001_006

start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 1
groups: (84747,)
start fitting...
[500]   cv_agg's l1: 1.92139 + 0.00461778
[1000]  cv_agg's l1: 1.77175 + 0.0038184
[1500]  cv_agg's l1: 1.70208 + 0.00351828
[2000]  cv_agg's l1: 1.65945 + 0.00313633
[2500]  cv_agg's l1: 1.63099 + 0.00330971
[3000]  cv_agg's l1: 1.61045 + 0.00286912
[3500]  cv_agg's l1: 1.59546 + 0.00276308
[4000]  cv_agg's l1: 1.58353 + 0.00274057
[4500]  cv_agg's l1: 1.57418 + 0.00277616
[5000]  cv_agg's l1: 1.56678 + 0.00279619
[5500]  cv_agg's l1: 1.56058 + 0.0026757
[6000]  cv_agg's l1: 1.55553 + 0.00277113
[6500]  cv_agg's l1: 1.5512 + 0.00266363
[7000]  cv_agg's l1: 1.54763 + 0.00269958
[7500]  cv_agg's l1: 1.54463 + 0.00255661
[8000]  cv_agg's l1: 1.542 + 0.00250338
[8500]  cv_agg's l1: 1.53983 + 0.00255834
[9000]  cv_agg's l1: 1.53793 + 0.00254071
[9500]  cv_agg's l1: 1.53636 + 0.00252104
[10000] cv_agg's l1: 1.53498 + 0.00254139
[10500] cv_agg's l1: 1.53381 + 0.00257815
[11000] cv_agg's l1: 1.53279 + 0.00255905
[11500] cv_agg's l1: 1.53186 + 0.00256618
[12000] cv_agg's l1: 1.53104 + 0.002553
[12500] cv_agg's l1: 1.53027 + 0.00255277
[13000] cv_agg's l1: 1.52962 + 0.00254373
[13500] cv_agg's l1: 1.52904 + 0.00254885
[14000] cv_agg's l1: 1.52855 + 0.00252145
[14500] cv_agg's l1: 1.52812 + 0.00252458
[15000] cv_agg's l1: 1.52773 + 0.00250496
[15500] cv_agg's l1: 1.52735 + 0.00251653
[16000] cv_agg's l1: 1.52703 + 0.00250408
[16500] cv_agg's l1: 1.52676 + 0.00249464
[17000] cv_agg's l1: 1.52651 + 0.00251356
[17500] cv_agg's l1: 1.52628 + 0.00251587
[18000] cv_agg's l1: 1.52606 + 0.0025239
[18500] cv_agg's l1: 1.52587 + 0.00251956
[19000] cv_agg's l1: 1.5257 + 0.00251638
[19500] cv_agg's l1: 1.52555 + 0.00251747
[20000] cv_agg's l1: 1.5254 + 0.00251318
[20500] cv_agg's l1: 1.5253 + 0.00250276
[21000] cv_agg's l1: 1.52519 + 0.00250504
[21500] cv_agg's l1: 1.5251 + 0.00250663
[22000] cv_agg's l1: 1.52501 + 0.00251096
[22500] cv_agg's l1: 1.52492 + 0.00250561
[23000] cv_agg's l1: 1.52486 + 0.00250144
[23500] cv_agg's l1: 1.52479 + 0.00250343
[24000] cv_agg's l1: 1.52473 + 0.00250636
[24500] cv_agg's l1: 1.52468 + 0.00250913
[25000] cv_agg's l1: 1.52463 + 0.00250184
[25500] cv_agg's l1: 1.52459 + 0.00250014
[26000] cv_agg's l1: 1.52455 + 0.00249767
[26500] cv_agg's l1: 1.52451 + 0.00249915
[27000] cv_agg's l1: 1.52447 + 0.00250087
[27500] cv_agg's l1: 1.52444 + 0.0025006
[28000] cv_agg's l1: 1.52442 + 0.00249951
[28500] cv_agg's l1: 1.52439 + 0.00249921
[29000] cv_agg's l1: 1.52437 + 0.00249956
[29500] cv_agg's l1: 1.52435 + 0.00250115
[30000] cv_agg's l1: 1.52433 + 0.0025007
       l1-mean  l1-stdv
29995  1.52433  0.00250
29996  1.52433  0.00250
29997  1.52433  0.00250
29998  1.52433  0.00250
29999  1.52433  0.00250
finish fitting.
create oof preds.
[     1     16     17 ... 709407 709408 709414]
[     6      7      9 ... 709388 709410 709412]
[     8     14     21 ... 709401 709405 709415]
[     2      3     11 ... 709402 709403 709404]
[     0      4      5 ... 709409 709411 709413]
mae on oof preds: 1.5243265692339145
Traceback (most recent call last):
  File "train_v001_006.py", line 272, in <module>
    sub['scalar_coupling_constant'] = y_pred_proba_avg
  File "/home/kenichi.matsui/anaconda3/envs/kaggle_3.6/lib/python3.6/site-packages/pandas/core/frame.py", line 3119, in __setitem__
    self._set_item(key, value)
  File "/home/kenichi.matsui/anaconda3/envs/kaggle_3.6/lib/python3.6/site-packages/pandas/core/frame.py", line 3194, in _set_item
    value = self._sanitize_column(key, value)
  File "/home/kenichi.matsui/anaconda3/envs/kaggle_3.6/lib/python3.6/site-packages/pandas/core/frame.py", line 3391, in _sanitize_column
    value = _sanitize_index(value, self.index, copy=False)
  File "/home/kenichi.matsui/anaconda3/envs/kaggle_3.6/lib/python3.6/site-packages/pandas/core/series.py", line 4001, in _sanitize_index
    raise ValueError('Length of values does not match length of ' 'index')
ValueError: Length of values does not match length of index
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 2
groups: (28560,)
start fitting...
[500]   cv_agg's l1: 0.901537 + 0.00991957
[1000]  cv_agg's l1: 0.869324 + 0.00928769
[1500]  cv_agg's l1: 0.860058 + 0.00966572
[2000]  cv_agg's l1: 0.856402 + 0.00963033
[2500]  cv_agg's l1: 0.854871 + 0.00956063
[3000]  cv_agg's l1: 0.85424 + 0.00957672
[3500]  cv_agg's l1: 0.853991 + 0.00957592
[4000]  cv_agg's l1: 0.853893 + 0.00956834
[4500]  cv_agg's l1: 0.853862 + 0.00957168
[5000]  cv_agg's l1: 0.853845 + 0.00957272
[5500]  cv_agg's l1: 0.853838 + 0.00957407
[6000]  cv_agg's l1: 0.853832 + 0.00957441
[6500]  cv_agg's l1: 0.853829 + 0.00957417
[7000]  cv_agg's l1: 0.853825 + 0.00957429
[7500]  cv_agg's l1: 0.853822 + 0.00957513
[8000]  cv_agg's l1: 0.85382 + 0.00957546
[8500]  cv_agg's l1: 0.853818 + 0.00957419
[9000]  cv_agg's l1: 0.853816 + 0.00957516
[9500]  cv_agg's l1: 0.853815 + 0.00957564
[10000] cv_agg's l1: 0.853814 + 0.00957581
[10500] cv_agg's l1: 0.853813 + 0.00957564
[11000] cv_agg's l1: 0.853811 + 0.00957559
       l1-mean  l1-stdv
11206  0.85381  0.00958
11207  0.85381  0.00958
11208  0.85381  0.00958
11209  0.85381  0.00958
11210  0.85381  0.00958
finish fitting.
create oof preds.
[    1    14    27 ... 43360 43361 43362]
[    3     6     9 ... 43351 43352 43356]
[    0     2     8 ... 43354 43357 43359]
[   10    15    16 ... 43349 43350 43353]
[    4     5     7 ... 43348 43355 43358]
mae on oof preds: 0.8538110379257255
Traceback (most recent call last):
  File "train_v001_006.py", line 272, in <module>
    sub['scalar_coupling_constant'] = y_pred_proba_avg
  File "/home/kenichi.matsui/anaconda3/envs/kaggle_3.6/lib/python3.6/site-packages/pandas/core/frame.py", line 3119, in __setitem__
    self._set_item(key, value)
  File "/home/kenichi.matsui/anaconda3/envs/kaggle_3.6/lib/python3.6/site-packages/pandas/core/frame.py", line 3194, in _set_item
    value = self._sanitize_column(key, value)
  File "/home/kenichi.matsui/anaconda3/envs/kaggle_3.6/lib/python3.6/site-packages/pandas/core/frame.py", line 3391, in _sanitize_column
    value = _sanitize_index(value, self.index, copy=False)
  File "/home/kenichi.matsui/anaconda3/envs/kaggle_3.6/lib/python3.6/site-packages/pandas/core/series.py", line 4001, in _sanitize_index
    raise ValueError('Length of values does not match length of ' 'index')
ValueError: Length of values does not match length of index
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 3
groups: (85003,)
start fitting...
[500]   cv_agg's l1: 0.98077 + 0.00191415
[1000]  cv_agg's l1: 0.894928 + 0.00177046
[1500]  cv_agg's l1: 0.847546 + 0.00118025
[2000]  cv_agg's l1: 0.816446 + 0.000861219
[2500]  cv_agg's l1: 0.793535 + 0.000852951
[3000]  cv_agg's l1: 0.775425 + 0.000664927
[3500]  cv_agg's l1: 0.761333 + 0.000687159
[4000]  cv_agg's l1: 0.749001 + 0.000731968
[4500]  cv_agg's l1: 0.73869 + 0.000959735
[5000]  cv_agg's l1: 0.729789 + 0.00101112
[5500]  cv_agg's l1: 0.722179 + 0.000998983
[6000]  cv_agg's l1: 0.715333 + 0.00100887
[6500]  cv_agg's l1: 0.709252 + 0.00101652
[7000]  cv_agg's l1: 0.703745 + 0.000954553
[7500]  cv_agg's l1: 0.698753 + 0.00105635
[8000]  cv_agg's l1: 0.694184 + 0.000981459
[8500]  cv_agg's l1: 0.690051 + 0.00104604
[9000]  cv_agg's l1: 0.68633 + 0.00101638
[9500]  cv_agg's l1: 0.682792 + 0.000984454
[10000] cv_agg's l1: 0.679617 + 0.000889432
[10500] cv_agg's l1: 0.676753 + 0.000885709
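Note on the v001_006 crashes above: both runs that reached the prediction step failed at `sub['scalar_coupling_constant'] = y_pred_proba_avg`, most likely because the averaged predictions cover only the rows of the current `mol_type`, while `sub` indexes the full test set. Below is a minimal sketch of an index-aligned assignment that avoids the length mismatch, using hypothetical names that mirror the log (`sub`, `mol_type`, `y_pred_avg`); it is not the actual fix applied in the script.

```python
import numpy as np
import pandas as pd

# Assumed shapes for illustration: a full submission frame and a per-type prediction.
sub = pd.DataFrame({'id': np.arange(10), 'scalar_coupling_constant': 0.0})
test_type = np.array([1, 2, 1, 1, 2, 1, 2, 2, 1, 2])   # hypothetical mol_type per test row
mol_type = 1
mask = test_type == mol_type
y_pred_avg = np.random.rand(mask.sum())                 # predictions for this type only

# sub['scalar_coupling_constant'] = y_pred_avg          # ValueError: length mismatch
sub.loc[mask, 'scalar_coupling_constant'] = y_pred_avg  # assign only the matching rows
```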
matsuken92 commented 5 years ago

v001_008 LB: -0.545

start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 1
groups: (84747,)
start fitting...
[500]   cv_agg's l1: 1.92017 + 0.00333796
[1000]  cv_agg's l1: 1.76255 + 0.00409135
[1500]  cv_agg's l1: 1.68695 + 0.0041532
[2000]  cv_agg's l1: 1.63967 + 0.00328814
[2500]  cv_agg's l1: 1.60709 + 0.00323562
[3000]  cv_agg's l1: 1.5841 + 0.00301814
[3500]  cv_agg's l1: 1.56615 + 0.00308258
[4000]  cv_agg's l1: 1.55197 + 0.0030887
[4500]  cv_agg's l1: 1.54074 + 0.00302081
[5000]  cv_agg's l1: 1.5318 + 0.00298532
[5500]  cv_agg's l1: 1.52431 + 0.00279738
[6000]  cv_agg's l1: 1.51799 + 0.00297266
[6500]  cv_agg's l1: 1.51255 + 0.00295734
[7000]  cv_agg's l1: 1.50818 + 0.00300976
[7500]  cv_agg's l1: 1.50428 + 0.00305129
[8000]  cv_agg's l1: 1.50106 + 0.00303033
[8500]  cv_agg's l1: 1.49829 + 0.00299036
[9000]  cv_agg's l1: 1.49583 + 0.00302374
[9500]  cv_agg's l1: 1.4937 + 0.0030926
[10000] cv_agg's l1: 1.49189 + 0.00314157
[10500] cv_agg's l1: 1.49028 + 0.00316945
[11000] cv_agg's l1: 1.48885 + 0.00327312
[11500] cv_agg's l1: 1.48754 + 0.00328743
[12000] cv_agg's l1: 1.48644 + 0.00332587
[12500] cv_agg's l1: 1.48547 + 0.00333812
[13000] cv_agg's l1: 1.48457 + 0.00331199
[13500] cv_agg's l1: 1.48377 + 0.00332461
[14000] cv_agg's l1: 1.48306 + 0.00333219
[14500] cv_agg's l1: 1.48243 + 0.00333103
[15000] cv_agg's l1: 1.48187 + 0.00336442
[15500] cv_agg's l1: 1.48137 + 0.00335763
[16000] cv_agg's l1: 1.48093 + 0.00335537
[16500] cv_agg's l1: 1.48052 + 0.0033785
[17000] cv_agg's l1: 1.48013 + 0.00338947
[17500] cv_agg's l1: 1.47981 + 0.0033785
[18000] cv_agg's l1: 1.47952 + 0.00337348
[18500] cv_agg's l1: 1.47924 + 0.00337125
[19000] cv_agg's l1: 1.47898 + 0.00338638
[19500] cv_agg's l1: 1.47876 + 0.00338668
[20000] cv_agg's l1: 1.47855 + 0.00338921
[20500] cv_agg's l1: 1.47838 + 0.00339039
[21000] cv_agg's l1: 1.47819 + 0.00338729
[21500] cv_agg's l1: 1.47803 + 0.003381
[22000] cv_agg's l1: 1.47788 + 0.00337847
[22500] cv_agg's l1: 1.47775 + 0.00338327
[23000] cv_agg's l1: 1.47762 + 0.00338197
[23500] cv_agg's l1: 1.47752 + 0.00338423
[24000] cv_agg's l1: 1.47741 + 0.00338231
[24500] cv_agg's l1: 1.47731 + 0.00338273
[25000] cv_agg's l1: 1.47723 + 0.00338338
[25500] cv_agg's l1: 1.47715 + 0.00338274
[26000] cv_agg's l1: 1.47709 + 0.00337902
[26500] cv_agg's l1: 1.47701 + 0.00338214
[27000] cv_agg's l1: 1.47695 + 0.00337714
[27500] cv_agg's l1: 1.4769 + 0.00338059
[28000] cv_agg's l1: 1.47685 + 0.00338256
[28500] cv_agg's l1: 1.4768 + 0.00338214
[29000] cv_agg's l1: 1.47676 + 0.00338276
[29500] cv_agg's l1: 1.47672 + 0.00337813
[30000] cv_agg's l1: 1.47669 + 0.00337895
       l1-mean  l1-stdv
29993  1.47669  0.00338
29994  1.47669  0.00338
29995  1.47669  0.00338
29996  1.47669  0.00338
29997  1.47669  0.00338
finish fitting.
create oof preds.
[     1     16     17 ... 709407 709408 709414]
[     6      7      9 ... 709388 709410 709412]
[     8     14     21 ... 709401 709405 709415]
[     2      3     11 ... 709402 709403 709404]
[     0      4      5 ... 709409 709411 709413]
mae on oof preds: 1.4766851801470764
finish.
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 2
groups: (28560,)
start fitting...
[500]   cv_agg's l1: 0.87365 + 0.0129633
[1000]  cv_agg's l1: 0.833818 + 0.0121379
[1500]  cv_agg's l1: 0.821046 + 0.0120926
[2000]  cv_agg's l1: 0.815632 + 0.0115249
[2500]  cv_agg's l1: 0.813199 + 0.0115265
[3000]  cv_agg's l1: 0.812045 + 0.0114949
[3500]  cv_agg's l1: 0.811543 + 0.0114593
[4000]  cv_agg's l1: 0.811297 + 0.0114466
[4500]  cv_agg's l1: 0.811185 + 0.0114466
[5000]  cv_agg's l1: 0.811125 + 0.0114451
[5500]  cv_agg's l1: 0.811097 + 0.0114472
[6000]  cv_agg's l1: 0.811083 + 0.0114472
[6500]  cv_agg's l1: 0.811071 + 0.0114476
[7000]  cv_agg's l1: 0.811062 + 0.0114471
[7500]  cv_agg's l1: 0.811057 + 0.0114464
[8000]  cv_agg's l1: 0.811053 + 0.0114464
[8500]  cv_agg's l1: 0.81105 + 0.0114472
[9000]  cv_agg's l1: 0.811046 + 0.011447
[9500]  cv_agg's l1: 0.811044 + 0.0114467
[10000] cv_agg's l1: 0.811042 + 0.0114459
[10500] cv_agg's l1: 0.81104 + 0.0114459
[11000] cv_agg's l1: 0.811038 + 0.0114458
[11500] cv_agg's l1: 0.811037 + 0.0114454
[12000] cv_agg's l1: 0.811035 + 0.0114453
[12500] cv_agg's l1: 0.811035 + 0.0114451
[13000] cv_agg's l1: 0.811034 + 0.0114451
[13500] cv_agg's l1: 0.811032 + 0.0114448
[14000] cv_agg's l1: 0.811031 + 0.0114453
[14500] cv_agg's l1: 0.811031 + 0.0114453
[15000] cv_agg's l1: 0.81103 + 0.0114453
[15500] cv_agg's l1: 0.811029 + 0.0114453
[16000] cv_agg's l1: 0.811029 + 0.0114452
[16500] cv_agg's l1: 0.811028 + 0.0114453
[17000] cv_agg's l1: 0.811027 + 0.0114451
[17500] cv_agg's l1: 0.811027 + 0.0114449
[18000] cv_agg's l1: 0.811027 + 0.0114448
[18500] cv_agg's l1: 0.811026 + 0.0114447
[19000] cv_agg's l1: 0.811026 + 0.0114447
       l1-mean  l1-stdv
18780  0.81103  0.01144
18781  0.81103  0.01144
18782  0.81103  0.01144
18783  0.81103  0.01144
18784  0.81103  0.01144
finish fitting.
create oof preds.
[    1    14    27 ... 43360 43361 43362]
[    3     6     9 ... 43351 43352 43356]
[    0     2     8 ... 43354 43357 43359]
[   10    15    16 ... 43349 43350 43353]
[    4     5     7 ... 43348 43355 43358]
mae on oof preds: 0.8110261801402201
finish.
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 3
groups: (85003,)
start fitting...
[500]   cv_agg's l1: 0.977538 + 0.00163317
[1000]  cv_agg's l1: 0.892196 + 0.00187349
[1500]  cv_agg's l1: 0.844873 + 0.00174977
[2000]  cv_agg's l1: 0.813238 + 0.00176432
[2500]  cv_agg's l1: 0.789613 + 0.0015845
[3000]  cv_agg's l1: 0.771433 + 0.00158251
[3500]  cv_agg's l1: 0.756496 + 0.0014235
[4000]  cv_agg's l1: 0.744167 + 0.00144777
[4500]  cv_agg's l1: 0.733556 + 0.00135285
[5000]  cv_agg's l1: 0.724422 + 0.00117878
[5500]  cv_agg's l1: 0.716434 + 0.00111635
[6000]  cv_agg's l1: 0.709187 + 0.001185
[6500]  cv_agg's l1: 0.702793 + 0.00119241
[7000]  cv_agg's l1: 0.696996 + 0.00112547
[7500]  cv_agg's l1: 0.691847 + 0.00120413
[8000]  cv_agg's l1: 0.687138 + 0.00108982
[8500]  cv_agg's l1: 0.682919 + 0.000954191
[9000]  cv_agg's l1: 0.67907 + 0.000936306
[9500]  cv_agg's l1: 0.675478 + 0.000907057
[10000] cv_agg's l1: 0.672209 + 0.000886008
[10500] cv_agg's l1: 0.669081 + 0.000976032
[11000] cv_agg's l1: 0.666199 + 0.000998252
[11500] cv_agg's l1: 0.663617 + 0.000952193
[12000] cv_agg's l1: 0.661135 + 0.00102767
[12500] cv_agg's l1: 0.658816 + 0.000997111
[13000] cv_agg's l1: 0.656618 + 0.000993705
[13500] cv_agg's l1: 0.654555 + 0.00102685
[14000] cv_agg's l1: 0.652625 + 0.000944461
[14500] cv_agg's l1: 0.650763 + 0.000923397
[15000] cv_agg's l1: 0.649016 + 0.000945005
[15500] cv_agg's l1: 0.647372 + 0.000941293
[16000] cv_agg's l1: 0.645786 + 0.000929265
[16500] cv_agg's l1: 0.644347 + 0.000919348
[17000] cv_agg's l1: 0.642919 + 0.000950035
[17500] cv_agg's l1: 0.641603 + 0.00096372
[18000] cv_agg's l1: 0.640306 + 0.00097845
[18500] cv_agg's l1: 0.639092 + 0.000972145
[19000] cv_agg's l1: 0.63791 + 0.000974535
[19500] cv_agg's l1: 0.636796 + 0.00102065
[20000] cv_agg's l1: 0.635738 + 0.00099518
[20500] cv_agg's l1: 0.634709 + 0.00100258
[21000] cv_agg's l1: 0.633735 + 0.00101068
[21500] cv_agg's l1: 0.632808 + 0.00100141
[22000] cv_agg's l1: 0.631919 + 0.00100152
[22500] cv_agg's l1: 0.631035 + 0.000974307
[23000] cv_agg's l1: 0.630215 + 0.000960518
[23500] cv_agg's l1: 0.62943 + 0.000970988
[24000] cv_agg's l1: 0.628647 + 0.000969273
[24500] cv_agg's l1: 0.627906 + 0.000968208
[25000] cv_agg's l1: 0.62718 + 0.000956734
[25500] cv_agg's l1: 0.62651 + 0.00095861
[26000] cv_agg's l1: 0.625859 + 0.000976467
[26500] cv_agg's l1: 0.625223 + 0.000949914
[27000] cv_agg's l1: 0.624605 + 0.000966883
[27500] cv_agg's l1: 0.624018 + 0.000964267
[28000] cv_agg's l1: 0.623434 + 0.000942568
[28500] cv_agg's l1: 0.622886 + 0.000941125
[29000] cv_agg's l1: 0.622352 + 0.000934674
[29500] cv_agg's l1: 0.621828 + 0.000956918
[30000] cv_agg's l1: 0.621336 + 0.000949163
       l1-mean  l1-stdv
29995  0.62134  0.00095
29996  0.62134  0.00095
29997  0.62134  0.00095
29998  0.62134  0.00095
29999  0.62134  0.00095
finish fitting.
create oof preds.
[      1       3       9 ... 3905356 3905366 3905367]
[      5       6       7 ... 3905353 3905355 3905359]
[     20      26      36 ... 3905350 3905352 3905360]
[      0      15      17 ... 3905333 3905339 3905357]
[      2       4      10 ... 3905363 3905364 3905365]
mae on oof preds: 0.6213359493451289
finish.
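For context on the recurring `groups: (N,)` line, the five index arrays, and `mae on oof preds`: this pattern is consistent with group-wise 5-fold CV keyed on the molecule, with out-of-fold predictions scored by MAE. The sketch below reproduces that logging shape under those assumptions (synthetic data, scikit-learn `GroupKFold`, a plain `LGBMRegressor`); it is not the actual training script.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.metrics import mean_absolute_error
import lightgbm as lgb

# Synthetic stand-in: rows grouped by a molecule-like id (assumption).
rng = np.random.default_rng(0)
X = rng.random((1000, 8))
y = X[:, 0] * 3 - X[:, 1] + rng.normal(0, 0.1, 1000)
groups = rng.integers(0, 200, 1000)
print('groups:', np.unique(groups).shape)     # analogous to "groups: (84747,)"

oof = np.zeros(len(y))
for trn_idx, val_idx in GroupKFold(n_splits=5).split(X, y, groups):
    print(val_idx)                            # per-fold validation row indices, as in the log
    model = lgb.LGBMRegressor(objective='regression_l1', n_estimators=200)
    model.fit(X[trn_idx], y[trn_idx])
    oof[val_idx] = model.predict(X[val_idx])

print('mae on oof preds:', mean_absolute_error(y, oof))
```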
matsuken92 commented 5 years ago

v001_009 LB: -0.569

start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 1
groups: (84747,)
start fitting...
[500]   cv_agg's l1: 1.91984 + 0.0030756
[1000]  cv_agg's l1: 1.76075 + 0.00211604
[1500]  cv_agg's l1: 1.68594 + 0.00285243
[2000]  cv_agg's l1: 1.6394 + 0.0029538
[2500]  cv_agg's l1: 1.60718 + 0.00271196
[3000]  cv_agg's l1: 1.58398 + 0.00293497
[3500]  cv_agg's l1: 1.5659 + 0.00265335
[4000]  cv_agg's l1: 1.55174 + 0.00235089
[4500]  cv_agg's l1: 1.54073 + 0.00258796
[5000]  cv_agg's l1: 1.53172 + 0.00273397
[5500]  cv_agg's l1: 1.5243 + 0.00283452
[6000]  cv_agg's l1: 1.51815 + 0.00269442
[6500]  cv_agg's l1: 1.51286 + 0.00276292
[7000]  cv_agg's l1: 1.50839 + 0.00273656
[7500]  cv_agg's l1: 1.50446 + 0.0026531
[8000]  cv_agg's l1: 1.50131 + 0.00266764
[8500]  cv_agg's l1: 1.49849 + 0.00274222
[9000]  cv_agg's l1: 1.49605 + 0.00282652
[9500]  cv_agg's l1: 1.49392 + 0.00288573
[10000] cv_agg's l1: 1.49203 + 0.00283749
[10500] cv_agg's l1: 1.49039 + 0.00279305
[11000] cv_agg's l1: 1.48891 + 0.00279425
[11500] cv_agg's l1: 1.48762 + 0.00276788
[12000] cv_agg's l1: 1.4865 + 0.0027777
[12500] cv_agg's l1: 1.48543 + 0.00281328
[13000] cv_agg's l1: 1.4845 + 0.00282669
[13500] cv_agg's l1: 1.4837 + 0.00283615
[14000] cv_agg's l1: 1.48302 + 0.00281265
[14500] cv_agg's l1: 1.48239 + 0.00284157
[15000] cv_agg's l1: 1.48183 + 0.00285751
[15500] cv_agg's l1: 1.48128 + 0.00287238
[16000] cv_agg's l1: 1.48084 + 0.0028893
[16500] cv_agg's l1: 1.48043 + 0.00288374
[17000] cv_agg's l1: 1.48005 + 0.00289883
[17500] cv_agg's l1: 1.47971 + 0.00289941
[18000] cv_agg's l1: 1.4794 + 0.00289303
[18500] cv_agg's l1: 1.47912 + 0.00288548
[19000] cv_agg's l1: 1.47887 + 0.00288596
[19500] cv_agg's l1: 1.47864 + 0.00290738
[20000] cv_agg's l1: 1.47843 + 0.00290283
[20500] cv_agg's l1: 1.47823 + 0.00290995
[21000] cv_agg's l1: 1.47806 + 0.002905
[21500] cv_agg's l1: 1.4779 + 0.00290037
[22000] cv_agg's l1: 1.47776 + 0.00290618
[22500] cv_agg's l1: 1.47763 + 0.00290808
[23000] cv_agg's l1: 1.47751 + 0.00291056
[23500] cv_agg's l1: 1.47741 + 0.00291348
[24000] cv_agg's l1: 1.47731 + 0.00290939
[24500] cv_agg's l1: 1.47722 + 0.0029126
[25000] cv_agg's l1: 1.47714 + 0.00291844
[25500] cv_agg's l1: 1.47706 + 0.00292339
[26000] cv_agg's l1: 1.47699 + 0.00292517
[26500] cv_agg's l1: 1.47693 + 0.00292301
[27000] cv_agg's l1: 1.47687 + 0.00292594
[27500] cv_agg's l1: 1.47681 + 0.00292624
[28000] cv_agg's l1: 1.47676 + 0.00292635
[28500] cv_agg's l1: 1.47671 + 0.00292331
[29000] cv_agg's l1: 1.47667 + 0.00292547
[29500] cv_agg's l1: 1.47663 + 0.00292386
[30000] cv_agg's l1: 1.47659 + 0.00292571
       l1-mean  l1-stdv
29994  1.47659  0.00293
29995  1.47659  0.00293
29996  1.47659  0.00293
29997  1.47659  0.00293
29998  1.47659  0.00293
finish fitting.
best_iteration: 29999
create oof preds.
[     1     16     17 ... 709407 709408 709414]
[     6      7      9 ... 709388 709410 709412]
[     8     14     21 ... 709401 709405 709415]
[     2      3     11 ... 709402 709403 709404]
[     0      4      5 ... 709409 709411 709413]
mae on oof preds: 1.4765915806333614
finish.
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 2
groups: (28560,)
start fitting...
[500]   cv_agg's l1: 0.87365 + 0.0129633
[1000]  cv_agg's l1: 0.833818 + 0.0121379
[1500]  cv_agg's l1: 0.821046 + 0.0120926
[2000]  cv_agg's l1: 0.815632 + 0.0115249
[2500]  cv_agg's l1: 0.813199 + 0.0115265
[3000]  cv_agg's l1: 0.812045 + 0.0114949
[3500]  cv_agg's l1: 0.811543 + 0.0114593
[4000]  cv_agg's l1: 0.811297 + 0.0114466
[4500]  cv_agg's l1: 0.811185 + 0.0114466
[5000]  cv_agg's l1: 0.811125 + 0.0114451
[5500]  cv_agg's l1: 0.811097 + 0.0114472
[6000]  cv_agg's l1: 0.811083 + 0.0114472
[6500]  cv_agg's l1: 0.811071 + 0.0114476
[7000]  cv_agg's l1: 0.811062 + 0.0114471
[7500]  cv_agg's l1: 0.811057 + 0.0114464
[8000]  cv_agg's l1: 0.811053 + 0.0114464
[8500]  cv_agg's l1: 0.81105 + 0.0114472
[9000]  cv_agg's l1: 0.811046 + 0.011447
[9500]  cv_agg's l1: 0.811044 + 0.0114467
[10000] cv_agg's l1: 0.811042 + 0.0114459
[10500] cv_agg's l1: 0.81104 + 0.0114459
[11000] cv_agg's l1: 0.811038 + 0.0114458
[11500] cv_agg's l1: 0.811037 + 0.0114454
[12000] cv_agg's l1: 0.811035 + 0.0114453
[12500] cv_agg's l1: 0.811035 + 0.0114451
[13000] cv_agg's l1: 0.811034 + 0.0114451
[13500] cv_agg's l1: 0.811032 + 0.0114448
[14000] cv_agg's l1: 0.811031 + 0.0114453
[14500] cv_agg's l1: 0.811031 + 0.0114453
[15000] cv_agg's l1: 0.81103 + 0.0114453
[15500] cv_agg's l1: 0.811029 + 0.0114453
[16000] cv_agg's l1: 0.811029 + 0.0114452
[16500] cv_agg's l1: 0.811028 + 0.0114453
[17000] cv_agg's l1: 0.811027 + 0.0114451
[17500] cv_agg's l1: 0.811027 + 0.0114449
[18000] cv_agg's l1: 0.811027 + 0.0114448
[18500] cv_agg's l1: 0.811026 + 0.0114447
[19000] cv_agg's l1: 0.811026 + 0.0114447
       l1-mean  l1-stdv
18780  0.81103  0.01144
18781  0.81103  0.01144
18782  0.81103  0.01144
18783  0.81103  0.01144
18784  0.81103  0.01144
finish fitting.
best_iteration: 18785
create oof preds.
[    1    14    27 ... 43360 43361 43362]
[    3     6     9 ... 43351 43352 43356]
[    0     2     8 ... 43354 43357 43359]
[   10    15    16 ... 43349 43350 43353]
[    4     5     7 ... 43348 43355 43358]
mae on oof preds: 0.8110261801402201
finish.
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 3
groups: (81254,)
start fitting...
[500]   cv_agg's l1: 0.41554 + 0.00236676
[1000]  cv_agg's l1: 0.386701 + 0.00211775
[1500]  cv_agg's l1: 0.373462 + 0.00191575
[2000]  cv_agg's l1: 0.365862 + 0.00173314
[2500]  cv_agg's l1: 0.361009 + 0.00177746
[3000]  cv_agg's l1: 0.357826 + 0.00175682
[3500]  cv_agg's l1: 0.355493 + 0.00170238
[4000]  cv_agg's l1: 0.353847 + 0.00167308
[4500]  cv_agg's l1: 0.352575 + 0.00166999
[5000]  cv_agg's l1: 0.351652 + 0.00164093
[5500]  cv_agg's l1: 0.350909 + 0.00163097
[6000]  cv_agg's l1: 0.35033 + 0.00162002
[6500]  cv_agg's l1: 0.349905 + 0.00161183
[7000]  cv_agg's l1: 0.349549 + 0.00161673
[7500]  cv_agg's l1: 0.349263 + 0.00160759
[8000]  cv_agg's l1: 0.34902 + 0.00158934
[8500]  cv_agg's l1: 0.348828 + 0.00159108
[9000]  cv_agg's l1: 0.348668 + 0.00158271
[9500]  cv_agg's l1: 0.348542 + 0.00157724
[10000] cv_agg's l1: 0.348427 + 0.00157751
[10500] cv_agg's l1: 0.348337 + 0.00157625
[11000] cv_agg's l1: 0.348265 + 0.00157749
[11500] cv_agg's l1: 0.348201 + 0.00157583
[12000] cv_agg's l1: 0.348147 + 0.0015759
[12500] cv_agg's l1: 0.348101 + 0.00157695
[13000] cv_agg's l1: 0.348064 + 0.00157647
[13500] cv_agg's l1: 0.348033 + 0.00157874
[14000] cv_agg's l1: 0.348005 + 0.00157862
[14500] cv_agg's l1: 0.347981 + 0.00157796
[15000] cv_agg's l1: 0.34796 + 0.00157663
[15500] cv_agg's l1: 0.347942 + 0.00157681
[16000] cv_agg's l1: 0.347928 + 0.00157793
[16500] cv_agg's l1: 0.347914 + 0.00157854
[17000] cv_agg's l1: 0.347902 + 0.00157818
[17500] cv_agg's l1: 0.347892 + 0.00157886
[18000] cv_agg's l1: 0.347882 + 0.00157983
[18500] cv_agg's l1: 0.347873 + 0.00157953
[19000] cv_agg's l1: 0.347864 + 0.00157956
[19500] cv_agg's l1: 0.347857 + 0.0015793
[20000] cv_agg's l1: 0.34785 + 0.00157962
[20500] cv_agg's l1: 0.347844 + 0.00157958
[21000] cv_agg's l1: 0.347839 + 0.00157957
[21500] cv_agg's l1: 0.347834 + 0.00157932
[22000] cv_agg's l1: 0.34783 + 0.00157913
[22500] cv_agg's l1: 0.347826 + 0.00157929
[23000] cv_agg's l1: 0.347822 + 0.00157942
[23500] cv_agg's l1: 0.347819 + 0.00157924
[24000] cv_agg's l1: 0.347816 + 0.00157927
[24500] cv_agg's l1: 0.347813 + 0.00157926
[25000] cv_agg's l1: 0.347811 + 0.0015792
[25500] cv_agg's l1: 0.347809 + 0.00157874
[26000] cv_agg's l1: 0.347807 + 0.00157851
[26500] cv_agg's l1: 0.347805 + 0.0015784
[27000] cv_agg's l1: 0.347803 + 0.00157846
[27500] cv_agg's l1: 0.347802 + 0.00157848
[28000] cv_agg's l1: 0.347801 + 0.00157826
[28500] cv_agg's l1: 0.347799 + 0.00157818
[29000] cv_agg's l1: 0.347798 + 0.00157824
[29500] cv_agg's l1: 0.347797 + 0.00157826
[30000] cv_agg's l1: 0.347796 + 0.00157823
       l1-mean  l1-stdv
29995  0.34780  0.00158
29996  0.34780  0.00158
29997  0.34780  0.00158
29998  0.34780  0.00158
29999  0.34780  0.00158
finish fitting.
best_iteration: 30000
create oof preds.
[     7      9     16 ... 378023 378024 378035]
[     2      6     11 ... 378027 378029 378031]
[     1      8     12 ... 378025 378030 378034]
[    13     20     27 ... 378026 378028 378033]
[     0      3      4 ... 378007 378021 378032]
mae on oof preds: 0.3477962911825021
finish.
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 4
groups: (85000,)
start fitting...
[500]   cv_agg's l1: 0.996854 + 0.00074816
[1000]  cv_agg's l1: 0.911457 + 0.00177197
[1500]  cv_agg's l1: 0.864216 + 0.00141488
[2000]  cv_agg's l1: 0.832301 + 0.00129901
[2500]  cv_agg's l1: 0.808613 + 0.00154408
[3000]  cv_agg's l1: 0.790231 + 0.00122321
[3500]  cv_agg's l1: 0.774883 + 0.00118263
[4000]  cv_agg's l1: 0.762359 + 0.000921593
[4500]  cv_agg's l1: 0.751644 + 0.000888201
[5000]  cv_agg's l1: 0.742318 + 0.0010289
[5500]  cv_agg's l1: 0.733868 + 0.000886406
[6000]  cv_agg's l1: 0.726801 + 0.000826745
[6500]  cv_agg's l1: 0.72041 + 0.000851815
[7000]  cv_agg's l1: 0.714884 + 0.000831081
[7500]  cv_agg's l1: 0.709786 + 0.000939634
[8000]  cv_agg's l1: 0.705102 + 0.000893387
[8500]  cv_agg's l1: 0.70078 + 0.000792558
[9000]  cv_agg's l1: 0.696913 + 0.00075728
[9500]  cv_agg's l1: 0.693405 + 0.000774434
[10000] cv_agg's l1: 0.690127 + 0.000749425
[10500] cv_agg's l1: 0.687047 + 0.000762401
[11000] cv_agg's l1: 0.68419 + 0.000792661
[11500] cv_agg's l1: 0.681556 + 0.000878354
[12000] cv_agg's l1: 0.679118 + 0.000867516
[12500] cv_agg's l1: 0.676834 + 0.000851378
[13000] cv_agg's l1: 0.674678 + 0.000883557
[13500] cv_agg's l1: 0.672579 + 0.000881197
[14000] cv_agg's l1: 0.670681 + 0.000884286
[14500] cv_agg's l1: 0.668889 + 0.000878068
[15000] cv_agg's l1: 0.667233 + 0.00085422
[15500] cv_agg's l1: 0.665639 + 0.000831482
[16000] cv_agg's l1: 0.664116 + 0.000838503
[16500] cv_agg's l1: 0.662677 + 0.000812414
[17000] cv_agg's l1: 0.661306 + 0.000828537
[17500] cv_agg's l1: 0.660011 + 0.000805412
[18000] cv_agg's l1: 0.658746 + 0.000819477
[18500] cv_agg's l1: 0.657593 + 0.000799368
[19000] cv_agg's l1: 0.65645 + 0.000766297
[19500] cv_agg's l1: 0.655369 + 0.000752053
[20000] cv_agg's l1: 0.654349 + 0.000740855
[20500] cv_agg's l1: 0.653346 + 0.000721879
[21000] cv_agg's l1: 0.652431 + 0.000715033
[21500] cv_agg's l1: 0.651524 + 0.000715302
[22000] cv_agg's l1: 0.650668 + 0.000711285
[22500] cv_agg's l1: 0.649837 + 0.000696031
[23000] cv_agg's l1: 0.649031 + 0.000715716
[23500] cv_agg's l1: 0.648281 + 0.000707255
[24000] cv_agg's l1: 0.647557 + 0.000724787
[24500] cv_agg's l1: 0.646858 + 0.000717069
[25000] cv_agg's l1: 0.646199 + 0.000724936
[25500] cv_agg's l1: 0.645557 + 0.000722492
[26000] cv_agg's l1: 0.644945 + 0.000720828
[26500] cv_agg's l1: 0.644349 + 0.000728324
[27000] cv_agg's l1: 0.643775 + 0.000736916
[27500] cv_agg's l1: 0.643223 + 0.000707783
[28000] cv_agg's l1: 0.642672 + 0.000726625
[28500] cv_agg's l1: 0.642146 + 0.000715752
[29000] cv_agg's l1: 0.641639 + 0.000715303
[29500] cv_agg's l1: 0.641148 + 0.000698745
[30000] cv_agg's l1: 0.64068 + 0.00070227
       l1-mean  l1-stdv
29995  0.64068  0.00070
29996  0.64068  0.00070
29997  0.64068  0.00070
29998  0.64068  0.00070
29999  0.64068  0.00070
finish fitting.
best_iteration: 30000
create oof preds.
[      1       3       5 ... 3527319 3527330 3527331]
[      7       8      18 ... 3527322 3527325 3527326]
[      9      20      26 ... 3527316 3527317 3527321]
[      0      15      17 ... 3527301 3527306 3527323]
[      2       4      10 ... 3527327 3527328 3527329]
mae on oof preds: 0.6406804847814713
finish.
start loading...
train loaded.
test loaded.
train: (4658147, 448), test: (2505542, 448)
(4658147, 448) (2505542, 448) (4658147,)
mol_type: 4
groups: (85000,)
start fitting...
[500]   cv_agg's l1: 0.997279 + 0.00200099
[1000]  cv_agg's l1: 0.911374 + 0.00143558
[1500]  cv_agg's l1: 0.86418 + 0.00141688
[2000]  cv_agg's l1: 0.832591 + 0.00107039
[2500]  cv_agg's l1: 0.808831 + 0.00124083
[3000]  cv_agg's l1: 0.790362 + 0.00107279
[3500]  cv_agg's l1: 0.775192 + 0.000940945
[4000]  cv_agg's l1: 0.762836 + 0.0010036
[4500]  cv_agg's l1: 0.752053 + 0.00107308
[5000]  cv_agg's l1: 0.742647 + 0.00115828
[5500]  cv_agg's l1: 0.734442 + 0.00116912
[6000]  cv_agg's l1: 0.727326 + 0.00119985
[6500]  cv_agg's l1: 0.720971 + 0.00105774
[7000]  cv_agg's l1: 0.71537 + 0.00102799
[7500]  cv_agg's l1: 0.710202 + 0.0010425
[8000]  cv_agg's l1: 0.705512 + 0.0010025
[8500]  cv_agg's l1: 0.701361 + 0.000952066
[9000]  cv_agg's l1: 0.697482 + 0.000928573
[9500]  cv_agg's l1: 0.69388 + 0.000899352
[10000] cv_agg's l1: 0.69045 + 0.000810715
[10500] cv_agg's l1: 0.68746 + 0.000844282
[11000] cv_agg's l1: 0.68459 + 0.000847853
[11500] cv_agg's l1: 0.681959 + 0.000838167
[12000] cv_agg's l1: 0.679524 + 0.000775544
[12500] cv_agg's l1: 0.677218 + 0.000819342
[13000] cv_agg's l1: 0.675013 + 0.000853351
[13500] cv_agg's l1: 0.67294 + 0.000880949
[14000] cv_agg's l1: 0.671015 + 0.000862298
[14500] cv_agg's l1: 0.669221 + 0.0008403
[15000] cv_agg's l1: 0.667553 + 0.000830839
[15500] cv_agg's l1: 0.665977 + 0.000845721
[16000] cv_agg's l1: 0.664452 + 0.000818831
[16500] cv_agg's l1: 0.66303 + 0.000842535
[17000] cv_agg's l1: 0.661618 + 0.000823446
[17500] cv_agg's l1: 0.660315 + 0.000779724
[18000] cv_agg's l1: 0.659079 + 0.00076945
[18500] cv_agg's l1: 0.657896 + 0.000746919
[19000] cv_agg's l1: 0.65677 + 0.000740951
[19500] cv_agg's l1: 0.655715 + 0.000725016
[20000] cv_agg's l1: 0.654679 + 0.000723362
[20500] cv_agg's l1: 0.653691 + 0.00070587
[21000] cv_agg's l1: 0.652748 + 0.000701157
[21500] cv_agg's l1: 0.65183 + 0.000706755
[22000] cv_agg's l1: 0.650966 + 0.000715266
[22500] cv_agg's l1: 0.650132 + 0.000744872
[23000] cv_agg's l1: 0.649359 + 0.000743584
[23500] cv_agg's l1: 0.648606 + 0.000735655
[24000] cv_agg's l1: 0.647864 + 0.000745481
[24500] cv_agg's l1: 0.647168 + 0.000741688
[25000] cv_agg's l1: 0.646497 + 0.000740463
[25500] cv_agg's l1: 0.645826 + 0.000748106
[26000] cv_agg's l1: 0.645202 + 0.000746778
[26500] cv_agg's l1: 0.644614 + 0.000732367
[27000] cv_agg's l1: 0.64403 + 0.00074198
[27500] cv_agg's l1: 0.64347 + 0.000748528
[28000] cv_agg's l1: 0.642951 + 0.000745074
[28500] cv_agg's l1: 0.642432 + 0.000740615
[29000] cv_agg's l1: 0.641927 + 0.000747437
[29500] cv_agg's l1: 0.641438 + 0.000766494
[30000] cv_agg's l1: 0.640971 + 0.000775173
       l1-mean  l1-stdv
29995  0.64097  0.00078
29996  0.64097  0.00078
29997  0.64097  0.00078
29998  0.64097  0.00078
29999  0.64097  0.00078
finish fitting.
best_iteration: 30000
create oof preds.
[      1       3       5 ... 3527319 3527330 3527331]
[      7       8      18 ... 3527322 3527325 3527326]
[      9      20      26 ... 3527316 3527317 3527321]
[      0      15      17 ... 3527301 3527306 3527323]
[      2       4      10 ... 3527327 3527328 3527329]
mae on oof preds: 0.6409712700353298
finish.
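On how the per-type oof MAEs above relate to LB values like -0.569 in the comment header: assuming the standard scoring for this competition (the log of the MAE computed per coupling type, averaged over types), any type whose MAE falls below 1.0 contributes a negative term to the score. A small sketch of that scoring function, with a hypothetical helper name and toy data:

```python
import numpy as np


def group_log_mae(y_true, y_pred, types, floor=1e-9):
    """Assumed competition-style score: mean over coupling types of log(MAE_t)."""
    maes = []
    for t in np.unique(types):
        m = types == t
        maes.append(np.mean(np.abs(y_true[m] - y_pred[m])))
    return float(np.mean(np.log(np.maximum(maes, floor))))


# Toy illustration with hypothetical values (not the competition data):
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.8, 3.2, 3.9])
types = np.array(['1JHC', '1JHC', '3JHH', '3JHH'])
print(group_log_mae(y_true, y_pred, types))   # negative when per-type MAE < 1
```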
matsuken92 commented 5 years ago

kernel

Training of type 0
Fold 1 started at Sun Jun  9 14:08:18 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 1.3439   valid_1's l1: 1.56969
[1000]  training's l1: 1.1373   valid_1's l1: 1.5436
[1500]  training's l1: 0.972497 valid_1's l1: 1.52225
[2000]  training's l1: 0.840186 valid_1's l1: 1.50481
[2500]  training's l1: 0.731405 valid_1's l1: 1.49131
[3000]  training's l1: 0.639926 valid_1's l1: 1.48156
[3500]  training's l1: 0.563047 valid_1's l1: 1.47215
[4000]  training's l1: 0.497852 valid_1's l1: 1.4652
[4500]  training's l1: 0.442307 valid_1's l1: 1.45987
[5000]  training's l1: 0.393986 valid_1's l1: 1.45527
[5500]  training's l1: 0.351996 valid_1's l1: 1.45104
[6000]  training's l1: 0.315039 valid_1's l1: 1.44829
[6500]  training's l1: 0.282614 valid_1's l1: 1.44573
[7000]  training's l1: 0.254175 valid_1's l1: 1.44337
[7500]  training's l1: 0.228931 valid_1's l1: 1.44134
[8000]  training's l1: 0.206604 valid_1's l1: 1.43969
[8500]  training's l1: 0.186769 valid_1's l1: 1.43834
[9000]  training's l1: 0.169012 valid_1's l1: 1.43705
[9500]  training's l1: 0.153214 valid_1's l1: 1.43579
[10000] training's l1: 0.138953 valid_1's l1: 1.43497
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.138953 valid_1's l1: 1.43497
Fold 2 started at Sun Jun  9 14:13:36 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 1.34783  valid_1's l1: 1.56899
[1000]  training's l1: 1.13826  valid_1's l1: 1.54313
[1500]  training's l1: 0.973116 valid_1's l1: 1.52435
[2000]  training's l1: 0.839804 valid_1's l1: 1.50788
[2500]  training's l1: 0.730667 valid_1's l1: 1.4949
[3000]  training's l1: 0.639073 valid_1's l1: 1.48331
[3500]  training's l1: 0.561765 valid_1's l1: 1.47466
[4000]  training's l1: 0.496963 valid_1's l1: 1.46806
[4500]  training's l1: 0.441167 valid_1's l1: 1.46258
[5000]  training's l1: 0.392971 valid_1's l1: 1.45787
[5500]  training's l1: 0.350912 valid_1's l1: 1.4537
[6000]  training's l1: 0.314099 valid_1's l1: 1.45061
[6500]  training's l1: 0.281897 valid_1's l1: 1.44774
[7000]  training's l1: 0.253506 valid_1's l1: 1.44561
[7500]  training's l1: 0.228476 valid_1's l1: 1.44366
[8000]  training's l1: 0.206226 valid_1's l1: 1.44226
[8500]  training's l1: 0.186411 valid_1's l1: 1.44076
[9000]  training's l1: 0.168668 valid_1's l1: 1.43964
[9500]  training's l1: 0.152898 valid_1's l1: 1.4386
[10000] training's l1: 0.138754 valid_1's l1: 1.43781
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.138754 valid_1's l1: 1.43781
Fold 3 started at Sun Jun  9 14:19:06 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 1.33523  valid_1's l1: 1.57543
[1000]  training's l1: 1.13008  valid_1's l1: 1.55084
[1500]  training's l1: 0.966156 valid_1's l1: 1.52938
[2000]  training's l1: 0.833167 valid_1's l1: 1.51276
[2500]  training's l1: 0.724919 valid_1's l1: 1.49787
[3000]  training's l1: 0.634408 valid_1's l1: 1.48671
[3500]  training's l1: 0.558389 valid_1's l1: 1.47835
[4000]  training's l1: 0.493709 valid_1's l1: 1.47075
[4500]  training's l1: 0.438317 valid_1's l1: 1.46572
[5000]  training's l1: 0.390185 valid_1's l1: 1.46074
[5500]  training's l1: 0.348544 valid_1's l1: 1.45705
[6000]  training's l1: 0.312298 valid_1's l1: 1.45445
[6500]  training's l1: 0.280644 valid_1's l1: 1.45147
[7000]  training's l1: 0.252295 valid_1's l1: 1.44902
[7500]  training's l1: 0.227406 valid_1's l1: 1.44728
[8000]  training's l1: 0.205352 valid_1's l1: 1.44564
[8500]  training's l1: 0.185655 valid_1's l1: 1.44452
[9000]  training's l1: 0.167971 valid_1's l1: 1.44347
[9500]  training's l1: 0.152356 valid_1's l1: 1.44235
[10000] training's l1: 0.138266 valid_1's l1: 1.44144
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.138266 valid_1's l1: 1.44144
Fold 4 started at Sun Jun  9 14:24:41 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 1.34405  valid_1's l1: 1.57463
[1000]  training's l1: 1.13906  valid_1's l1: 1.54912
[1500]  training's l1: 0.974332 valid_1's l1: 1.52754
[2000]  training's l1: 0.840944 valid_1's l1: 1.51142
[2500]  training's l1: 0.731382 valid_1's l1: 1.49721
[3000]  training's l1: 0.639957 valid_1's l1: 1.48553
[3500]  training's l1: 0.563021 valid_1's l1: 1.47715
[4000]  training's l1: 0.497378 valid_1's l1: 1.47018
[4500]  training's l1: 0.441891 valid_1's l1: 1.46431
[5000]  training's l1: 0.392983 valid_1's l1: 1.45937
[5500]  training's l1: 0.350791 valid_1's l1: 1.45553
[6000]  training's l1: 0.314382 valid_1's l1: 1.45266
[6500]  training's l1: 0.282156 valid_1's l1: 1.45017
[7000]  training's l1: 0.253716 valid_1's l1: 1.44762
[7500]  training's l1: 0.228517 valid_1's l1: 1.44585
[8000]  training's l1: 0.206365 valid_1's l1: 1.44436
[8500]  training's l1: 0.186544 valid_1's l1: 1.44311
[9000]  training's l1: 0.16875  valid_1's l1: 1.442
[9500]  training's l1: 0.152938 valid_1's l1: 1.44094
[10000] training's l1: 0.138899 valid_1's l1: 1.44016
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.138899 valid_1's l1: 1.44016
Fold 5 started at Sun Jun  9 14:30:10 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 1.3404   valid_1's l1: 1.57346
[1000]  training's l1: 1.13382  valid_1's l1: 1.54773
[1500]  training's l1: 0.969657 valid_1's l1: 1.52816
[2000]  training's l1: 0.837582 valid_1's l1: 1.51103
[2500]  training's l1: 0.728772 valid_1's l1: 1.49732
[3000]  training's l1: 0.637803 valid_1's l1: 1.48569
[3500]  training's l1: 0.56128  valid_1's l1: 1.47659
[4000]  training's l1: 0.495872 valid_1's l1: 1.46914
[4500]  training's l1: 0.440328 valid_1's l1: 1.46329
[5000]  training's l1: 0.391838 valid_1's l1: 1.45893
[5500]  training's l1: 0.350524 valid_1's l1: 1.4552
[6000]  training's l1: 0.314045 valid_1's l1: 1.45215
[6500]  training's l1: 0.282028 valid_1's l1: 1.44964
[7000]  training's l1: 0.253741 valid_1's l1: 1.44717
[7500]  training's l1: 0.22857  valid_1's l1: 1.44503
[8000]  training's l1: 0.206321 valid_1's l1: 1.44338
[8500]  training's l1: 0.186369 valid_1's l1: 1.44194
[9000]  training's l1: 0.168579 valid_1's l1: 1.44064
[9500]  training's l1: 0.152879 valid_1's l1: 1.43961
[10000] training's l1: 0.138527 valid_1's l1: 1.43887
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.138527 valid_1's l1: 1.43887
CV mean score: 0.3637, std: 0.0015.
Training of type 3
Fold 1 started at Sun Jun  9 14:35:19 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.265277 valid_1's l1: 0.351694
[1000]  training's l1: 0.191613 valid_1's l1: 0.328247
[1500]  training's l1: 0.147581 valid_1's l1: 0.318357
[2000]  training's l1: 0.117255 valid_1's l1: 0.31304
[2500]  training's l1: 0.094884 valid_1's l1: 0.309855
[3000]  training's l1: 0.0777627    valid_1's l1: 0.307824
[3500]  training's l1: 0.0642895    valid_1's l1: 0.306627
[4000]  training's l1: 0.0534611    valid_1's l1: 0.305747
[4500]  training's l1: 0.0447359    valid_1's l1: 0.305117
[5000]  training's l1: 0.0377019    valid_1's l1: 0.304589
[5500]  training's l1: 0.0318185    valid_1's l1: 0.304213
[6000]  training's l1: 0.0269719    valid_1's l1: 0.303917
[6500]  training's l1: 0.0229437    valid_1's l1: 0.303757
[7000]  training's l1: 0.0195845    valid_1's l1: 0.303595
[7500]  training's l1: 0.0168002    valid_1's l1: 0.303449
[8000]  training's l1: 0.0144635    valid_1's l1: 0.303352
[8500]  training's l1: 0.0124916    valid_1's l1: 0.303261
[9000]  training's l1: 0.0108554    valid_1's l1: 0.303184
[9500]  training's l1: 0.00945344   valid_1's l1: 0.303113
[10000] training's l1: 0.00828106   valid_1's l1: 0.303066
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00828106   valid_1's l1: 0.303066
Fold 2 started at Sun Jun  9 14:39:39 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.264385 valid_1's l1: 0.356248
[1000]  training's l1: 0.191845 valid_1's l1: 0.332462
[1500]  training's l1: 0.146798 valid_1's l1: 0.321495
[2000]  training's l1: 0.116825 valid_1's l1: 0.316415
[2500]  training's l1: 0.0944691    valid_1's l1: 0.313221
[3000]  training's l1: 0.0775989    valid_1's l1: 0.311207
[3500]  training's l1: 0.0642915    valid_1's l1: 0.309887
[4000]  training's l1: 0.0535053    valid_1's l1: 0.30893
[4500]  training's l1: 0.0448155    valid_1's l1: 0.308275
[5000]  training's l1: 0.0377257    valid_1's l1: 0.307806
[5500]  training's l1: 0.0319015    valid_1's l1: 0.307427
[6000]  training's l1: 0.0270154    valid_1's l1: 0.307131
[6500]  training's l1: 0.0229938    valid_1's l1: 0.306911
[7000]  training's l1: 0.0196614    valid_1's l1: 0.306722
[7500]  training's l1: 0.0168808    valid_1's l1: 0.306595
[8000]  training's l1: 0.014508 valid_1's l1: 0.306489
[8500]  training's l1: 0.0125456    valid_1's l1: 0.306391
[9000]  training's l1: 0.0109062    valid_1's l1: 0.306319
[9500]  training's l1: 0.00949863   valid_1's l1: 0.306268
[10000] training's l1: 0.00834099   valid_1's l1: 0.306209
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00834099   valid_1's l1: 0.306209
Fold 3 started at Sun Jun  9 14:44:08 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.265349 valid_1's l1: 0.357548
[1000]  training's l1: 0.191935 valid_1's l1: 0.333481
[1500]  training's l1: 0.147823 valid_1's l1: 0.323223
[2000]  training's l1: 0.117574 valid_1's l1: 0.317488
[2500]  training's l1: 0.0950112    valid_1's l1: 0.314564
[3000]  training's l1: 0.0777868    valid_1's l1: 0.312452
[3500]  training's l1: 0.0644285    valid_1's l1: 0.311249
[4000]  training's l1: 0.0535317    valid_1's l1: 0.31042
[4500]  training's l1: 0.0446947    valid_1's l1: 0.309676
[5000]  training's l1: 0.0376214    valid_1's l1: 0.309223
[5500]  training's l1: 0.0318099    valid_1's l1: 0.308827
[6000]  training's l1: 0.0269303    valid_1's l1: 0.308552
[6500]  training's l1: 0.0229289    valid_1's l1: 0.308326
[7000]  training's l1: 0.0196071    valid_1's l1: 0.308132
[7500]  training's l1: 0.0168226    valid_1's l1: 0.308017
[8000]  training's l1: 0.0145043    valid_1's l1: 0.307919
[8500]  training's l1: 0.0125255    valid_1's l1: 0.307837
[9000]  training's l1: 0.0108722    valid_1's l1: 0.307766
[9500]  training's l1: 0.00948125   valid_1's l1: 0.307696
[10000] training's l1: 0.00832494   valid_1's l1: 0.307659
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00832494   valid_1's l1: 0.307659
Fold 4 started at Sun Jun  9 14:48:38 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.262249 valid_1's l1: 0.353264
[1000]  training's l1: 0.190837 valid_1's l1: 0.331084
[1500]  training's l1: 0.148194 valid_1's l1: 0.321552
[2000]  training's l1: 0.117933 valid_1's l1: 0.31624
[2500]  training's l1: 0.0958392    valid_1's l1: 0.313174
[3000]  training's l1: 0.0784343    valid_1's l1: 0.311075
[3500]  training's l1: 0.0649786    valid_1's l1: 0.309817
[4000]  training's l1: 0.0542034    valid_1's l1: 0.308954
[4500]  training's l1: 0.0453255    valid_1's l1: 0.308295
[5000]  training's l1: 0.038125 valid_1's l1: 0.307729
[5500]  training's l1: 0.0321728    valid_1's l1: 0.307411
[6000]  training's l1: 0.0272527    valid_1's l1: 0.307104
[6500]  training's l1: 0.0232145    valid_1's l1: 0.306897
[7000]  training's l1: 0.0198217    valid_1's l1: 0.306733
[7500]  training's l1: 0.0170046    valid_1's l1: 0.306614
[8000]  training's l1: 0.014647 valid_1's l1: 0.306506
[8500]  training's l1: 0.0126605    valid_1's l1: 0.306416
[9000]  training's l1: 0.0109822    valid_1's l1: 0.306346
[9500]  training's l1: 0.00957009   valid_1's l1: 0.306297
[10000] training's l1: 0.00838388   valid_1's l1: 0.306253
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00838388   valid_1's l1: 0.306253
Fold 5 started at Sun Jun  9 14:52:57 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.263565 valid_1's l1: 0.352736
[1000]  training's l1: 0.190135 valid_1's l1: 0.330271
[6500]  training's l1: 0.0230051    valid_1's l1: 0.306011
[7000]  training's l1: 0.0196654    valid_1's l1: 0.305825
[7500]  training's l1: 0.0168776    valid_1's l1: 0.305701
[8000]  training's l1: 0.0145291    valid_1's l1: 0.305576
[8500]  training's l1: 0.0125657    valid_1's l1: 0.305484
[9000]  training's l1: 0.0109113    valid_1's l1: 0.305431
[9500]  training's l1: 0.00951232   valid_1's l1: 0.30536
[10000] training's l1: 0.0083481    valid_1's l1: 0.305312
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0083481    valid_1's l1: 0.305312
CV mean score: -1.1852, std: 0.0050.
Training of type 1
Fold 1 started at Sun Jun  9 14:57:19 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.341366 valid_1's l1: 0.799903
[1000]  training's l1: 0.155461 valid_1's l1: 0.778639
[1500]  training's l1: 0.079761 valid_1's l1: 0.771014
[2000]  training's l1: 0.0413945    valid_1's l1: 0.768774
[2500]  training's l1: 0.0215605    valid_1's l1: 0.767671
[3000]  training's l1: 0.0111643    valid_1's l1: 0.76712
[3500]  training's l1: 0.00600824   valid_1's l1: 0.766864
[4000]  training's l1: 0.00365855   valid_1's l1: 0.766761
[4500]  training's l1: 0.00265354   valid_1's l1: 0.766713
[5000]  training's l1: 0.00217407   valid_1's l1: 0.766697
[5500]  training's l1: 0.00189112   valid_1's l1: 0.766686
[6000]  training's l1: 0.00170623   valid_1's l1: 0.766679
[6500]  training's l1: 0.00157466   valid_1's l1: 0.76667
[7000]  training's l1: 0.00147271   valid_1's l1: 0.766663
[7500]  training's l1: 0.00139254   valid_1's l1: 0.766657
[8000]  training's l1: 0.00132549   valid_1's l1: 0.766648
Early stopping, best iteration is:
[8139]  training's l1: 0.00130902   valid_1's l1: 0.766648
Fold 2 started at Sun Jun  9 14:58:44 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.344458 valid_1's l1: 0.801074
[1000]  training's l1: 0.160702 valid_1's l1: 0.775854
[1500]  training's l1: 0.0831951    valid_1's l1: 0.768139
[2000]  training's l1: 0.0433229    valid_1's l1: 0.7653
[2500]  training's l1: 0.0233246    valid_1's l1: 0.764044
[3000]  training's l1: 0.0121463    valid_1's l1: 0.76332
[3500]  training's l1: 0.00646588   valid_1's l1: 0.763001
[4000]  training's l1: 0.00389707   valid_1's l1: 0.762903
[4500]  training's l1: 0.00278159   valid_1's l1: 0.762873
[5000]  training's l1: 0.00223775   valid_1's l1: 0.762839
Early stopping, best iteration is:
[4941]  training's l1: 0.00228583   valid_1's l1: 0.762837
Fold 3 started at Sun Jun  9 14:59:44 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.331772 valid_1's l1: 0.815666
[1000]  training's l1: 0.154818 valid_1's l1: 0.789592
[1500]  training's l1: 0.0790914    valid_1's l1: 0.782418
[2000]  training's l1: 0.0416341    valid_1's l1: 0.779427
[2500]  training's l1: 0.0220298    valid_1's l1: 0.778276
[3000]  training's l1: 0.0116672    valid_1's l1: 0.777827
[3500]  training's l1: 0.00622383   valid_1's l1: 0.777574
[4000]  training's l1: 0.00380443   valid_1's l1: 0.777486
[4500]  training's l1: 0.00273466   valid_1's l1: 0.777436
[5000]  training's l1: 0.00221905   valid_1's l1: 0.777412
[5500]  training's l1: 0.00192379   valid_1's l1: 0.7774
[6000]  training's l1: 0.0017277    valid_1's l1: 0.777393
[6500]  training's l1: 0.00159389   valid_1's l1: 0.777387
Early stopping, best iteration is:
[6390]  training's l1: 0.00161964   valid_1's l1: 0.777385
Fold 4 started at Sun Jun  9 15:00:56 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.344384 valid_1's l1: 0.787171
[1000]  training's l1: 0.160795 valid_1's l1: 0.765228
[1500]  training's l1: 0.0826863    valid_1's l1: 0.758777
[2000]  training's l1: 0.0431186    valid_1's l1: 0.756866
[2500]  training's l1: 0.0225215    valid_1's l1: 0.756194
[3000]  training's l1: 0.0117368    valid_1's l1: 0.755765
[3500]  training's l1: 0.00620292   valid_1's l1: 0.755568
Early stopping, best iteration is:
[3761]  training's l1: 0.00467818   valid_1's l1: 0.755519
Fold 5 started at Sun Jun  9 15:01:41 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.337934 valid_1's l1: 0.815261
[1000]  training's l1: 0.160426 valid_1's l1: 0.788478
[1500]  training's l1: 0.0811551    valid_1's l1: 0.780543
[2000]  training's l1: 0.0424209    valid_1's l1: 0.777246
[2500]  training's l1: 0.0224583    valid_1's l1: 0.776193
[3000]  training's l1: 0.0115374    valid_1's l1: 0.775432
[3500]  training's l1: 0.00615344   valid_1's l1: 0.775165
[4000]  training's l1: 0.00374173   valid_1's l1: 0.775044
[4500]  training's l1: 0.00268872   valid_1's l1: 0.774996
[5000]  training's l1: 0.0021866    valid_1's l1: 0.774966
[5500]  training's l1: 0.00189534   valid_1's l1: 0.774951
[6000]  training's l1: 0.00170905   valid_1's l1: 0.774939
[6500]  training's l1: 0.00157441   valid_1's l1: 0.774933
[7000]  training's l1: 0.00147252   valid_1's l1: 0.774927
[7500]  training's l1: 0.00139234   valid_1's l1: 0.774921
Early stopping, best iteration is:
[7506]  training's l1: 0.00139158   valid_1's l1: 0.774921
CV mean score: -0.2647, std: 0.0104.
Training of type 4
Fold 1 started at Sun Jun  9 15:03:01 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.151242 valid_1's l1: 0.300783
[1000]  training's l1: 0.0852687    valid_1's l1: 0.288796
[1500]  training's l1: 0.0515278    valid_1's l1: 0.28476
[2000]  training's l1: 0.0317767    valid_1's l1: 0.282879
[2500]  training's l1: 0.0201597    valid_1's l1: 0.282115
[3000]  training's l1: 0.0131542    valid_1's l1: 0.281571
[3500]  training's l1: 0.00895598   valid_1's l1: 0.281342
[4000]  training's l1: 0.00632401   valid_1's l1: 0.28122
[4500]  training's l1: 0.00473522   valid_1's l1: 0.281162
[5000]  training's l1: 0.00368829   valid_1's l1: 0.281127
[5500]  training's l1: 0.00300074   valid_1's l1: 0.281099
[6000]  training's l1: 0.00252789   valid_1's l1: 0.281079
[6500]  training's l1: 0.00219436   valid_1's l1: 0.281069
[7000]  training's l1: 0.00195511   valid_1's l1: 0.281067
Early stopping, best iteration is:
[7217]  training's l1: 0.0018753    valid_1's l1: 0.281063
Fold 2 started at Sun Jun  9 15:04:50 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.152773 valid_1's l1: 0.304405
[1000]  training's l1: 0.0843447    valid_1's l1: 0.291371
[1500]  training's l1: 0.0508063    valid_1's l1: 0.287287
[2000]  training's l1: 0.0319118    valid_1's l1: 0.285384
[2500]  training's l1: 0.0205696    valid_1's l1: 0.284496
[3000]  training's l1: 0.013845 valid_1's l1: 0.284044
[3500]  training's l1: 0.00966789   valid_1's l1: 0.283781
[4000]  training's l1: 0.00709673   valid_1's l1: 0.283626
[4500]  training's l1: 0.00537684   valid_1's l1: 0.283542
[5000]  training's l1: 0.00422551   valid_1's l1: 0.283504
[5500]  training's l1: 0.00340777   valid_1's l1: 0.283461
[6000]  training's l1: 0.00276675   valid_1's l1: 0.283438
[6500]  training's l1: 0.00228159   valid_1's l1: 0.283413
[7000]  training's l1: 0.00193039   valid_1's l1: 0.2834
[7500]  training's l1: 0.0016799    valid_1's l1: 0.283391
[8000]  training's l1: 0.00150045   valid_1's l1: 0.283384
Early stopping, best iteration is:
[8280]  training's l1: 0.00142215   valid_1's l1: 0.283382
Fold 3 started at Sun Jun  9 15:06:59 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.154337 valid_1's l1: 0.308026
[1000]  training's l1: 0.0860633    valid_1's l1: 0.294146
[1500]  training's l1: 0.0512994    valid_1's l1: 0.289841
[2000]  training's l1: 0.0316748    valid_1's l1: 0.287882
[2500]  training's l1: 0.0200229    valid_1's l1: 0.286899
[3000]  training's l1: 0.0130242    valid_1's l1: 0.286379
[3500]  training's l1: 0.00870897   valid_1's l1: 0.286069
[4000]  training's l1: 0.00609015   valid_1's l1: 0.285952
[4500]  training's l1: 0.00446288   valid_1's l1: 0.285879
[5000]  training's l1: 0.00343552   valid_1's l1: 0.285826
[5500]  training's l1: 0.00276809   valid_1's l1: 0.285795
[6000]  training's l1: 0.00232135   valid_1's l1: 0.285768
[6500]  training's l1: 0.00201178   valid_1's l1: 0.285753
[7000]  training's l1: 0.00178833   valid_1's l1: 0.285743
[7500]  training's l1: 0.00161248   valid_1's l1: 0.285732
[8000]  training's l1: 0.00146624   valid_1's l1: 0.285725
[8500]  training's l1: 0.00134632   valid_1's l1: 0.285721
[9000]  training's l1: 0.00125379   valid_1's l1: 0.285716
[9500]  training's l1: 0.0011802    valid_1's l1: 0.285711
Early stopping, best iteration is:
[9546]  training's l1: 0.00117416   valid_1's l1: 0.285711
Fold 4 started at Sun Jun  9 15:09:25 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.150856 valid_1's l1: 0.30371
[1000]  training's l1: 0.0849529    valid_1's l1: 0.291771
[1500]  training's l1: 0.0511042    valid_1's l1: 0.288104
[2000]  training's l1: 0.0320314    valid_1's l1: 0.28632
[2500]  training's l1: 0.0201788    valid_1's l1: 0.285473
[3000]  training's l1: 0.013092 valid_1's l1: 0.285012
[3500]  training's l1: 0.00879607   valid_1's l1: 0.284789
[4000]  training's l1: 0.00613946   valid_1's l1: 0.284643
[4500]  training's l1: 0.00451216   valid_1's l1: 0.284562
[5000]  training's l1: 0.00346081   valid_1's l1: 0.284501
[5500]  training's l1: 0.0027652    valid_1's l1: 0.284461
[6000]  training's l1: 0.00229335   valid_1's l1: 0.284434
[6500]  training's l1: 0.0019615    valid_1's l1: 0.284418
[7000]  training's l1: 0.00172982   valid_1's l1: 0.284409
[7500]  training's l1: 0.00156409   valid_1's l1: 0.284402
[8000]  training's l1: 0.00143843   valid_1's l1: 0.284397
[8500]  training's l1: 0.00134042   valid_1's l1: 0.284392
[9000]  training's l1: 0.00125758   valid_1's l1: 0.284388
[9500]  training's l1: 0.00118822   valid_1's l1: 0.284385
[10000] training's l1: 0.00113232   valid_1's l1: 0.284382
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00113232   valid_1's l1: 0.284382
Fold 5 started at Sun Jun  9 15:11:53 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.150082 valid_1's l1: 0.301837
[1000]  training's l1: 0.0843511    valid_1's l1: 0.28955
[1500]  training's l1: 0.0501502    valid_1's l1: 0.285434
[2000]  training's l1: 0.0312653    valid_1's l1: 0.283404
[2500]  training's l1: 0.0198909    valid_1's l1: 0.282565
[3000]  training's l1: 0.0129005    valid_1's l1: 0.282156
[3500]  training's l1: 0.00869696   valid_1's l1: 0.281902
[4000]  training's l1: 0.00609355   valid_1's l1: 0.281789
[4500]  training's l1: 0.00449682   valid_1's l1: 0.281719
[5000]  training's l1: 0.00347284   valid_1's l1: 0.281682
[5500]  training's l1: 0.00279567   valid_1's l1: 0.28165
[6000]  training's l1: 0.00234338   valid_1's l1: 0.281635
[6500]  training's l1: 0.00203832   valid_1's l1: 0.281624
[7000]  training's l1: 0.0018407    valid_1's l1: 0.281616
[7500]  training's l1: 0.00169477   valid_1's l1: 0.28161
[8000]  training's l1: 0.00159331   valid_1's l1: 0.281603
Early stopping, best iteration is:
[8259]  training's l1: 0.00154885   valid_1's l1: 0.2816
CV mean score: -1.2615, std: 0.0061.
Training of type 2
Fold 1 started at Sun Jun  9 15:13:57 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.571978 valid_1's l1: 0.646475
[1000]  training's l1: 0.489103 valid_1's l1: 0.617464
[1500]  training's l1: 0.425156 valid_1's l1: 0.596712
[2000]  training's l1: 0.374942 valid_1's l1: 0.581956
[2500]  training's l1: 0.334355 valid_1's l1: 0.571859
[3000]  training's l1: 0.300255 valid_1's l1: 0.5635
[3500]  training's l1: 0.271349 valid_1's l1: 0.557052
[4000]  training's l1: 0.246266 valid_1's l1: 0.552097
[4500]  training's l1: 0.224551 valid_1's l1: 0.548002
[5000]  training's l1: 0.205407 valid_1's l1: 0.544646
[5500]  training's l1: 0.188408 valid_1's l1: 0.541945
[6000]  training's l1: 0.173206 valid_1's l1: 0.539509
[6500]  training's l1: 0.15962  valid_1's l1: 0.537503
[7000]  training's l1: 0.147273 valid_1's l1: 0.535748
[7500]  training's l1: 0.136143 valid_1's l1: 0.534304
[8000]  training's l1: 0.125956 valid_1's l1: 0.53293
[8500]  training's l1: 0.11683  valid_1's l1: 0.531812
[9000]  training's l1: 0.108445 valid_1's l1: 0.53091
[9500]  training's l1: 0.100866 valid_1's l1: 0.530121
[10000] training's l1: 0.093838 valid_1's l1: 0.529431
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.093838 valid_1's l1: 0.529431
Fold 2 started at Sun Jun  9 15:19:56 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.575205 valid_1's l1: 0.647561
[1000]  training's l1: 0.489786 valid_1's l1: 0.616982
[1500]  training's l1: 0.42548  valid_1's l1: 0.596419
[2000]  training's l1: 0.374957 valid_1's l1: 0.582183
[2500]  training's l1: 0.333994 valid_1's l1: 0.571211
[3000]  training's l1: 0.299905 valid_1's l1: 0.562979
[3500]  training's l1: 0.270879 valid_1's l1: 0.556384
[4000]  training's l1: 0.245965 valid_1's l1: 0.551374
[4500]  training's l1: 0.224261 valid_1's l1: 0.547296
[5000]  training's l1: 0.205045 valid_1's l1: 0.543929
[5500]  training's l1: 0.188211 valid_1's l1: 0.541291
[6000]  training's l1: 0.173134 valid_1's l1: 0.53902
[6500]  training's l1: 0.159426 valid_1's l1: 0.53716
[7000]  training's l1: 0.147198 valid_1's l1: 0.535459
[7500]  training's l1: 0.136107 valid_1's l1: 0.534111
[8000]  training's l1: 0.126059 valid_1's l1: 0.53288
[8500]  training's l1: 0.116847 valid_1's l1: 0.53182
[9000]  training's l1: 0.108439 valid_1's l1: 0.530922
[9500]  training's l1: 0.100719 valid_1's l1: 0.530092
[10000] training's l1: 0.0936075    valid_1's l1: 0.529335
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0936075    valid_1's l1: 0.529335
Fold 3 started at Sun Jun  9 15:26:07 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.572911 valid_1's l1: 0.649735
[1000]  training's l1: 0.488002 valid_1's l1: 0.618528
[1500]  training's l1: 0.423814 valid_1's l1: 0.597557
[2000]  training's l1: 0.373397 valid_1's l1: 0.58326
[2500]  training's l1: 0.333175 valid_1's l1: 0.572775
[3000]  training's l1: 0.29912  valid_1's l1: 0.564557
[3500]  training's l1: 0.270255 valid_1's l1: 0.55837
[4000]  training's l1: 0.245371 valid_1's l1: 0.553627
[7500]  training's l1: 0.13556  valid_1's l1: 0.536468
[8000]  training's l1: 0.125504 valid_1's l1: 0.535322
[8500]  training's l1: 0.116367 valid_1's l1: 0.53421
[9000]  training's l1: 0.107925 valid_1's l1: 0.533367
[9500]  training's l1: 0.10032  valid_1's l1: 0.532532
[10000] training's l1: 0.0932935    valid_1's l1: 0.531869
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0932935    valid_1's l1: 0.531869
Fold 4 started at Sun Jun  9 15:32:10 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.574097 valid_1's l1: 0.646401
[1000]  training's l1: 0.488854 valid_1's l1: 0.616321
[1500]  training's l1: 0.425925 valid_1's l1: 0.596303
[2000]  training's l1: 0.374932 valid_1's l1: 0.58061
[2500]  training's l1: 0.333752 valid_1's l1: 0.569886
[3000]  training's l1: 0.299567 valid_1's l1: 0.561439
[3500]  training's l1: 0.270781 valid_1's l1: 0.555164
[4000]  training's l1: 0.24586  valid_1's l1: 0.55032
[4500]  training's l1: 0.224343 valid_1's l1: 0.54656
[5000]  training's l1: 0.205054 valid_1's l1: 0.543192
[5500]  training's l1: 0.18808  valid_1's l1: 0.540471
[6000]  training's l1: 0.17287  valid_1's l1: 0.538139
[6500]  training's l1: 0.159327 valid_1's l1: 0.536349
[7000]  training's l1: 0.14712  valid_1's l1: 0.534807
[7500]  training's l1: 0.136032 valid_1's l1: 0.533456
[8000]  training's l1: 0.125951 valid_1's l1: 0.532366
[8500]  training's l1: 0.116717 valid_1's l1: 0.531241
[9000]  training's l1: 0.108326 valid_1's l1: 0.53027
[9500]  training's l1: 0.100604 valid_1's l1: 0.529468
[10000] training's l1: 0.0935628    valid_1's l1: 0.528733
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0935628    valid_1's l1: 0.528733
Fold 5 started at Sun Jun  9 15:38:17 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.574581 valid_1's l1: 0.648011
[3500]  training's l1: 0.270411 valid_1's l1: 0.557487
[4000]  training's l1: 0.245553 valid_1's l1: 0.552694
[4500]  training's l1: 0.223867 valid_1's l1: 0.548513
[5000]  training's l1: 0.204606 valid_1's l1: 0.545062
[5500]  training's l1: 0.187703 valid_1's l1: 0.542402
[6000]  training's l1: 0.172655 valid_1's l1: 0.540232
[6500]  training's l1: 0.1591   valid_1's l1: 0.538337
[7000]  training's l1: 0.146807 valid_1's l1: 0.536767
[7500]  training's l1: 0.135765 valid_1's l1: 0.535296
[8000]  training's l1: 0.125693 valid_1's l1: 0.534033
[8500]  training's l1: 0.116525 valid_1's l1: 0.532858
[9000]  training's l1: 0.108113 valid_1's l1: 0.532042
[9500]  training's l1: 0.100454 valid_1's l1: 0.531209
[10000] training's l1: 0.0934404    valid_1's l1: 0.530523
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0934404    valid_1's l1: 0.530523
CV mean score: -0.6349, std: 0.0021.
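Every type-2 fold ran to the 10000-round cap with "Did not meet early stopping", i.e. the validation MAE was still decreasing when the boosting budget ran out. For orientation, one fold of a run shaped like this could be launched roughly as below with the 2019-era LightGBM 2.x API (early_stopping_rounds / verbose_eval as keyword arguments; newer releases move these to callbacks). The 200-round patience and 500-round print cadence come from the log; the parameter dict and the synthetic data are placeholders, not the repo's values:

```python
import lightgbm as lgb
import numpy as np

# Synthetic stand-in for one fold of one coupling type (the real features live in the repo).
rng = np.random.default_rng(0)
X_trn, y_trn = rng.normal(size=(2000, 20)), rng.normal(size=2000)
X_val, y_val = rng.normal(size=(500, 20)), rng.normal(size=500)

train_set = lgb.Dataset(X_trn, label=y_trn)
valid_set = lgb.Dataset(X_val, label=y_val, reference=train_set)

# Objective/metric assumed from the "l1" lines in the log; other params omitted.
params = {"objective": "regression_l1", "metric": "mae", "verbosity": -1}

model = lgb.train(
    params,
    train_set,
    num_boost_round=10000,             # type-2 folds all hit this cap ("Did not meet early stopping")
    valid_sets=[train_set, valid_set],
    valid_names=["training", "valid_1"],
    early_stopping_rounds=200,         # "Training until validation scores don't improve for 200 rounds."
    verbose_eval=500,                  # matches the [500], [1000], ... print cadence
)
print("best iteration:", model.best_iteration)
```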
Training of type 6
Fold 1 started at Sun Jun  9 15:44:27 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.272604 valid_1's l1: 0.335733
[1000]  training's l1: 0.208609 valid_1's l1: 0.310256
[1500]  training's l1: 0.169099 valid_1's l1: 0.298352
[2000]  training's l1: 0.140806 valid_1's l1: 0.291833
[2500]  training's l1: 0.119191 valid_1's l1: 0.287486
[3000]  training's l1: 0.102257 valid_1's l1: 0.284611
[3500]  training's l1: 0.0884197    valid_1's l1: 0.282399
[4000]  training's l1: 0.0769072    valid_1's l1: 0.280805
[4500]  training's l1: 0.067241 valid_1's l1: 0.279598
[5000]  training's l1: 0.0590993    valid_1's l1: 0.278659
[5500]  training's l1: 0.0520869    valid_1's l1: 0.277971
[6000]  training's l1: 0.0460206    valid_1's l1: 0.277417
[6500]  training's l1: 0.0407814    valid_1's l1: 0.276964
[7000]  training's l1: 0.0362491    valid_1's l1: 0.276587
[7500]  training's l1: 0.0322904    valid_1's l1: 0.276301
[8000]  training's l1: 0.028832 valid_1's l1: 0.276054
[8500]  training's l1: 0.0258215    valid_1's l1: 0.275811
[9000]  training's l1: 0.0231609    valid_1's l1: 0.27562
[9500]  training's l1: 0.0208   valid_1's l1: 0.275453
[10000] training's l1: 0.0187389    valid_1's l1: 0.275311
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0187389    valid_1's l1: 0.275311
Fold 2 started at Sun Jun  9 15:49:31 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.272919 valid_1's l1: 0.335333
[1000]  training's l1: 0.209849 valid_1's l1: 0.311231
[1500]  training's l1: 0.16906  valid_1's l1: 0.298617
[2000]  training's l1: 0.14073  valid_1's l1: 0.291911
[2500]  training's l1: 0.119175 valid_1's l1: 0.287432
[3000]  training's l1: 0.102315 valid_1's l1: 0.284391
[3500]  training's l1: 0.0885131    valid_1's l1: 0.282269
[4000]  training's l1: 0.0770521    valid_1's l1: 0.280645
[4500]  training's l1: 0.0674056    valid_1's l1: 0.279517
[5000]  training's l1: 0.0591956    valid_1's l1: 0.278528
[5500]  training's l1: 0.0521862    valid_1's l1: 0.277831
[6000]  training's l1: 0.0461637    valid_1's l1: 0.277199
[6500]  training's l1: 0.0409597    valid_1's l1: 0.276714
[7000]  training's l1: 0.0363984    valid_1's l1: 0.27636
[7500]  training's l1: 0.0324491    valid_1's l1: 0.276044
[8000]  training's l1: 0.0289845    valid_1's l1: 0.27579
[8500]  training's l1: 0.0259329    valid_1's l1: 0.275566
[9000]  training's l1: 0.0232662    valid_1's l1: 0.275361
[9500]  training's l1: 0.0208958    valid_1's l1: 0.275193
[10000] training's l1: 0.0188112    valid_1's l1: 0.275048
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0188112    valid_1's l1: 0.275048
Fold 3 started at Sun Jun  9 15:54:33 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.27325  valid_1's l1: 0.337914
[1000]  training's l1: 0.208833 valid_1's l1: 0.312669
[1500]  training's l1: 0.168673 valid_1's l1: 0.300328
[2000]  training's l1: 0.14048  valid_1's l1: 0.293358
[2500]  training's l1: 0.118918 valid_1's l1: 0.289059
[3000]  training's l1: 0.101951 valid_1's l1: 0.286326
[3500]  training's l1: 0.0880917    valid_1's l1: 0.284371
[4000]  training's l1: 0.0765947    valid_1's l1: 0.282859
[4500]  training's l1: 0.0670081    valid_1's l1: 0.28182
[5000]  training's l1: 0.0588247    valid_1's l1: 0.280935
[5500]  training's l1: 0.0518093    valid_1's l1: 0.280281
[6000]  training's l1: 0.0457544    valid_1's l1: 0.279724
[6500]  training's l1: 0.0405779    valid_1's l1: 0.27928
[7000]  training's l1: 0.0360677    valid_1's l1: 0.278924
[7500]  training's l1: 0.0321185    valid_1's l1: 0.278568
[8000]  training's l1: 0.0286873    valid_1's l1: 0.278302
[8500]  training's l1: 0.0256971    valid_1's l1: 0.278106
[9000]  training's l1: 0.0230165    valid_1's l1: 0.277911
[9500]  training's l1: 0.0206946    valid_1's l1: 0.277741
[10000] training's l1: 0.0186353    valid_1's l1: 0.277597
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0186353    valid_1's l1: 0.277597
Fold 4 started at Sun Jun  9 15:59:33 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.271757 valid_1's l1: 0.334185
[1000]  training's l1: 0.207867 valid_1's l1: 0.308479
[1500]  training's l1: 0.168676 valid_1's l1: 0.296644
[2000]  training's l1: 0.140758 valid_1's l1: 0.290104
[2500]  training's l1: 0.119197 valid_1's l1: 0.285892
[3000]  training's l1: 0.102279 valid_1's l1: 0.283371
[3500]  training's l1: 0.0884278    valid_1's l1: 0.281304
[4000]  training's l1: 0.0769993    valid_1's l1: 0.279849
[4500]  training's l1: 0.06734  valid_1's l1: 0.278761
[5000]  training's l1: 0.0591526    valid_1's l1: 0.277845
[5500]  training's l1: 0.0521366    valid_1's l1: 0.277107
[6000]  training's l1: 0.0460583    valid_1's l1: 0.276559
[6500]  training's l1: 0.0408439    valid_1's l1: 0.276133
[7000]  training's l1: 0.036367 valid_1's l1: 0.275768
[7500]  training's l1: 0.0324444    valid_1's l1: 0.27551
[8000]  training's l1: 0.0289656    valid_1's l1: 0.275257
[8500]  training's l1: 0.025915 valid_1's l1: 0.27506
[9000]  training's l1: 0.0232508    valid_1's l1: 0.274882
[9500]  training's l1: 0.020905 valid_1's l1: 0.274715
[10000] training's l1: 0.0188335    valid_1's l1: 0.274592
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0188335    valid_1's l1: 0.274592
Fold 5 started at Sun Jun  9 16:04:48 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.273996 valid_1's l1: 0.337295
[1000]  training's l1: 0.209368 valid_1's l1: 0.310392
[1500]  training's l1: 0.169157 valid_1's l1: 0.297806
[2000]  training's l1: 0.140877 valid_1's l1: 0.291026
[2500]  training's l1: 0.119201 valid_1's l1: 0.28653
[3000]  training's l1: 0.102068 valid_1's l1: 0.2835
[3500]  training's l1: 0.0882259    valid_1's l1: 0.281535
[4000]  training's l1: 0.0768563    valid_1's l1: 0.280026
[4500]  training's l1: 0.0672083    valid_1's l1: 0.278969
[5000]  training's l1: 0.059037 valid_1's l1: 0.27805
[5500]  training's l1: 0.0520834    valid_1's l1: 0.277368
[6000]  training's l1: 0.0460075    valid_1's l1: 0.27681
[6500]  training's l1: 0.0408189    valid_1's l1: 0.276391
[7000]  training's l1: 0.0363151    valid_1's l1: 0.276028
[7500]  training's l1: 0.0323577    valid_1's l1: 0.275727
[8000]  training's l1: 0.0289115    valid_1's l1: 0.275515
[8500]  training's l1: 0.0258733    valid_1's l1: 0.275295
[9000]  training's l1: 0.0232107    valid_1's l1: 0.27511
[9500]  training's l1: 0.0208579    valid_1's l1: 0.274948
[10000] training's l1: 0.0187765    valid_1's l1: 0.274803
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.0187765    valid_1's l1: 0.274803
CV mean score: -1.2893, std: 0.0039.
Training of type 5
Fold 1 started at Sun Jun  9 16:10:27 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.572279 valid_1's l1: 0.623411
[1000]  training's l1: 0.50684  valid_1's l1: 0.600215
[1500]  training's l1: 0.456057 valid_1's l1: 0.584318
[2000]  training's l1: 0.412975 valid_1's l1: 0.571218
[2500]  training's l1: 0.376388 valid_1's l1: 0.560736
[3000]  training's l1: 0.345437 valid_1's l1: 0.553138
[3500]  training's l1: 0.318117 valid_1's l1: 0.546442
[4000]  training's l1: 0.293974 valid_1's l1: 0.541077
[4500]  training's l1: 0.272344 valid_1's l1: 0.536416
[5000]  training's l1: 0.253183 valid_1's l1: 0.532701
[5500]  training's l1: 0.235838 valid_1's l1: 0.529436
[6000]  training's l1: 0.219968 valid_1's l1: 0.526409
[6500]  training's l1: 0.205723 valid_1's l1: 0.523784
[7000]  training's l1: 0.192661 valid_1's l1: 0.521573
[7500]  training's l1: 0.180631 valid_1's l1: 0.519611
[8000]  training's l1: 0.169612 valid_1's l1: 0.517947
[8500]  training's l1: 0.159424 valid_1's l1: 0.516572
[9000]  training's l1: 0.150029 valid_1's l1: 0.515333
[9500]  training's l1: 0.141327 valid_1's l1: 0.514127
[10000] training's l1: 0.133189 valid_1's l1: 0.513038
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.133189 valid_1's l1: 0.513038
Fold 2 started at Sun Jun  9 16:17:54 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.56993  valid_1's l1: 0.6258
[1000]  training's l1: 0.505774 valid_1's l1: 0.60298
[1500]  training's l1: 0.454484 valid_1's l1: 0.586336
[2000]  training's l1: 0.412204 valid_1's l1: 0.573681
[2500]  training's l1: 0.376241 valid_1's l1: 0.563614
[3000]  training's l1: 0.345228 valid_1's l1: 0.555796
[3500]  training's l1: 0.317998 valid_1's l1: 0.549004
[4000]  training's l1: 0.29385  valid_1's l1: 0.543495
[4500]  training's l1: 0.272635 valid_1's l1: 0.539136
[5000]  training's l1: 0.253256 valid_1's l1: 0.535142
[5500]  training's l1: 0.235893 valid_1's l1: 0.531633
[6000]  training's l1: 0.220007 valid_1's l1: 0.528563
[6500]  training's l1: 0.205772 valid_1's l1: 0.526079
[7000]  training's l1: 0.192798 valid_1's l1: 0.524077
[7500]  training's l1: 0.18089  valid_1's l1: 0.522153
[8000]  training's l1: 0.169903 valid_1's l1: 0.52051
[8500]  training's l1: 0.159576 valid_1's l1: 0.518978
[9000]  training's l1: 0.150103 valid_1's l1: 0.517585
[9500]  training's l1: 0.141331 valid_1's l1: 0.516413
[10000] training's l1: 0.133253 valid_1's l1: 0.515384
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.133253 valid_1's l1: 0.515384
Fold 3 started at Sun Jun  9 16:25:06 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.572132 valid_1's l1: 0.623961
[1000]  training's l1: 0.506261 valid_1's l1: 0.599373
[1500]  training's l1: 0.455263 valid_1's l1: 0.583646
[2000]  training's l1: 0.41282  valid_1's l1: 0.570989
[2500]  training's l1: 0.376918 valid_1's l1: 0.561367
[3000]  training's l1: 0.345632 valid_1's l1: 0.553512
[3500]  training's l1: 0.318321 valid_1's l1: 0.54695
[4000]  training's l1: 0.294219 valid_1's l1: 0.541193
[4500]  training's l1: 0.272589 valid_1's l1: 0.536687
[5000]  training's l1: 0.253353 valid_1's l1: 0.533
[5500]  training's l1: 0.236025 valid_1's l1: 0.529738
[6000]  training's l1: 0.220237 valid_1's l1: 0.526708
[6500]  training's l1: 0.206008 valid_1's l1: 0.524276
[7000]  training's l1: 0.192992 valid_1's l1: 0.522152
[7500]  training's l1: 0.180941 valid_1's l1: 0.520204
[8000]  training's l1: 0.169986 valid_1's l1: 0.518582
[8500]  training's l1: 0.159757 valid_1's l1: 0.517089
[9000]  training's l1: 0.150306 valid_1's l1: 0.5158
[9500]  training's l1: 0.141508 valid_1's l1: 0.51465
[10000] training's l1: 0.133416 valid_1's l1: 0.513666
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.133416 valid_1's l1: 0.513666
Fold 4 started at Sun Jun  9 16:32:21 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.571257 valid_1's l1: 0.625351
[1000]  training's l1: 0.50722  valid_1's l1: 0.60228
[1500]  training's l1: 0.456439 valid_1's l1: 0.5864
[2000]  training's l1: 0.413434 valid_1's l1: 0.573482
[2500]  training's l1: 0.377714 valid_1's l1: 0.563713
[3000]  training's l1: 0.346203 valid_1's l1: 0.55543
[3500]  training's l1: 0.318724 valid_1's l1: 0.548667
[4000]  training's l1: 0.294474 valid_1's l1: 0.543077
[4500]  training's l1: 0.273101 valid_1's l1: 0.538631
[5000]  training's l1: 0.25391  valid_1's l1: 0.534668
[5500]  training's l1: 0.236341 valid_1's l1: 0.531149
[6000]  training's l1: 0.220459 valid_1's l1: 0.527885
[6500]  training's l1: 0.20612  valid_1's l1: 0.525387
[7000]  training's l1: 0.193027 valid_1's l1: 0.523323
[7500]  training's l1: 0.18086  valid_1's l1: 0.521148
[8000]  training's l1: 0.169822 valid_1's l1: 0.51941
[8500]  training's l1: 0.159577 valid_1's l1: 0.517814
[9000]  training's l1: 0.150167 valid_1's l1: 0.51645
[9500]  training's l1: 0.141446 valid_1's l1: 0.515243
[10000] training's l1: 0.133348 valid_1's l1: 0.514239
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.133348 valid_1's l1: 0.514239
Fold 5 started at Sun Jun  9 16:39:49 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.569731 valid_1's l1: 0.624839
[1000]  training's l1: 0.504484 valid_1's l1: 0.600953
[1500]  training's l1: 0.454807 valid_1's l1: 0.585883
[2000]  training's l1: 0.411912 valid_1's l1: 0.572864
[2500]  training's l1: 0.375648 valid_1's l1: 0.562229
[3000]  training's l1: 0.344627 valid_1's l1: 0.554405
[3500]  training's l1: 0.317459 valid_1's l1: 0.547916
[4000]  training's l1: 0.293207 valid_1's l1: 0.542262
[4500]  training's l1: 0.271676 valid_1's l1: 0.537481
[5000]  training's l1: 0.252329 valid_1's l1: 0.533647
[5500]  training's l1: 0.235001 valid_1's l1: 0.530227
[6000]  training's l1: 0.219637 valid_1's l1: 0.527499
[6500]  training's l1: 0.205344 valid_1's l1: 0.524854
[7000]  training's l1: 0.192327 valid_1's l1: 0.522768
[7500]  training's l1: 0.180345 valid_1's l1: 0.520894
[8000]  training's l1: 0.169351 valid_1's l1: 0.519247
[8500]  training's l1: 0.15921  valid_1's l1: 0.517846
[9000]  training's l1: 0.149783 valid_1's l1: 0.516482
[9500]  training's l1: 0.141163 valid_1's l1: 0.515402
[10000] training's l1: 0.132999 valid_1's l1: 0.514307
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.132999 valid_1's l1: 0.514307
CV mean score: -0.6653, std: 0.0015.
Training of type 7
Fold 1 started at Sun Jun  9 16:47:37 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.123994 valid_1's l1: 0.225977
[1000]  training's l1: 0.0733401    valid_1's l1: 0.215576
[1500]  training's l1: 0.0468084    valid_1's l1: 0.211943
[2000]  training's l1: 0.0312781    valid_1's l1: 0.210149
[2500]  training's l1: 0.0213837    valid_1's l1: 0.209178
[3000]  training's l1: 0.0148414    valid_1's l1: 0.208638
[3500]  training's l1: 0.0105434    valid_1's l1: 0.20832
[4000]  training's l1: 0.00768349   valid_1's l1: 0.208135
[4500]  training's l1: 0.00576789   valid_1's l1: 0.208026
[5000]  training's l1: 0.00446796   valid_1's l1: 0.207953
[5500]  training's l1: 0.00356161   valid_1's l1: 0.207915
[6000]  training's l1: 0.0029247    valid_1's l1: 0.207879
[6500]  training's l1: 0.00245745   valid_1's l1: 0.207858
[7000]  training's l1: 0.00210955   valid_1's l1: 0.207845
[7500]  training's l1: 0.0018423    valid_1's l1: 0.20783
[8000]  training's l1: 0.00163958   valid_1's l1: 0.207822
[8500]  training's l1: 0.00148319   valid_1's l1: 0.207814
[9000]  training's l1: 0.00135866   valid_1's l1: 0.207806
[9500]  training's l1: 0.00126061   valid_1's l1: 0.2078
[10000] training's l1: 0.00118232   valid_1's l1: 0.207797
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00118232   valid_1's l1: 0.207797
Fold 2 started at Sun Jun  9 16:50:40 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.123543 valid_1's l1: 0.22668
[1000]  training's l1: 0.0736621    valid_1's l1: 0.21667
[1500]  training's l1: 0.0475744    valid_1's l1: 0.212818
[2000]  training's l1: 0.0318036    valid_1's l1: 0.21107
[2500]  training's l1: 0.0215693    valid_1's l1: 0.210031
[3000]  training's l1: 0.0149947    valid_1's l1: 0.20948
[3500]  training's l1: 0.0106804    valid_1's l1: 0.209198
[4000]  training's l1: 0.0077988    valid_1's l1: 0.209029
[4500]  training's l1: 0.00585371   valid_1's l1: 0.20891
[5000]  training's l1: 0.00454043   valid_1's l1: 0.208833
[5500]  training's l1: 0.00362181   valid_1's l1: 0.208778
[6000]  training's l1: 0.00296997   valid_1's l1: 0.20874
[6500]  training's l1: 0.00249202   valid_1's l1: 0.208712
[7000]  training's l1: 0.00213462   valid_1's l1: 0.208692
[7500]  training's l1: 0.00186423   valid_1's l1: 0.208679
[8000]  training's l1: 0.00165523   valid_1's l1: 0.208667
[8500]  training's l1: 0.0014942    valid_1's l1: 0.208656
[9000]  training's l1: 0.00137182   valid_1's l1: 0.20865
[9500]  training's l1: 0.00127759   valid_1's l1: 0.208646
[10000] training's l1: 0.00120099   valid_1's l1: 0.20864
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00120099   valid_1's l1: 0.20864
Fold 3 started at Sun Jun  9 16:53:41 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.124598 valid_1's l1: 0.227032
[1000]  training's l1: 0.0748491    valid_1's l1: 0.217432
[1500]  training's l1: 0.0479364    valid_1's l1: 0.213805
[2000]  training's l1: 0.0316434    valid_1's l1: 0.211921
[2500]  training's l1: 0.0214457    valid_1's l1: 0.211054
[3000]  training's l1: 0.0149283    valid_1's l1: 0.210484
[3500]  training's l1: 0.010593 valid_1's l1: 0.210156
[4000]  training's l1: 0.00772038   valid_1's l1: 0.20994
[4500]  training's l1: 0.00581195   valid_1's l1: 0.209824
[5000]  training's l1: 0.00450439   valid_1's l1: 0.209734
[5500]  training's l1: 0.0035953    valid_1's l1: 0.209686
[6000]  training's l1: 0.00294812   valid_1's l1: 0.209654
[6500]  training's l1: 0.00248525   valid_1's l1: 0.209628
[7000]  training's l1: 0.00213857   valid_1's l1: 0.209606
[7500]  training's l1: 0.0018726    valid_1's l1: 0.209591
[8000]  training's l1: 0.0016568    valid_1's l1: 0.20958
[8500]  training's l1: 0.00148937   valid_1's l1: 0.209573
[9000]  training's l1: 0.00136357   valid_1's l1: 0.209565
Early stopping, best iteration is:
[9059]  training's l1: 0.00134996   valid_1's l1: 0.209564
Fold 4 started at Sun Jun  9 16:56:27 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.12463  valid_1's l1: 0.227571
[1000]  training's l1: 0.0744217    valid_1's l1: 0.217385
[1500]  training's l1: 0.0476088    valid_1's l1: 0.213662
[2000]  training's l1: 0.0317728    valid_1's l1: 0.211881
[2500]  training's l1: 0.0216504    valid_1's l1: 0.210919
[3000]  training's l1: 0.0150455    valid_1's l1: 0.210378
[3500]  training's l1: 0.0106937    valid_1's l1: 0.210058
[4000]  training's l1: 0.00778756   valid_1's l1: 0.209863
[4500]  training's l1: 0.0058482    valid_1's l1: 0.209752
[5000]  training's l1: 0.00452591   valid_1's l1: 0.209672
[5500]  training's l1: 0.00361374   valid_1's l1: 0.209627
[6000]  training's l1: 0.00296727   valid_1's l1: 0.209587
[6500]  training's l1: 0.00250317   valid_1's l1: 0.209559
[7000]  training's l1: 0.00216172   valid_1's l1: 0.20954
[7500]  training's l1: 0.00189865   valid_1's l1: 0.209526
[8000]  training's l1: 0.00168313   valid_1's l1: 0.209514
[8500]  training's l1: 0.00151098   valid_1's l1: 0.209506
[9000]  training's l1: 0.00137927   valid_1's l1: 0.209497
[9500]  training's l1: 0.00127937   valid_1's l1: 0.20949
[10000] training's l1: 0.00119746   valid_1's l1: 0.209488
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00119746   valid_1's l1: 0.209488
Fold 5 started at Sun Jun  9 16:59:23 2019
Training until validation scores don't improve for 200 rounds.
[500]   training's l1: 0.122723 valid_1's l1: 0.227353
[1000]  training's l1: 0.0730889    valid_1's l1: 0.216988
[1500]  training's l1: 0.0468   valid_1's l1: 0.213145
[2000]  training's l1: 0.0311629    valid_1's l1: 0.211299
[2500]  training's l1: 0.0211083    valid_1's l1: 0.210422
[3000]  training's l1: 0.0146975    valid_1's l1: 0.20992
[3500]  training's l1: 0.0104378    valid_1's l1: 0.209637
[4000]  training's l1: 0.00762643   valid_1's l1: 0.209463
[4500]  training's l1: 0.00574444   valid_1's l1: 0.209347
[5000]  training's l1: 0.00445972   valid_1's l1: 0.209272
[5500]  training's l1: 0.00357331   valid_1's l1: 0.209219
[6000]  training's l1: 0.00293556   valid_1's l1: 0.209177
[6500]  training's l1: 0.00246712   valid_1's l1: 0.20915
[7000]  training's l1: 0.00211818   valid_1's l1: 0.209134
[7500]  training's l1: 0.00185029   valid_1's l1: 0.20912
[8000]  training's l1: 0.00164493   valid_1's l1: 0.209109
[8500]  training's l1: 0.00148587   valid_1's l1: 0.2091
[9000]  training's l1: 0.00136316   valid_1's l1: 0.209096
[9500]  training's l1: 0.0012658    valid_1's l1: 0.209091
[10000] training's l1: 0.00118891   valid_1's l1: 0.209087
Did not meet early stopping. Best iteration is:
[10000] training's l1: 0.00118891   valid_1's l1: 0.209087
CV mean score: -1.5658, std: 0.0031.
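Assuming a competition-style aggregation (an unweighted mean of the per-type log-MAE scores), the five types reported in this part of the log combine as in the sketch below; the remaining types appear earlier in the log and would enter the same way:

```python
import numpy as np

# Per-type CV scores printed above (mean log-MAE over folds); earlier types omitted.
type_scores = {4: -1.2615, 2: -0.6349, 6: -1.2893, 5: -0.6653, 7: -1.5658}

# Assumed aggregation: unweighted mean over coupling types.
overall = np.mean(list(type_scores.values()))
print(f"combined CV over these types: {overall:.4f}")
```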