yanyachen / rBayesianOptimization

Bayesian Optimization of Hyperparameters

Make-over of printed output #31

Open · MichaelChirico opened this issue 6 years ago

Right now, when we optimize a decent number of parameters, the verbose output tends to overrun the allotted options('width'), leading to hard-to-ingest clutter like this:

elapsed = 401.33    Round = 1   learning_rate = 0.2420  l1 = 2.7456 l2 = 2.9535 colp = 0.7971   maxd = 3.0000   recent_n = 2.0000   Value = 0.9023 
elapsed = 541.09    Round = 2   learning_rate = 0.2142  l1 = 2.1505 l2 = 1.1346 colp = 0.9648   maxd = 6.0000   recent_n = 6.0000   Value = 0.9216 
elapsed = 440.59    Round = 3   learning_rate = 0.2231  l1 = 0.9104 l2 = 2.3175 colp = 0.5518   maxd = 4.0000   recent_n = 20.0000  Value = 0.9095 
elapsed = 491.44    Round = 4   learning_rate = 0.2762  l1 = 3.6231 l2 = 2.1818 colp = 0.2645   maxd = 5.0000   recent_n = 9.0000   Value = 0.9207 
elapsed = 391.57    Round = 5   learning_rate = 0.4966  l1 = 3.9548 l2 = 3.8570 colp = 0.3406   maxd = 3.0000   recent_n = 1.0000   Value = 0.8996 
elapsed = 605.43    Round = 6   learning_rate = 0.3980  l1 = 3.3862 l2 = 2.7964 colp = 0.3148   maxd = 7.0000   recent_n = 4.0000   Value = 0.9224 
elapsed = 491.11    Round = 7   learning_rate = 0.3946  l1 = 3.8995 l2 = 0.1118 colp = 0.9037   maxd = 5.0000   recent_n = 17.0000  Value = 0.9211 
elapsed = 395.37    Round = 8   learning_rate = 0.2047  l1 = 4.0554 l2 = 0.6690 colp = 0.4470   maxd = 3.0000   recent_n = 18.0000  Value = 0.8969 
elapsed = 658.94    Round = 9   learning_rate = 0.2818  l1 = 4.6576 l2 = 0.9454 colp = 0.3905   maxd = 8.0000   recent_n = 14.0000  Value = 0.9244 
elapsed = 653.89    Round = 10  learning_rate = 0.2750  l1 = 3.2765 l2 = 1.4478 colp = 0.0808   maxd = 8.0000   recent_n = 18.0000  Value = 0.9233
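
A stopgap, assuming the wrapping really is driven by getOption('width'), is just to widen it before running, e.g.:

# Stopgap: raise the print width so each round's output fits on one line.
# 200 is arbitrary; pick something at least as wide as the longest row.
old <- options(width = 200)
# ... run BayesianOptimization() ...
options(old)  # restore the previous setting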

That works around the overflow, but better, it seems to me, would be to eliminate all the duplicated information, like so:

elapse  Round  learning_rate      l1      l2    colp    maxd  recent_n   Value
401.33      1         0.2420  2.7456  2.9535  0.7971  3.0000    2.0000  0.9023 
541.09      2         0.2142  2.1505  1.1346  0.9648  6.0000    6.0000  0.9216 
440.59      3         0.2231  0.9104  2.3175  0.5518  4.0000   20.0000  0.9095 
491.44      4         0.2762  3.6231  2.1818  0.2645  5.0000    9.0000  0.9207 
391.57      5         0.4966  3.9548  3.8570  0.3406  3.0000    1.0000  0.8996 
605.43      6         0.3980  3.3862  2.7964  0.3148  7.0000    4.0000  0.9224 
491.11      7         0.3946  3.8995  0.1118  0.9037  5.0000   17.0000  0.9211 
395.37      8         0.2047  4.0554  0.6690  0.4470  3.0000   18.0000  0.8969 
658.94      9         0.2818  4.6576  0.9454  0.3905  8.0000   14.0000  0.9244 
653.89     10         0.2750  3.2765  1.4478  0.0808  8.0000   18.0000  0.9233
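
As a minimal sketch of how that could work (a hypothetical helper, not the package's current internals; for simplicity every column gets four decimals here, so the integer columns still carry their zeroes):

# Hypothetical helper: print the column names once, then one right-aligned
# row of values per round, with widths driven by the header names.
make_printer <- function(nms, digits = 4L) {
  widths <- pmax(nchar(nms), digits + 4L) + 2L
  header_printed <- FALSE
  function(vals) {
    if (!header_printed) {
      cat(paste0(formatC(nms, width = widths), collapse = ""), "\n")
      header_printed <<- TRUE
    }
    cat(paste0(formatC(vals, width = widths, digits = digits, format = "f"),
               collapse = ""), "\n")
  }
}

p <- make_printer(c("elapsed", "Round", "learning_rate", "l1", "Value"))
p(c(401.33, 1, 0.2420, 2.7456, 0.9023))
p(c(541.09, 2, 0.2142, 2.1505, 0.9216))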

Next, eliminate the spurious zeroes from parameters with known integer input:

elapse  Round  learning_rate      l1      l2    colp  maxd  recent_n   Value
401.33      1         0.2420  2.7456  2.9535  0.7971     3         2  0.9023 
541.09      2         0.2142  2.1505  1.1346  0.9648     6         6  0.9216 
440.59      3         0.2231  0.9104  2.3175  0.5518     4        20  0.9095 
491.44      4         0.2762  3.6231  2.1818  0.2645     5         9  0.9207 
391.57      5         0.4966  3.9548  3.8570  0.3406     3         1  0.8996 
605.43      6         0.3980  3.3862  2.7964  0.3148     7         4  0.9224 
491.11      7         0.3946  3.8995  0.1118  0.9037     5        17  0.9211 
395.37      8         0.2047  4.0554  0.6690  0.4470     3        18  0.8969 
658.94      9         0.2818  4.6576  0.9454  0.3905     8        14  0.9244 
653.89     10         0.2750  3.2765  1.4478  0.0808     8        18  0.9233
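
Since BayesianOptimization already knows which parameters are integer-valued (bounds supplied with the L suffix), the formatter could just branch on that. Another hypothetical sketch:

# Hypothetical cell formatter: integer-bounded parameters print as plain
# integers; everything else keeps four decimals.
format_cell <- function(x, width, is_integer) {
  if (is_integer) {
    formatC(as.integer(round(x)), width = width, format = "d")
  } else {
    formatC(x, width = width, digits = 4L, format = "f")
  }
}

format_cell(3.0000, width = 6L, is_integer = TRUE)    # "     3"
format_cell(0.2420, width = 8L, is_integer = FALSE)   # "  0.2420"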

By my count, this is about a 40% character reduction, in addition to being much more readable.

This is related to #25.