Can we use the idea of fitting a polynomial as a cheap alternative to CV? I.e., for different lambda & delta, fit a piecewise parabola through the gain values, with the pieces determined by the CPs that BinarySegmentation has found so far. The idea is that once there are no more CPs, the structure will no longer be piecewise convex, and the fits should be able to measure this. Also, we expect piecewise convexity to hold better for better values of lambda. To give an example:
n = 1000, delta = 0.1. BinarySegmentation (optimizer = section_search) is called on (1, 1000) and finds a CP at, say, 400. We then fit a parabola through the gain values on [1, 400] and on [400, 1000] and save the residuals. If lambda is chosen well, the parabolas will be good fits, with positive 2nd derivative, positive/negative 1st derivative, and value 0 at 1 resp. 1000.
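A minimal sketch of the per-segment fit, assuming `gains` is the 1-D array of gain values on (1, 1000) and the CP at 400 has already been found; `segment_residual` and `piecewise_parabola_loss` are hypothetical helpers, not part of BinarySegmentation's API:

```python
import numpy as np

def segment_residual(gains, start, stop):
    """Least-squares parabola fit to gains[start:stop]; returns the RSS and coefficients."""
    x = np.arange(start, stop)
    y = gains[start:stop]
    coeffs = np.polyfit(x, y, deg=2)              # quadratic least-squares fit
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    return rss, coeffs

def piecewise_parabola_loss(gains, change_points):
    """Sum of per-segment residuals, with segments split at the CPs found so far."""
    boundaries = [0] + sorted(change_points) + [len(gains)]
    return sum(
        segment_residual(gains, start, stop)[0]
        for start, stop in zip(boundaries[:-1], boundaries[1:])
    )

# For the example above: a single CP at 400 splits (1, 1000) into two segments.
# loss = piecewise_parabola_loss(gains, [400])
```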
Repeat this and get different losses for different values of lambda.
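A hypothetical selection loop over lambda, reusing `piecewise_parabola_loss` from the sketch above; `run_binary_segmentation(X, lam, delta)` is only a stand-in for whatever call produces the gain curve and the CPs for a given lambda, not an existing function:

```python
# Sweep lambda and keep the value whose piecewise-parabola fit is best.
losses = {}
for lam in (0.01, 0.05, 0.1, 0.5, 1.0):
    gains, change_points = run_binary_segmentation(X, lam, delta=0.1)  # stand-in
    losses[lam] = piecewise_parabola_loss(gains, change_points)

best_lambda = min(losses, key=losses.get)
```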