Mostly a change to the nomenclature in the document & code. We now use "cost" for the optimization objective function, "loss" for the misfit term, and "regularization" for the rest.
This also fixes a small bug in the accelerated prox-gradient algorithm in how the next iterate is formed.
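For reference, a minimal sketch of a FISTA-style accelerated proximal gradient loop, using the cost/loss/regularization naming from this change. The function and variable names here are illustrative, not taken from the repo; the comment marks the step where the next iterate must be formed from the extrapolated point, which is the kind of bug described above.

```python
import numpy as np

def accelerated_prox_gradient(grad_loss, prox_reg, x0, step, n_iters=500):
    """Minimize cost(x) = loss(x) + regularization(x) (illustrative sketch).

    grad_loss : gradient of the smooth loss (misfit) term
    prox_reg  : proximal operator of the regularization term
    """
    x = x0.copy()
    y = x0.copy()  # extrapolated (momentum) point
    t = 1.0        # Nesterov momentum parameter
    for _ in range(n_iters):
        # Correct update: the gradient step is taken at the extrapolated
        # point y, and the prox of the result becomes the next iterate.
        x_next = prox_reg(y - step * grad_loss(y), step)
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Example: lasso, cost(x) = 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = A @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
lam = 0.1

grad_loss = lambda x: A.T @ (A @ x - b)
prox_reg = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of grad_loss

x_hat = accelerated_prox_gradient(grad_loss, prox_reg, np.zeros(5), step)
```

A common way to introduce the bug is to take the gradient step at `x` instead of `y` (which silently degrades the method to plain proximal gradient) or to apply the momentum update to the wrong pair of iterates.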
OK, this all looks good to me, and the Nesterov code indeed seems to correspond to the "Accelerated proximal gradient method" section beginning on p. 3-26 of these notes.