Hey @patwa67, thanks for submitting this.
What is the difference

gamma is the step size of the proximal-gradient operations on which the algorithm is based, while tau is the step size of the line search that is performed at every iteration along quasi-Newton-type directions.
In theory (unless there are bugs) these two checks should not fail for your problem at hand; in practice, this may happen when the iterates get very close to a critical point (such as a local minimum). So either (I) there is a bug, (II) these two checks should be made a little more robust, or (III) the stopping criterion should be made more robust.
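To make the roles concrete, here is a rough, self-contained toy sketch of where the two step sizes show up in a ZeroFPR-style iteration for minimizing f(x) + g(x). This is not the ProximalAlgorithms.jl implementation; the smooth term, the prox, and the decrease condition are all made up for illustration.

# Toy illustration only, NOT ProximalAlgorithms.jl code.
function toy_step(x; gamma = 0.5)
    f(z) = 0.5 * sum(abs2, z .- 1)                # toy smooth term
    grad_f(z) = z .- 1
    prox_g(y, g) = max.(y, 0.0)                   # toy prox: projection onto z .>= 0

    # gamma: step size of the proximal-gradient (forward-backward) step
    xbar = prox_g(x .- gamma .* grad_f(x), gamma)
    d = xbar .- x                                 # search direction (ZeroFPR uses a quasi-Newton one)

    # tau: step size of the backtracking line search along d
    tau = 1.0
    while f(x .+ tau .* d) > f(x) - 1e-4 * tau * sum(abs2, d) && tau > 1e-12
        tau /= 2                                  # a tau shrinking towards zero is what the tau check catches
    end
    return x .+ tau .* d, gamma, tau
end

toy_step(randn(5))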
and is there some way to get around the problem?
You can try switching to PANOC() as the solver and see whether that works better.
However, I'd be interested in debugging this as soon as I have time: can you share a complete but minimal working example that causes the issue? The snippet above is fine; if you could just make it complete by adding some dummy data (e.g. randomly generated) with which the issue occurs, that would be great!
Thanks again!
Edit: it looks like PANOC() is the default solver, so you can enable it by simply not specifying any solver at all.
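For instance, with the same kind of constrained least-squares problem as in the example below (dummy data and sizes made up here just to keep the snippet self-contained), switching solver only means changing, or dropping, the with clause:

using StructuredOptimization

A, b = randn(50, 200), randn(50)   # dummy data, for illustration only
x = Variable(200)

# explicit solver choice
@minimize ls(A*x - b) st norm(x, 0) <= 10 with PANOC()

# equivalent: rely on the default solver (PANOC) by not specifying any
@minimize ls(A*x - b) st norm(x, 0) <= 10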
Here's an example:
using StructuredOptimization, Random, Statistics
#Simulated data (n observations, p variables, tr_p1/tr_p2 true variables per response, sig noise level)
n, p, q, tr_p1, tr_p2, sig = 500, 50000, 2, 20, 10, 0.5
Random.seed!(1)
X = randn(n, p)
Y = randn(n, q)
B_true1 = vcat(randn(tr_p1), zeros(p - tr_p1))  # sparse true coefficients, response 1
B_true2 = vcat(randn(tr_p2), zeros(p - tr_p2))  # sparse true coefficients, response 2
y1 = X*B_true1 + sig*randn(n)
y2 = X*B_true2 + sig*randn(n)
Y[:,1] = y1
Y[:,2] = y2
Xtrain = X[1:400,:]
Xtest = X[401:500,:]
Ytrain = Y[1:400,:]
Ytest = Y[401:500,:]
function l0(k) # solve the l0-constrained least-squares problem for a given k
    B = Variable(size(X, 2), size(Y, 2))
    @minimize ls(Xtrain*B - Ytrain) st norm(B, 0) <= k with ZeroFPR() # solve problem
    Bhat = copy(~B)                  # ~B extracts the solution array from the Variable
    Ytestpred = Xtest*Bhat
    MSEtest = (0.5*norm(Ytestpred - Ytest, 2)^2)/(length(Ytest)*size(X, 2))
    return MSEtest, Bhat
end
# Function to optimize k: evaluate the test loss for k = 1, ..., 100
function k_loop(k, Xtest, Ytest)
    testlossk = zeros(100)       # test loss for each value of k
    fk = l0(k)
    testlossk[k] = fk[1]         # loss for the initial k passed in (k = 1 below)
    for iter = 2:100
        k = iter
        println("k = $k")
        fk = l0(k)
        lossfk = fk[1]
        println("loss = $lossfk")
        testlossk[iter] = lossfk # store the loss at index k
    end
    return testlossk
end
loopres = k_loop(1,Xtest,Ytest) #Run loop over k
@patwa67 I've tagged a new version of ProximalAlgorithms.jl; as soon as that gets merged, some of these warnings should be gone (at least the one in the last example you pasted will be)
I'm running ZeroFPR on a large data set:
and frequently get the following error messages:
and
What is the difference, and is there some way to get around the problem?