entropicalabs / openqaoa

Multi-backend SDK for quantum optimisation

Cost vs iterations plot #143

Open · raulconchello opened this issue 1 year ago

raulconchello commented 1 year ago

Description

Currently, when you call `q.results.plot_cost()`, you get a plot of cost vs the number of function evaluations, like the following:

[image: cost vs number of function evaluations]

It could be interesting to have another method that plots cost vs number of iterations, so as to see the cost at the end of each iteration of the classical optimizer.

Q-lds commented 1 year ago

How would we do this? We would need:

- a single cost value per iteration (for instance, an average over that iteration's function evaluations), and
- a way to know which function evaluations belong to which iteration of the optimizer.

The first part is easy (I think), but the second one ... I mean, that is optimiser dependent and it may not be trivial.

raulconchello commented 1 year ago

With @vishal-ph, we thought it could be nice to have this method while we were looking at a graph like the following:

[image: cost vs function evaluations for a run with `maxiter` = 20 (~250 evaluations)]

Here `maxiter` was 20, so only 20 iterations were performed, but 250 evaluations are displayed. That makes it rather difficult to see the actual evolution of the cost.

I don't think we need to average; we can just take the last evaluation of each step, which is the optimized cost if we stop the optimization at that iteration.
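A minimal sketch of that reduction, assuming we can record how many function evaluations each iteration used (the names `last_evaluation_per_iteration` and `evals_per_iteration` are hypothetical, not part of the OpenQAOA API):

```python
import numpy as np

def last_evaluation_per_iteration(evaluation_costs, evals_per_iteration):
    """Keep only the cost of the final function evaluation of each iteration."""
    # index of the last evaluation belonging to each iteration
    ends = np.cumsum(evals_per_iteration) - 1
    return [evaluation_costs[i] for i in ends]

# Toy usage: two iterations of three evaluations each
costs = [3.0, 2.5, 2.4, 2.2, 2.1, 1.9]
print(last_evaluation_per_iteration(costs, [3, 3]))  # [2.4, 1.9]
```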

For example, for SPSA:

```python
import numpy as np
from scipy.optimize import OptimizeResult

def SPSA(...):
    ...
    def grad_SPSA(params, c):
        # SPSA gradient estimate: one random +/-1 perturbation direction,
        # costing two function evaluations per call
        delta = 2*np.random.randint(0, 2, size=len(params)) - 1
        return np.real((fun(params + c*delta) - fun(params - c*delta))*delta/(2*c))
    ...
    while improved and not stop and niter < maxiter:
        improved = False
        niter += 1

        # gain sequences
        a = a/(A + niter + 1)**alpha
        c = c/(niter + 1)**gamma

        # gradient descent step, followed by a third function evaluation
        # to measure the cost at the new point
        testx = testx - a*grad_SPSA(testx, c)
        testy = np.real(fun(testx, *args))

        # convergence check: stop if the cost changed by less than tol
        if np.abs(besty - testy) < tol:
            improved = False
        else:
            besty = testy
            bestx = testx
            improved = True

        ...

    return OptimizeResult(fun=besty, x=bestx, ...)
```

You would plot the variable `besty` vs `niter`.
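As a minimal, self-contained sketch of that idea (not the OpenQAOA implementation; it drops the `tol`/`improved` bookkeeping and the extra arguments), the loop could record `besty` once per iteration, and the resulting plot would have one point per iteration:

```python
import matplotlib.pyplot as plt
import numpy as np

def spsa_with_history(fun, x0, maxiter=20, a=0.6, c=0.1, A=10, alpha=0.602, gamma=0.101):
    # Simplified SPSA that stores besty at the end of every iteration.
    bestx = np.asarray(x0, dtype=float)
    history = []
    for niter in range(1, maxiter + 1):
        a_k = a/(A + niter + 1)**alpha
        c_k = c/(niter + 1)**gamma
        delta = 2*np.random.randint(0, 2, size=len(bestx)) - 1
        grad = np.real((fun(bestx + c_k*delta) - fun(bestx - c_k*delta))*delta/(2*c_k))
        bestx = bestx - a_k*grad
        history.append(np.real(fun(bestx)))  # besty for this iteration
    return bestx, history

bestx, history = spsa_with_history(lambda x: np.sum(x**2), x0=[1.0, -2.0])
plt.plot(range(1, len(history) + 1), history)
plt.xlabel("niter")
plt.ylabel("besty")
plt.show()
```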