Open basnijholt opened 5 years ago
originally posted by Anton Akhmerov (@anton-akhmerov) at 2018-03-10T21:04:28.222Z on GitLab
Related: https://gitlab.kwant-project.org/kwant/kwant/merge_requests/213
Here is code that shows the performance difference when using adaptive with sparse diagonalization: adaptive's default loss function (learner1) vs. a custom loss function abs_min_loss (learner2). learner2 needs only 30 points to converge, while learner1 needs 1370. The plotting code is not included.
import warnings

import numpy as np
import scipy.sparse
import scipy.sparse.linalg as sla

import adaptive
import holoviews as hv  # used by the plotting code (not included)

adaptive.notebook_extension()
warnings.filterwarnings('ignore')


def y(a):
    # Two 4x4 Hamiltonians, combined linearly via the parameter a.
    H1 = np.matrix([[1.95, -0.64, 0, 0],
                    [-0.64, 0.1, 0, 0],
                    [0, 0, 0.71, -0.19],
                    [0, 0, -0.19, -0.12]])
    H2 = np.matrix([[1, -2*0.64, 0, 0],
                    [-2*0.64, 0.3, 0, 0],
                    [0, 0, 0.5*0.71, -0.3*0.19],
                    [0, 0, -0.3*0.19, -0.12]])
    Ha = a*H1 + (1-a)*H2
    # Double the system with a Kronecker product and make it sparse.
    Hb = np.kron(np.matrix([[1, 0], [0, -1]]), Ha)
    Hc = scipy.sparse.coo_matrix(Hb)
    # Sparse shift-invert diagonalization: the k=7 eigenvalues closest to sigma.
    E = sla.eigsh(Hc, k=7, sigma=-0*0.162, return_eigenvectors=False)
    return E


# Learner/runner with adaptive's default loss function.
learner1 = adaptive.Learner1D(y, bounds=(0, 3))
runner1 = adaptive.Runner(learner1, goal=lambda l: l.loss() < 0.05)


def abs_min_loss(xs, ys):
    # Apply the default loss to the smallest absolute eigenvalue at each point
    # instead of to the full vector of eigenvalues.
    from adaptive.learner.learner1D import default_loss
    ys = [np.abs(y).min() for y in ys]
    return default_loss(xs, ys)


# Learner/runner with the custom loss function.
learner2 = adaptive.Learner1D(y, bounds=(0, 3), loss_per_interval=abs_min_loss)
runner2 = adaptive.Runner(learner2, goal=lambda l: l.loss() < 0.05)
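To check the point counts quoted above, one can compare how many points each learner evaluated (a minimal sketch; it assumes the two runners above have finished):

# Number of evaluated points per learner (learner.data maps x -> y).
print("default loss:", len(learner1.data), "points")
print("abs_min_loss:", len(learner2.data), "points")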
(original issue on GitLab)
opened by Rafal Skolasinski (@r-j-skolasinski) at 2017-12-08T13:13:46.873Z
Typical (problematic) behaviour in such simulations can be mimicked with a simple function returning several eigenvalue-like branches; a sketch of what that could look like (with regular sampling) is given below.
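As a purely hypothetical illustration (the function g, the gap closing at x = 1.5, and the 0.1 branch spacing are assumptions, not the snippet from the original issue), such behaviour could be mimicked and regularly sampled like this:

import numpy as np
import holoviews as hv

def g(x):
    # Hypothetical stand-in: ten eigenvalue-like branches; the lowest one closes a gap at x = 1.5.
    return np.array([np.abs(x - 1.5) + 0.1 * i for i in range(10)])

xs = np.linspace(0, 3, 201)          # regular sampling of the parameter
ys = np.array([g(x) for x in xs])    # shape (201, 10): one column per branch
plot = hv.Overlay([hv.Curve((xs, ys[:, i])) for i in range(ys.shape[1])])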