kojionrzl opened 3 weeks ago
I also noticed that there is a similar problem for `ITensorTDVP.linsolve(...)`. I slightly modified the code of `test_linsolve.jl`:
```julia
using ITensorMPS
using LinearAlgebra

let
  N = 6
  sites = siteinds("S=1/2", N; conserve_qns=true)
  os = OpSum()
  for j in 1:(N - 1)
    os += 0.5, "S+", j, "S-", j + 1
    os += 0.5, "S-", j, "S+", j + 1
    os += "Sz", j, "Sz", j + 1
  end
  H = MPO(os, sites)
  neelstate = [isodd(n) ? "Up" : "Dn" for n in 1:N]
  x_c = random_mps(sites, neelstate; linkdims=2)
  e, x_c = dmrg(H, x_c; nsweeps=10, cutoff=1e-6, maxdim=20, outputlevel=0)
  # Compute `b = H * x_c`
  b = apply(H, x_c; cutoff=1e-8)
  # Starting guess
  x0 = x_c + 0.05 * random_mps(sites, neelstate; linkdims=2)
  nsweeps = 10
  cutoff = 1e-5
  maxdim = 20
  updater_kwargs = (; tol=1e-4, maxiter=20, krylovdim=30, ishermitian=true)
  x = linsolve(H, b, x0; nsweeps, cutoff, maxdim, updater_kwargs, outputlevel=1, write_when_maxdim_exceeds=10)
  @show norm(x - x_c)
end
```
The error it reports is:

```
ERROR: MethodError: no method matching disk(::ITensorTDVP.ReducedLinearProblem)
```

If one removes `write_when_maxdim_exceeds=10`, then it runs fine.
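For context, a `MethodError` like this means the write-to-disk entry point is a generic function with methods defined for some reduced-problem types but not the linear-solve one. The following is a hypothetical sketch (not the actual ITensorMPS/ITensorTDVP source; the type and method names here are illustrative) of how this dispatch gap produces exactly that error:

```julia
# Hypothetical sketch of the dispatch gap (illustrative names, not the real
# ITensorMPS internals): `disk` converts a solver's cached projected operator
# to a disk-backed form. If a method exists for the eigenproblem type used by
# DMRG but not for the linear-solve problem type, calling it fails with the
# same kind of MethodError seen above.

abstract type AbstractReducedProblem end
struct ReducedEigenProblem <: AbstractReducedProblem end
struct ReducedLinearProblem <: AbstractReducedProblem end

disk(::ReducedEigenProblem) = "disk-backed eigen problem"  # method exists

# No `disk(::ReducedLinearProblem)` method was ever defined, so this throws:
try
  disk(ReducedLinearProblem())
catch err
  println(typeof(err))  # MethodError
end
```

This is consistent with the observation that the error only appears once `write_when_maxdim_exceeds` triggers the write-to-disk code path.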
Thanks for the report, we'll have to look into that.
With our current design of write-to-disk, it is a bit of a game of whack-a-mole to get it working across these different solvers, since each has its own caching structure. The basic challenge is coming up with a good design for writing tensors that aren't needed until later in the calculation out to disk, and then reading them back only when they are needed, while having to rewrite a minimal amount of the solver code.
I have a new design in mind based on https://github.com/meggart/DiskArrays.jl, where we would wrap an ITensor around an AbstractDiskArray, so that the solver code doesn't even need to be aware of whether or not the tensor lives on disk. That would be a bit easier to implement once https://github.com/ITensor/ITensors.jl/issues/1250 is completed, though it would be possible now.
@emstoudenmire
@kmp5VT tagging you since wrapping an ITensor around an AbstractDiskArray is related to your work making it easier to wrap various GPU array types in an ITensor.
Here is a minimal example:
And here is the output I got; at sweep 3 it should start writing to disk:
The issue appears to be associated with `DiskProjMPO(...)` in `abstractprojmpo\diskprojmpo.jl`, but it's unclear to me how to proceed from there.