I assume (not having run your code) that the error comes from there not being a reshape for TensorMaps. reshape is a fine paradigm if you have a numerical array without symmetries. With symmetries, fusing all these spaces can be quite an involved operation, and it is not even clear whether this is always well defined if you change the cut across domain and codomain.
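If you do need to combine legs of a TensorMap, the usual trick is to contract with an isomorphism onto the fused space. A minimal sketch (the variable names are just for illustration):

using TensorKit

# a symmetric tensor with two codomain legs and one domain leg
V = Z2Space(0 => 1, 1 => 1)
t = TensorMap(randn, ComplexF64, V ⊗ V, V)
# the "reshape": compose with an isomorphism onto the fused space
F = isomorphism(fuse(V ⊗ V), V ⊗ V)
t_fused = F * t # now a map V → fuse(V ⊗ V)
# the same thing leg by leg, in case only some of the legs need to be combined
@tensor t_fused2[-1; -2] := F[-1, 1, 2] * t[1, 2, -2]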
The ED implementation does indeed exactly this: it uses an MPS structure to fuse all the physical spaces to the left and to the right of the middle site into a left/right virtual space for the middle tensor that is as big as (i.e. isomorphic to) the corresponding physical spaces. As such, the middle tensor lives in a Hilbert space that is equivalent to the full physical space, and diagonalising the effective Hamiltonian for that middle site is equivalent to exact diagonalisation of the full Hamiltonian.
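To see the counting explicitly (made-up variable names, just to illustrate the dimensions):

using TensorKit

P = ComplexSpace(2)  # local physical space
N, m = 6, 3          # chain length and middle site
Vl = fuse(P^(m - 1)) # left virtual space, isomorphic to the physical space of sites 1..m-1
Vr = fuse(P^(N - m)) # right virtual space, isomorphic to the physical space of sites m+1..N
dim(Vl ⊗ P ⊗ Vr) == dim(P)^N # true: the middle tensor spans the full Hilbert space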
I happen to have some implementation of this lying around for the Z2-symmetric case with periodic boundaries, but your case should be similar.
I did not run this to check that it still works, but I'll attach it here.
The key point is that it is only important to know the action of the Hamiltonian, so if you can apply all terms to the state, KrylovKit will do the rest.
In this case, every term is a local contraction, handled here by ncon. This is just a dynamic version of @tensor, as I need to specify at which site the term needs to act. Summing all these terms then gives the desired action.
The implementation of MPSKit, on the other hand, is actually quite a neat trick. Note that in order to diagonalize the Hamiltonian, H * psi = E * psi, we are completely free to pick the basis we want to do this in. So a basis transformation (best unitary, to preserve inner products etc.) characterised by a (unitary) matrix U changes nothing: U * H * U' * U * psi = E * U * psi is equivalent. (This is what Jutho already mentioned.)
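As a quick sanity check of that statement with plain dense matrices (nothing TensorKit-specific):

using LinearAlgebra

# the spectrum is unchanged under a unitary change of basis
Hdense = let A = randn(ComplexF64, 4, 4); (A + A') / 2 end # random Hermitian "Hamiltonian"
U = Matrix(qr(randn(ComplexF64, 4, 4)).Q)                  # random unitary
eigvals(Hermitian(U * Hdense * U')) ≈ eigvals(Hermitian(Hdense)) # true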
Looking at the contraction for an MPS, if all AL tensors are unitary from their two left legs to the right leg, and all AR tensors are unitary from their two right legs to the left leg, the tensor at the center expresses the full state in a different basis.
(You can double-check that the dimension of this tensor is indeed equal to the full Hilbert space dimension.)
In particular, AL and AR can here be random unitaries, so the initialization does not matter too much.
Then we exploit the fact that we do know how to compute the action of the Hamiltonian in this basis, as this is also a part of DMRG and other algorithms. This is the ∂∂AC function, which effectively handles summing all the Hamiltonian terms and applies the result to the center site (basically an implementation of the one-site version of the contraction linked here).
using TensorKit
import LinearAlgebra
using KrylovKit
using CairoMakie
using Test
## Operators
# ----------
ℋ = Z2Space(0 => 1, 1 => 1)
S_z = TensorMap(zeros, ComplexF64, ℋ, ℋ)
block(S_z, Z2Irrep(0)) .= 1
block(S_z, Z2Irrep(1)) .= -1
S_xx = TensorMap(zeros, ComplexF64, ℋ ⊗ ℋ, ℋ ⊗ ℋ)
# the four nonzero matrix elements of X ⊗ X in the Z2 charge basis
S_xx[(Z2Irrep(0), Z2Irrep(1), Z2Irrep(1), Z2Irrep(0))] .= 1
S_xx[(Z2Irrep(1), Z2Irrep(0), Z2Irrep(0), Z2Irrep(1))] .= 1
S_xx[(Z2Irrep(0), Z2Irrep(0), Z2Irrep(1), Z2Irrep(1))] .= 1
S_xx[(Z2Irrep(1), Z2Irrep(1), Z2Irrep(0), Z2Irrep(0))] .= 1
E = id(Matrix{ComplexF64}, domain(S_z))
@info begin
    @test convert(Array, S_z) ≈ [1 0; 0 -1]
    @test reshape(convert(Array, S_xx), 4, 4) ≈ LinearAlgebra.kron([0 1; 1 0], [0 1; 1 0])
    "Operators are correct"
end
"""
apply_Z(ψ, i)
Apply the operator Zᵢ to the state ψ.
"""
function apply_Z(ψ::AbstractTensor, i::Int)
ψ_inds = -collect(1:numind(ψ))
ψ_inds[i] = 1
return ncon([ψ, S_z], [ψ_inds, [-i, 1]])
end
"""
apply_XX(ψ, i)
Apply the operator XᵢXᵢ₊₁ to the state ψ.
"""
function apply_XX(ψ::AbstractTensor, i::Int)
ψ_inds = -collect(1:numind(ψ))
ψ_inds[i] = 1
ψ_inds[mod1(i + 1, end)] = 2
return ncon([ψ, S_xx], [ψ_inds, [-i, -i - 1, 1, 2]])
end
function apply_ising(ψ::AbstractTensor; J=1.0, g=1.0)
    return -J * (sum(i -> apply_XX(ψ, i), 1:L) + g * sum(i -> apply_Z(ψ, i), 1:L))
end
## Parameters
## ----------
L = 8
J = 1.0
g = 1.0
num = 28
## Exact Diagonalization
## ---------------------
ψ₀ = LinearAlgebra.normalize!(Tensor(rand, ComplexF64, ℋ^L))
apply_ising(ψ₀)
E_PBC, ψs_PBC, info_PBC = eigsolve(apply_ising, ψ₀, num, :SR)
info_PBC.converged ≥ num || @warn "Not all eigenvalues converged"
scatter(1:num, real.(E_PBC[1:num]))
Hope this helps!
Many thanks for your prompt response. So I did a simple implementation of ED based on MPSKit's idea for it. It'd be great if you could take a quick look at it! And thank you for the detailed explanation :)
using TensorKit
using KrylovKit
using LinearAlgebra: kron
function applyH1(X::TensorMap, EnvL::TensorMap, op::TensorMap, EnvR::TensorMap)::TensorMap
    @tensor X[-1 -2; -3] := EnvL[-1, 1, 4] * X[1, 2, 3] * op[4, -2, 2, 5] * EnvR[3, 5, -3]
    return X
end
function findHSum(localH, N)
    hEnd = localH
    for i = 1:(N - 2)
        @tensor hEnd[-1 -2 -3 -4; -5 -6 -7 -8] := hEnd[-1, -2, 1, -5, -6, 2] * localH[2, -3, -4, 1, -7, -8]
        fuserOut = isometry(fuse(space(hEnd, 2), space(hEnd, 3)), space(hEnd, 2) * space(hEnd, 3))
        fuserIn = isometry(space(hEnd, 6)' * space(hEnd, 7)', fuse(space(hEnd, 6), space(hEnd, 7)))
        @tensor hEnd[-1 -2 -3; -4 -5 -6] := hEnd[-1, 1, 2, -3, -4, 3, 4, -6] * fuserOut[-2, 1, 2] * fuserIn[3, 4, -5]
    end
    fuserFinIn = isometry(space(hEnd, 4)' * space(hEnd, 5)', fuse(space(hEnd, 4), space(hEnd, 5)))
    fuserFinOut = isometry(fuse(space(hEnd, 2), space(hEnd, 3)), space(hEnd, 2) * space(hEnd, 3))
    @tensor hEnd[-1 -2; -3 -4] := hEnd[-1, 1, 2, 3, 4, -4] * fuserFinOut[-2, 1, 2] * fuserFinIn[3, 4, -3]
    return hEnd
end
# Test for TFI model
d = 2
N = 5
J = 1.0
g = 1.0
Sx = 0.5 * [0 1; 1 0];
Sz = 0.5 * [1 0; 0 -1];
Id = [1 0; 0 1];
H = -J * kron(Sz, Sz) - g * 0.5 * (kron(Sx, Id) + kron(Id, Sx));
H = TensorMap(H, ComplexSpace(1) ⊗ ComplexSpace(2) ⊗ ComplexSpace(2), ComplexSpace(2) ⊗ ComplexSpace(2) ⊗ ComplexSpace(1))
hSum = findHSum(H, N)
testState = TensorMap(randn, ComplexSpace(1) ⊗ ComplexSpace(d^N), ComplexSpace(1))
testState /= norm(testState) # normalize the test state
envL = TensorMap(ones, space(testState, 1), space(testState, 1) ⊗ space(hSum, 1))
envR = TensorMap(ones, space(testState, 3)' ⊗ space(hSum, 4)', space(testState, 3)')
eigenVal, eigenVec =
    eigsolve(testState, 1, :SR, Lanczos(; tol=1e-16, maxiter=200)) do x
        applyH1(x, envL, hSum, envR)
    end
From what I can see, that indeed looks about right.
Some small remarks/tips:
The fusing/unfusing you are doing in the loop is not entirely necessary. Since you are effectively creating the dense Hamiltonian and then fusing it, one way you could achieve the same thing is:
H ⊗ Id ⊗ Id ⊗ Id + Id ⊗ H ⊗ Id ⊗ Id + ...
There are some elegant ways to write this in terms of a mapreduce, which avoids a lot of the fusing work and the contractions:
identities = fill(Id, N - 2) # one-site identities for the remaining N - 2 sites
H = sum(1:(N - 1)) do i
    # place the two-site term H at position (i, i+1) and pad with identities
    localoperators = insert!(copy(identities), i, H)
    return reduce(⊗, localoperators)
end
In principle, you can now fuse this to make it fit your test state, but at this point you have already made the dense Hamiltonian anyway, so you might as well "reshape" the test state to be Tensor(randn, ComplexSpace(d)^N) and use H * x as the function application.
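For example, something along these lines should then be enough (just a sketch, assuming H has been rebuilt as a TensorMap acting on ComplexSpace(d)^N, without the trivial boundary legs, so that domain(H) == codomain(x0)):

using TensorKit, KrylovKit

x0 = Tensor(randn, ComplexSpace(d)^N)               # the "reshaped" test state
vals, vecs, info = eigsolve(x -> H * x, x0, 1, :SR) # lowest eigenvalue via KrylovKit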
As a small remark, you have to be a little careful with the element types. I think for this ground state working with real numbers is fine, but in general you need complex numbers. It is not a bad idea to just default to complex numbers for quantum systems.
A tolerance of 1e-16 is typically not feasible: it is below the smallest relative difference that a double-precision float can resolve (1 + 1e-16 == 1), so if you compare a vector of d^N floating point numbers, you don't really expect that it would be accurate up to 1e-16.
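Concretely:

eps(Float64)   # 2.220446049250313e-16, the relative machine precision of Float64
1 + 1e-16 == 1 # true: 1e-16 is below that, so it is simply rounded away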
Many thanks for the suggestions! I rewrote my function as follows:
using LinearAlgebra: I, kron

function findHSum(onSiteOp, interactOp, J, g, N; d=2, PBC=false)
    if N >= 20
        println("Oh no! You are too big for being diagonalized exactly...")
    end
    iD = Matrix(1.0I, d, d)
    identities = fill(iD, N) # a vector of length N filled with the identity matrix `iD`
    onSiteOpList = []
    interactOpList = []
    for i = 1:N
        onSiteOps = copy(identities)
        onSiteOps[i] = onSiteOp
        push!(onSiteOpList, reduce(kron, onSiteOps))

        interactOps = copy(identities)
        interactOps[i] = interactOp
        push!(interactOpList, reduce(kron, interactOps))
    end

    H_onSite = zeros(d^N, d^N)
    H_interact = zeros(d^N, d^N)
    for i = 1:(N - 1)
        H_interact += interactOpList[i] * interactOpList[i+1]
    end
    if PBC
        H_interact += interactOpList[N] * interactOpList[1] # bond between the last and the first site
    end
    for i = 1:N
        H_onSite += onSiteOpList[i]
    end
    return -J * H_interact - g * H_onSite
end
It's less compact than your approach, but it's somewhat clearer to me what's being done :)
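A possible way to use it (a hypothetical call, reusing the Sx, Sz, J, g, N from my earlier snippet, with Sx as the on-site operator and Sz as the interaction), comparing a dense solver against KrylovKit:

using LinearAlgebra
using KrylovKit

H_full = findHSum(Sx, Sz, J, g, N)      # dense d^N × d^N matrix
E_dense = eigvals(Hermitian(H_full))[1] # exact ground-state energy
E_krylov, _, _ = eigsolve(H_full, randn(d^N), 1, :SR; ishermitian=true)
E_dense ≈ E_krylov[1] # the two should agree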
I'm following the implementation of exact diagonalization described here (https://www.tensors.net/exact-diagonalization). I'm now stuck at the step of diagonalizing the Hamiltonian, since I'm not sure how to do this with eigsolve of KrylovKit. Here's my snippet. I'd appreciate any suggestion or help!

Also, I took a look at MPSKit's implementation of ED (https://github.com/QuantumKitHub/MPSKit.jl/blob/965999ad5d5dd808c2802d59a18557db0a44cb53/src/algorithms/ED.jl#L4). Unfortunately, I don't quite understand what's being done. I think the MPS is first brought into a mixed canonical form, and then the left and right environments are defined by the ALs and ARs. I'm not sure about the optimization step for the single middle site (in particular, why it should be optimized and what ∂∂AC does exactly), and how the eigenvalue problem of H_ac applied to state.AC[middle_site] is equivalent to ED. Is it because the single site captures the entire possible Hilbert space (d^(N/2 - 1) x d^(N/2) x d), so the optimal orthonormal basis of this subspace would also be orthogonal to all previous Lanczos vectors? Also, I'm not sure how the ALs, ARs, and CLs are initialized. Many thanks for reading my long question, and thanks for the great package!