pablosanjose / Quantica.jl

Simulation of quantum systems on a lattice

Design of Green functions #1

Closed pablosanjose closed 1 year ago

pablosanjose commented 4 years ago

This issue discusses the design of GreenFunction, which holds the essential spectral information of a Hamiltonian or an EffectiveHamiltonian (see pablosanjose/Elsa.jl#50 for discussion on the latter). With a GreenFunction we should be able to compute global/local density of states, transport or linear response coefficients of a Hamiltonian (the latter would be two-body GreenFunctions). In more general terms, one can use the GreenFunction of a given Hamiltonian to compute the self-energy induced on another in the presence of a coupling.

CC: @fernandopenaranda, @BacAmorim

Workflow

The workflow would typically be the following. You want to compute some quantity that can be expressed in terms of the Green function of a Hamiltonian h. You would do g = greenfunction(h; projector = ..., solver = ...), where projector is a projector onto a unit-cell subspace. Then g(ω, celldst, cellsrc) would yield the Green function as a matrix on the projected space between the two specified cells. In the above, one could replace h::Hamiltonian by an h::EffectiveHamiltonian describing a coupled system. In that case, g would be restricted to the intra-Hamiltonian propagator, which I expect greatly simplifies the implementation.

(If we would really want the inter-Hamiltonian propagator from subsystem 1 to 2 in a coupled system, it is rather simple to write as G_12 = G_1 * W * g_2 where G_1 includes the self-energy from 2, but g_2 is the decoupled Green function of 2, and W is the coupling.)
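This identity can be verified numerically with dense matrices. A self-contained sketch (all names here are illustrative, not Quantica API):

```julia
using LinearAlgebra

# Hypothetical numerical check of the inter-subsystem propagator identity
# G_12 = G_1 * W * g_2, with random Hermitian blocks h1, h2 and coupling W.
ω = 0.3 + 0.05im
n1, n2 = 4, 5
A1 = rand(ComplexF64, n1, n1); h1 = (A1 + A1') / 2
A2 = rand(ComplexF64, n2, n2); h2 = (A2 + A2') / 2
W = rand(ComplexF64, n1, n2)

g2 = inv(ω * I - h2)                   # decoupled Green function of 2
G1 = inv(ω * I - h1 - W * g2 * W')     # 1 dressed with the self-energy from 2

H = [h1 W; W' h2]                      # full coupled Hamiltonian
G = inv(ω * I - H)
G12 = G[1:n1, n1+1:end]                # exact inter-subsystem propagator

@show norm(G12 - G1 * W * g2)          # ≈ 0
```

This follows directly from the (1,2) block of the block-matrix inverse of ω - H, where the Schur complement of the h2 block produces exactly the self-energy W * g2 * W'.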

In the following we discuss how we could compute g and G of a Hamiltonian and an EffectiveHamiltonian, respectively.

Hamiltonian

Zero-dimensional Hamiltonian

Let us start with the g::GreenFunction of a zero-dimensional h::Hamiltonian, with matrix m = bloch(h). The goal is to be able to obtain the g(ω) projected matrix (alias for g(ω, celldst, cellsrc) with zero celldst and cellsrc). This can be done in a number of ways. I can think of the following:

The preferred solution in many codes, as far as I know, seems to be to use Pardiso (equus) or Mumps (kwant) to compute specific elements of the inverse. The worst option is definitely computing the full inverse and then projecting. In either case we need to redo the full calculation for each ω (as far as I understand): there is no way to reuse the factorization of ω - m for different ωs.

In contrast, a full diagonalization yields g(ω) for all ω's, but it is impossibly slow for large systems (it does not exploit the sparsity of the problem). KPM also yields all ω's in one go, and is very efficient, but its accuracy depends on the expansion order.

The ideal would be to support as many methods as possible. We would need to set up different GreenFunctionSolvers, each storing its relevant information, passed in through the solver keyword. Then we define

struct GreenFunction{H<:Union{Hamiltonian,EffectiveHamiltonian},S<:GreenFunctionSolver{H}}
    solver::S
end
(s::GreenFunction)(ω) = s.solver(ω)

For example, we could have the trivial solver written as

struct InversionSolver{H<:Hamiltonian{<:Any,0},P<:AbstractMatrix} <: GreenFunctionSolver{H}
    hamiltonian::H
    projector::P
end
(s::InversionSolver)(ω) = s.projector' * inv(ω*I - bloch(s.hamiltonian)) * s.projector

(Recall that type parameters are Hamiltonian{E,L,T,...}.)

Similarly for other methods, storing in each case whatever data is required to produce g(ω) by calling solver(ω).
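For instance, an exact-diagonalization solver could follow the same pattern. A hypothetical sketch with made-up names (not the actual Quantica types), where the Bloch matrix is stood in for by a plain dense matrix:

```julia
using LinearAlgebra

# Hypothetical sketch of a second solver in the same mold: diagonalize once
# at construction, so every subsequent g(ω) evaluation is a cheap diagonal
# resolvent plus two projections.
struct SpectralSolver
    ε::Vector{Float64}        # eigenvalues of the Bloch matrix
    φ::Matrix{ComplexF64}     # projected eigenvectors, projector' * eigenvectors
end

function SpectralSolver(m::AbstractMatrix, projector::AbstractMatrix)
    F = eigen(Hermitian(Matrix(m)))
    return SpectralSolver(F.values, projector' * F.vectors)
end

# g(ω) = Σ_n φ_n φ_n' / (ω - ε_n), valid for any ω without refactorizing
(s::SpectralSolver)(ω) = s.φ * Diagonal(1 ./ (ω .- s.ε)) * s.φ'

# Usage: agrees with the projected inverse, but ω can now be swept cheaply
h = Hermitian(rand(6, 6))
Pm = Matrix(I, 6, 2)          # projector onto the first two sites
s = SpectralSolver(h, Pm)
ω = 0.2 + 0.1im
@show norm(s(ω) - Pm' * inv(ω * I - Matrix(h)) * Pm)   # ≈ 0
```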

Unbounded L-dimensional Hamiltonian

In this case we have fewer options to draw from, since we need to somehow integrate over momenta or otherwise incorporate the infinite character of the Hamiltonian. As mentioned in Workflow, the projected Green function matrix could be computed with the syntax g(ω, celldst, cellsrc).

I can think of the following methods

In view of the above, we could perhaps concentrate on only two generic solvers to start with: exact diagonalization (small unit cells) and KPM (arbitrary unit cells).

Semibounded Hamiltonians

This case involves the method of images relative to the boundary. The g should then in general be g(ω, celldst, cellsrc) = g´(ω, celldst, cellsrc) - g´(ω, celldst, cellsrc´) + ..., where the g´ are unbounded GreenFunctions and cellsrc´ is the specular image of cellsrc across a given boundary.
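For a 1D chain with a hard wall, this image construction can be checked numerically. A self-contained sketch (illustrative, not Quantica code), where the unbounded g´ is approximated by the bulk of a long open chain:

```julia
using LinearAlgebra

# Hypothetical check of the image construction for a semibounded 1D
# nearest-neighbor chain (t = 1) with a hard wall just left of site 1:
# G(ω, n, m) = g(ω, n - m) - g(ω, n + m), with g the unbounded propagator.
t = 1.0
ω = 0.5 + 0.1im                       # energy with finite broadening

# Unbounded g, approximated by the bulk of a long open chain
N = 1201
c = (N + 1) ÷ 2                       # center site
Ainf = ω * I - Matrix(SymTridiagonal(zeros(N), fill(t, N - 1)))
ec = zeros(N); ec[c] = 1
colc = Ainf \ ec                      # column c of the bulk Green function
g(d) = colc[c + d]                    # translation-invariant bulk g(ω, d)

# Semibounded chain on sites n = 1, 2, ... (truncated far from the sites of interest)
M = 600
n, m = 2, 3
Asemi = ω * I - Matrix(SymTridiagonal(zeros(M), fill(t, M - 1)))
em = zeros(M); em[m] = 1
Gnm = (Asemi \ em)[n]                 # G(ω, n, m) of the semibounded chain

@show abs(Gnm - (g(n - m) - g(n + m)))  # ≈ 0 up to finite-size corrections
```

The single image term suffices here because there is only one boundary; with two boundaries the `+ ...` tail of repeated images would contribute.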

EffectiveHamiltonian

As discussed in pablosanjose/Elsa.jl#50, we'll probably end up implementing coupled systems in terms of EffectiveHamiltonians, i.e. Hamiltonians with additional self-energies, computed from the g's of attached hamiltonians. The question then arises as to how one should compute G (coupled Green function) from the EffectiveHamiltonian.

Unbounded EffectiveHamiltonian

The core of the problem is to solve Dyson's equation. The coupled Green function reads, symbolically,

G(ω) = P * (ω - h - Σ(ω))^{-1} * P'

where h should be thought of as a block matrix in cell space, with intracell matrices in the diagonal, and Bloch harmonics in the off-diagonals. Σ(ω) is the total self energy, restricted to a given set of cells, and to sites within the P subspace. P is the site projector.

As the above is the inverse of an infinite matrix in cell space, it is unclear how to do this directly. We can write an alternative form, using g(ω) = P * (ω - h)^{-1} * P' and embedding the projected self-energy as P' * Σ(ω) * P:

G(ω) = (1 - g(ω) * Σ(ω))^{-1} * g(ω)

In general, then, a simplistic algorithm to get G(ω) would be: build g(ω) * Σ(ω) within the finite set C_Σ of cells with non-zero self-energy, invert the finite-size matrix (1 - g(ω) * Σ(ω)), and multiply by the g that propagates from C_Σ to any other cell.
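This resummation can be checked against the direct resolvent on a small dense example (illustrative names only, with a random stand-in self-energy):

```julia
using LinearAlgebra

# Hypothetical check of the resummed Dyson form on the finite cell set C_Σ:
# (1 - g(ω) Σ(ω))^{-1} g(ω) equals the direct resolvent of h + Σ.
ω = 0.4 + 0.05im
n = 6
A = rand(ComplexF64, n, n); h = (A + A') / 2
Σ = rand(ComplexF64, n, n)             # stand-in self-energy

g = inv(ω * I - h)                     # bare Green function
G = (I - g * Σ) \ g                    # Dyson-resummed Green function

@show norm(G - inv(ω * I - h - Σ))     # ≈ 0
```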

Bounded EffectiveHamiltonian

The case of a bounded (0D) EffectiveHamiltonian can also be solved as described above. However, it allows a second approach based on

G(ω) = P * (ω - h - Σ(ω))^{-1} * P'

since the inverted matrix is now of finite size. It has the added advantage that the matrix is typically sparse. In this case we can also follow the direct solve approach, using SparseArrays, Pardiso or Mumps: G(ω) = P * ((ω - h - Σ(ω)) \ δ_i).

It is not clear a priori which method would be preferable. It all depends on how expensive your method for computing g(ω) of the bounded Hamiltonian is (which may involve a single calculation valid for all ω), as opposed to a factorization and direct solve of ω - h - Σ(ω) (which must be repeated for each ω). The best would be to allow both approaches.
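A minimal sketch of the factorize-and-solve route, assuming a toy sparse h and a single-site self-energy (all names illustrative; only SparseArrays and LinearAlgebra, and using the same projector convention as the InversionSolver above):

```julia
using LinearAlgebra, SparseArrays

# Hypothetical sketch of the direct-solve route for a bounded system:
# factorize the sparse ω - h - Σ once per ω, then solve only for the
# projected columns instead of forming the full inverse.
ω = 0.3 + 0.05im
n = 50
h = spdiagm(-1 => fill(1.0, n - 1), 1 => fill(1.0, n - 1))  # sparse 1D chain
Σ = spzeros(ComplexF64, n, n)
Σ[1, 1] = -0.2im                         # toy lead self-energy on the first site
P = Matrix{ComplexF64}(I, n, 3)          # projector onto the first 3 sites

A = ω * I - h - Σ
F = lu(A)                                # sparse LU, reusable across right-hand sides
G = P' * (F \ P)

@show norm(G - P' * inv(Matrix(A)) * P)  # ≈ 0
```

The factorization F can be reused for as many source columns as needed at this ω, but must be recomputed when ω changes, which is precisely the tradeoff discussed above.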

BacAmorim commented 4 years ago

This is a good summary of the available methods. I think there will always be a tension between methods that provide the Green's function for all energies (diagonalization and KPM) and methods that only provide the Green's function at a fixed energy. Whenever possible, I think it is a good idea to provide methods for both cases.

I would add the methods based on recursion and on propagating states for the evaluation of boundary self-energies in the case of semi-infinite leads. I think a good reference for these is: https://iopscience.iop.org/article/10.1088/0953-8984/16/21/R01

Can we write down a method analogous to the linear-solve approach for periodic Hamiltonians? Maybe writing the linear equation for g(ω, dcell) using Bloch theorem?

I believe we could, but using Bloch's theorem would require a summation over k-points in the Brillouin zone. Similarly, if the unit cell is very large, we could use a Chebyshev expansion of G_k(ω) for each k.
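The Bloch-sum idea can be illustrated for a 1D chain, where a finite k-point sum reproduces exactly the Green function of an N-site ring (a self-contained sketch, not Quantica code):

```julia
using LinearAlgebra

# Hypothetical sketch of the Bloch-sum route for a periodic 1D chain:
# g(ω, dcell) = (1/N) Σ_k e^{i k dcell} / (ω - h(k)), with h(k) = 2t cos(k).
# For N equally spaced k-points this is exactly the Green function of an
# N-site ring, which we use as a check.
t = 1.0
ω = 0.5 + 0.1im
N = 64
ks = 2π .* (0:N-1) ./ N
g(d) = sum(exp(im * k * d) / (ω - 2t * cos(k)) for k in ks) / N

# N-site ring Hamiltonian and its exact Green function
H = diagm(-1 => fill(t, N - 1), 1 => fill(t, N - 1))
H[1, N] = H[N, 1] = t
Gring = inv(ω * I - H)

@show abs(g(3) - Gring[1, 4])   # ≈ 0: cell distance d = 3
```

In higher dimensions, or for the truly unbounded case, the discrete sum becomes a BZ integral, whose cost per ω is what the linear-solve and KPM variants would try to tame.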

One question, in your simple solver

(s::InversionSolver)(ω) = s.projector' * inv(ω*I - bloch(s.hamiltonian)) * s.projector

Doesn't this first compute the full inverse of ω*I - bloch(s.hamiltonian) and only then project it? Shouldn't it be

(s::InversionSolver)(ω) = s.projector' * ((ω*I - bloch(s.hamiltonian)) \ s.projector)

?

pablosanjose commented 4 years ago

Thanks for the reference Bruno, I'll check it out.

Regarding (s::InversionSolver)(ω) = s.projector' * ((ω*I - bloch(s.hamiltonian)) \ s.projector), yes, that is method 2 in the list. Method 1, the projected full inverse, is completely useless in practice, although it is conceptually clear.