mtfishman opened 1 year ago
Here's an issue to track improvements to make to the general tensor network solver code, `alternating_update`, following up on #59.

- Change the default `tdvp` solver to `KrylovKit.exponentiate`.
- Add a `default_projected_operator` function which generically defines how to convert an input operator to a projected operator, for example `default_projected_operator(x::TTN) = ProjTTN(x)`, `default_projected_operator(x::TTNSum) = ProjTTNSum(x)`, etc. This could also help with creating certain caches/projected operators when solving different kinds of problems (see the sketch after this list).
- For the `tdvp` solvers, ideally we don't hard code a list of them (i.e. deprecate the `solver_backend="exponentiate"`/`solver_backend="applyexp"` interface) and instead make it easy for users to pass a solver function and solver keyword arguments (see the sketch after this list).
- Remove `t` from `alternating_update`, since it only makes sense for solvers that implement time evolution like `tdvp`.
- Decide on the time interface of `tdvp`: how to specify total time, time step, number of steps, and which argument ordering? There is also `nsweeps` vs. `nsteps`: `nsteps` makes a bit more sense for a time stepping algorithm like `tdvp`, but `nsweeps` is more generic (i.e. it defines a sweep through the graph), and a step can involve multiple sweeps in a higher order method.
- Generalize the `ProjTTN` types to a more general `ITensorNetworkCache` type (or allow specifying custom caching types which are relevant for different contraction backends), based on a contraction sequence tree. Also allow custom contraction sequences and contraction backends, to generalize to optimizing/updating other tensor networks (a rough sketch of such a cache is below).
- Replace `ProjTTNSum` with a more general lazy sum, based on the `Applied` type in `ITensors.LazyApply` (see the sketch after this list).
- Write the solver code in terms of `AbstractITensorNetwork`, and ensure the interface requirements are general enough to work for more general networks.
- Design a common interface for solving problems of the form `Ax ≈ λx`, `Ax ≈ λBx`, `Ax ≈ b`, `x ≈ y`, etc. which shares caching structures, works with general tensor networks, contraction backends, etc. (a possible shape of that interface is sketched below).
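Regarding the `default_projected_operator` item, here is a minimal sketch of what the dispatch could look like. The two `TTN`/`TTNSum` methods are the examples given above; the import, the error fallback, and the call site in the comment are assumptions for illustration.

```julia
# Hypothetical `default_projected_operator` dispatch (not existing API).
using ITensorNetworks: TTN, TTNSum, ProjTTN, ProjTTNSum  # assumed to be importable

default_projected_operator(x::TTN) = ProjTTN(x)
default_projected_operator(x::TTNSum) = ProjTTNSum(x)
# Fallback so that unsupported operator types fail loudly:
default_projected_operator(x) = error("No default projected operator defined for $(typeof(x))")

# `alternating_update` could then build the projected operator automatically
# when the caller passes a plain operator, e.g. (hypothetical call):
# alternating_update(solver, default_projected_operator(H), init_state; kwargs...)
```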
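For passing solvers directly instead of choosing a `solver_backend` string, one way it could look from the user side is sketched below; the `tdvp(solver, H, t, state; ...)` calling convention shown in the trailing comment is an assumption, not the current API.

```julia
# A user-defined solver built on KrylovKit.exponentiate(A, t, x), which applies
# exp(t * A) to x and returns the result together with convergence info.
using KrylovKit: exponentiate

function my_exponentiate_solver(PH, t, local_state; krylovdim=30, tol=1e-12, kwargs...)
  new_local_state, info = exponentiate(PH, t, local_state; krylovdim, tol)
  return new_local_state, info
end

# Hypothetical call, with solver keyword arguments forwarded by `tdvp`:
# state = tdvp(my_exponentiate_solver, H, -im * 0.1, init_state; nsweeps=10, krylovdim=40)
```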
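One possible shape of the more general cache mentioned above; the field names and layout are placeholders, not a worked-out design.

```julia
# Hypothetical `ITensorNetworkCache`: environments stored per region, together
# with the contraction sequence used to build them, so that different
# contraction backends can plug in.
using ITensors: ITensor

struct ITensorNetworkCache{V}
  environments::Dict{V,ITensor}       # cached environment tensor for each region/vertex
  contraction_sequence::Vector{Any}   # tree describing the order of contractions
end
```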
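For the lazy sum replacing `ProjTTNSum`, a stand-in wrapper illustrating the idea; this is not the actual `ITensors.LazyApply.Applied` API, which represents an unevaluated function call more generally.

```julia
# A lazy sum just stores its terms and sums their action on a state when it is
# applied, which is essentially what `ProjTTNSum` does by hand today.
struct LazySum{T}
  terms::Vector{T}
end

(P::LazySum)(x) = sum(term -> term(x), P.terms)

# Example with plain functions standing in for projected operators:
P = LazySum([x -> 2x, x -> 3x])
P(1.0)  # == 5.0
```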
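For the last item, one possible shape of a shared problem interface; all of the type and function names below are placeholders for the design being discussed, not existing code.

```julia
# Placeholder problem types, so that the sweeping code, caches, and contraction
# backends can be shared across different kinds of problems.
abstract type AbstractProblem end

struct EigenProblem{TA} <: AbstractProblem                # Ax ≈ λx
  A::TA
end

struct GeneralizedEigenProblem{TA,TB} <: AbstractProblem  # Ax ≈ λBx
  A::TA
  B::TB
end

struct LinearProblem{TA,Tb} <: AbstractProblem            # Ax ≈ b
  A::TA
  b::Tb
end

struct FittingProblem{Ty} <: AbstractProblem              # x ≈ y
  y::Ty
end

# Each problem type would specialize how to build its cache and how to update a
# local region, while `alternating_update` drives the sweeping, e.g.:
# make_cache(problem, state) = ...
# region_update!(problem, cache, state, region) = ...
```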
`exponentiate`

(Feel free to move this up – I just can't edit your list above.)

Thanks, I added it.

Proposed redesign of internals:

Basically Lander's data structures become the definition of what a sweep is, and are generalized to store not just the forward/reverse sign, but any data relevant to that specific point in the sweep.

A related part of this redesign is that the observer system will handle all printing, via custom "printing callbacks" which are appended to the user-provided list of observer functions (this is in addition to 1-4 above).
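A very rough sketch of that redesign, where the sweep becomes plain data and printing is just another observer callback; every name here is hypothetical.

```julia
# Each entry of the sweep plan carries whatever information that point in the
# sweep needs, not only the forward/reverse sign.
sweep_plan = [
  (region = (1, 2), direction = :forward, time_step = 0.05im, which_sweep = 1),
  (region = (2, 3), direction = :forward, time_step = 0.05im, which_sweep = 1),
  (region = (2, 3), direction = :reverse, time_step = 0.05im, which_sweep = 1),
  (region = (1, 2), direction = :reverse, time_step = 0.05im, which_sweep = 1),
]

# Printing is handled through the observer system: a default printing callback
# is appended to whatever observer functions the user provides.
default_printing_callback(; which_sweep, kwargs...) = println("Finished sweep $which_sweep")

user_observers = Function[]                               # observers supplied by the user
observers = [user_observers; default_printing_callback]   # printing callback appended
```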