Also note that, along with symbolic expressions, we could support users passing arbitrary Julia functions:
```julia
qtn(sin, s)

f(x) = sin(cos(x))
qtn(f, s)

# Could also be anonymous
qtn(x -> sin(x^2), s)
```
which we can try to compile as best we can. We can even try to simplify those using Symbolics.jl, since we can turn the function into a symbolic expression and run it through the same `qtn` machinery:
```julia
using Symbolics
using ITensors: Index

function qtn(f::Function, s::Vector{<:Index})
  @variables x
  eq = f(x)          # Turn the function into a symbolic expression
  return qtn(eq, s)  # Run it through the `qtn` symbolic expression functionality
end
```
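Just to illustrate the conversion step in isolation (this part is plain Symbolics.jl and works independently of the rest of the proposed `qtn` machinery):

```julia
using Symbolics

f(x) = sin(cos(x))

@variables x
eq = f(x)                    # sin(cos(x)) as a symbolic expression
Symbolics.get_variables(eq)  # [x]
```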
If we decide we don't know what to do with a function or expression, we could punt it off to a general QTN compiler backend, for example something that uses randomized SVD or interpolation methods like TCI (tensor cross interpolation), which do their best to compile the function to a QTN by evaluating it at selected points.
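As a rough point of reference for what such a fallback backend does (not the proposed implementation, and not using randomized SVD or TCI; `function_to_cores` and the cutoff convention here are just placeholders for illustration), one can evaluate the function on a dyadic grid, view the values as a 2×2×⋯×2 tensor, and compress it into MPS-like cores with sequential truncated SVDs:

```julia
using LinearAlgebra

# Evaluate f on 2^n grid points in [a, b) and factor the resulting
# 2 x 2 x ... x 2 tensor of values into n cores by sequential truncated SVDs.
function function_to_cores(f, n::Int, a::Real, b::Real; cutoff=1e-12)
  N = 2^n
  x = range(a, b; length=N + 1)[1:N]  # uniform grid, left endpoints
  v = f.(x)                           # function values, length 2^n
  cores = Vector{Array{Float64,3}}()
  M = reshape(v, 1, N)                # (left bond) x (remaining bits)
  for _ in 1:(n - 1)
    χ = size(M, 1)
    A = reshape(M, 2χ, :)             # split off one bit into the row index
    F = svd(A)
    r = max(1, count(>(cutoff * F.S[1]), F.S))  # truncate small singular values
    push!(cores, reshape(F.U[:, 1:r], χ, 2, r))
    M = Diagonal(F.S[1:r]) * F.Vt[1:r, :]
  end
  push!(cores, reshape(M, size(M, 1), 2, 1))
  return cores
end

cores = function_to_cores(x -> sin(2π * x), 10, 0.0, 1.0)
```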
Some other ideas for trying to make the compiler more general would be having a backend that performs a Taylor series or polynomial expansion of a function and then compiles the resulting polynomial to a tensor network. That functionality is in ITensorQTT.jl here: https://github.com/mtfishman/ITensorQTT.jl/blob/f9ca92354582c6090ed40757b8ec130112eb2463/src/function_to_mps.jl#L53-L71, which is based on the `fit` function of Polynomials.jl. Additionally, we could look into packages like TaylorSeries.jl for a symbolic approach.
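As a rough sketch of the first step of that backend (the degree and grid here are arbitrary choices for illustration, not exactly what ITensorQTT.jl does), Polynomials.jl can fit a function on sample points and hand back coefficients that a polynomial-to-QTN compiler could consume:

```julia
using Polynomials

# Sample the target function on a grid and fit a degree-8 polynomial to it.
xs = range(0, 1; length=200)
ys = exp.(xs) .* sin.(2π .* xs)
p = fit(xs, ys, 8)   # Polynomials.fit: least-squares polynomial fit
c = coeffs(p)        # coefficients c[1] + c[2]*x + ..., to feed a
                     # polynomial-to-tensor-network compiler
```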
Relatedly, it could be powerful to expand in other ways, such as with a Fourier expansion, which we could do numerically with packages like FFTW.jl, or see if there is a way to do it symbolically in Julia. The ApproxFun.jl package could be good to try out for performing various series expansions of general functions in terms of ones we can handle (for example, they support Taylor, Fourier, Chebyshev, etc.).
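For instance, here is a small illustration of the kind of expansions ApproxFun.jl provides (how the resulting coefficients would feed into a QTN compiler is an open design question):

```julia
using ApproxFun

# Chebyshev expansion of a generic function on [0, 1]
f = Fun(x -> exp(x) * sin(10x), Chebyshev(0..1))
cheb_coeffs = coefficients(f)   # coefficients in the Chebyshev basis

# Fourier expansion of a periodic function (default periodic domain)
g = Fun(θ -> 1 / (2 + cos(θ)), Fourier())
fourier_coeffs = coefficients(g)
```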
Related to #9, here is an outline of a proposal for an interface based around a single function (I call it `qtn` below, for "quantized tensor network", but I am open to suggestions) which at a high level accepts symbolic expressions for functions that we want to compile to a quantized tensor network (a tensor network representing the function on a uniform grid in a specified domain using a bit representation). This is based around Symbolics.jl, which seems like a good choice for this package, for example because it supports symbolic simplification and has representations of differential operators (both of which come up below).
Some extensions of this syntax could be to multivariable functions, for example:
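Here is a hedged sketch of what both the basic and multivariable calls might look like (the exact signature, the `domain` keyword, and the grid sizes are assumptions for illustration, not a settled API; `siteinds` is from ITensors.jl):

```julia
using ITensors: siteinds
using Symbolics

# Basic (single-variable) version of the proposed interface:
s = siteinds("Qubit", 10)
@variables x
f_tn = qtn(sin(x), [x], s; domain=(0, 1))

# A possible multivariable extension:
sxy = siteinds("Qubit", 20)
@variables x y
g_tn = qtn(exp(-x^2 - y^2), [x, y], sxy; domain=((-2, 2), (-2, 2)))
```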
Note that I input the variables as another argument; in principle we could extract them from the expression with `Symbolics.get_variables(eq)`, but passing them explicitly determines the ordering of the variables in the function. That expression could make use of a simplification rule, which we could apply internally as a pre-processing step before trying to compile it down to a tensor network using basic rules that we know.
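For example, here is a small illustration of that kind of pre-processing with Symbolics.jl (the specific rewrite rules a `qtn` compiler would want are still to be decided):

```julia
using Symbolics

@variables x y
simplify(2x*y + 3x*y)          # combines like terms: 5x*y
simplify(sin(x)^2 + cos(x)^2)  # 1 with the default trig rules (the exact result
                               # depends on the installed Symbolics/SymbolicUtils
                               # rule set)
```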
Some other neat features we could make use of are detecting polynomials and extracting their coefficients; see for example `Symbolics.degree`, `Symbolics.coeff`, and the function `collect_powers` in this tutorial. Additionally, as I mentioned above, Symbolics.jl has representations of differential operators, so we could have differential operator compilers:
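Here is a hedged sketch of what such a call might look like (the signature and `domain` keyword are assumptions; `Differential` is the actual Symbolics.jl operator type):

```julia
using ITensors: siteinds
using Symbolics

s = siteinds("Qubit", 10)
@variables x
Dx = Differential(x)

# Hypothetical: compile a second-derivative operator on the grid to an MPO-like QTN
laplacian_tn = qtn(Dx^2, s; domain=(0, 1))
```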
We could make use of FiniteDifferences.jl to get coefficients for higher-order finite difference operators, and use adder and subtractor MPOs along with those coefficients to generate arbitrary operators (see Section 5.3.1 of https://arxiv.org/abs/1909.06619).
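For example, FiniteDifferences.jl constructs the stencils for us; wiring them into adder/subtractor MPOs is the part that would live in this package (the `grid`/`coefs` field names below reflect current FiniteDifferences.jl internals and may change):

```julia
using FiniteDifferences

# 5-point central finite difference approximation of the 2nd derivative
fdm = central_fdm(5, 2)
fdm(sin, 1.0)   # ≈ -sin(1.0)

# The stencil offsets and coefficients stored in the method object are what we
# would combine with shift (adder/subtractor) MPOs to build the operator.
fdm.grid, fdm.coefs
```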
Additionally, we can have an interface to construct a QFT/DFT tensor network in a similar way:
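A hypothetical sketch of that interface (every name here, including `dft` and the `domain` keyword, is a placeholder for illustration; `siteinds` and `apply` are from ITensors.jl/ITensorNetworks.jl):

```julia
using ITensors: siteinds, apply

s = siteinds("Qubit", 10)

# Hypothetical: a DFT/QFT operator compiled to an MPO-like quantized tensor network
U_dft = qtn(dft, s)

# Applying it to a compiled function would give the discrete Fourier transform
# of that function on the grid:
f_tn = qtn(x -> exp(-x^2), s; domain=(-5, 5))
fhat_tn = apply(U_dft, f_tn)
```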
This could lead to even fancier technology, for example a function that compiles an entire differential equation in terms of tensor networks and turns it into a tensor network problem to solve (like a boundary value problem solved with a linear solver from ITensorNetworks.jl, or an eigenvalue problem solved with DMRG). For example:
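A purely hypothetical sketch of what that might look like (the equation and boundary-condition syntax is real Symbolics.jl; `qtn_problem` and the `solve` call are made-up names for illustration, loosely mirroring the ModelingToolkit.jl style mentioned below):

```julia
using ITensors: siteinds
using Symbolics

s = siteinds("Qubit", 10)
@variables x u(..)
Dx = Differential(x)

# A 1D boundary value problem: -u''(x) = sin(pi*x) on [0, 1], u(0) = u(1) = 0
eq = -(Dx^2)(u(x)) ~ sin(pi * x)
bcs = [u(0) ~ 0, u(1) ~ 0]

# Hypothetical: compile the operator and right-hand side to quantized tensor
# networks and hand the result to a linear solver from ITensorNetworks.jl
prob = qtn_problem(eq, bcs, [x], s; domain=(0, 1))
u_tn = solve(prob)
```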
which could then get solved by tensor network solvers in ITensorNetworks.jl. That would be closely related to the interface of ModelingToolkit.jl, though of course ModelingToolkit.jl doesn't use tensor networks to represent the discretization of the operator and functions; it uses more standard finite difference approaches.
A number of functions for constructing tensor networks for various functions and operators (differential operators, the QFT/DFT MPO, linear interpolation MPOs from https://arxiv.org/abs/1802.07259, etc.) already exist in https://github.com/mtfishman/ITensorQTT.jl/tree/main. We should basically be able to port those over, but instead output a path graph/tree tensor network based on the tensor network types in ITensorNetworks.jl and whatever quantized tensor network type we design in this package, which as mentioned in #9 will also store metadata about the location of the bits in the tensor network as well as the domain of the function.