See https://github.com/google/TensorNetwork/issues/145 and https://github.com/google/TensorNetwork/issues/144 for feature requests.
I imagine having wrapper classes containing networks together with an interpretation of the dangling legs. This could be done for wavefunctions and operators, where there is some labelling/ordering of the physical sites; two sets of physical legs in the operator case. The network can otherwise be of arbitrary structure. Then we have some functionality on top of these classes, e.g. inner products, applying operators to states, expectation values, and reduced density matrices.
These operations would just return new tensor networks, which can be contracted later if desired (this would all benefit a lot from contraction-order optimization). This would allow easy quantum experimentation with tensor networks. The nice thing is that all of these operations are very straightforward to implement as abstract operations on tensor networks. This is a bit like how I imagine quimb to work, although I have not used it.
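For concreteness, here is a minimal sketch of what such a wrapper might look like. The `QuantumState` class, its attribute names, and the method stubs are all hypothetical, purely to illustrate the idea of "network + interpretation of dangling legs + operations that return new networks":

```python
class QuantumState:
    """Hypothetical wrapper: a tensor network plus an ordered interpretation
    of its dangling legs as physical sites."""

    def __init__(self, nodes, physical_edges):
        self.nodes = nodes                    # the underlying network, any structure
        self.physical_edges = physical_edges  # one dangling edge per physical site

    def inner(self, other):
        # <self|other>: connect the physical legs of the conjugated network
        # to those of `other`; return the (uncontracted) closed network.
        ...

    def expectation(self, operator):
        # <self|operator|self> for a QuantumOperator with matching legs.
        ...

    def reduced_density(self, sites):
        # Network for the reduced density matrix on the given sites.
        ...
```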
One could also imagine building MPS and other functionality for specific tensor networks on top of this interface: With knowledge about the network structure (e.g. it is an MPS), we can implement more efficient versions of these operations by overriding.
PS: Perhaps "TensorNetwork: Quantum".
I'd keep the core library as focused as possible and clearly separated from applications. As a first basic application, we can implement 0-dimensional quantum systems, where no specific algorithms are needed to perform the operations @amilsted mentions. I would prepare the structure, conventions, etc. very carefully and with upcoming extensions in mind.
After that, we can extend the base class to higher dimensions and corresponding network architecture. All of this will require specific, and often intricate, implementations of most methods. However, the basic setup should already provide the architecture such that the plethora of quantum tensor network variations can be added in a consistent way that makes it easy to understand and switch between them.
@MichaelMarien Why specify a number of dimensions at all? I was thinking of a many-body Hilbert space where each dangling leg represents a "site". Dimensionality, as well as the particular tensor network underlying each operator or state, need not be specified for any of the operations above to make sense.
In other words, I should be able to set up the tensor network contained in a "QuantumState" (or "QuantumOperator") object any way I like, as long as it has the right number of dangling legs (of the right dimensions) to match the targeted Hilbert space. I should also be able to interpret each dangling leg however I want: They might represent sites on some spatial lattice, or they might represent electron orbitals in a molecule, or whatever!
@amilsted not sure we are talking about the same thing. You can for sure define and set up a quantum state and interpret it any way you like, but actually doing calculations (for instance expectations of operators) will be very dimension (spatial, not Hilbert space) dependent.
In 0-dim, i.e. undergraduate QM, all you need is matrix multiplication + trace; in 1D (with MPS) you need to start thinking about left and right eigenvectors of the transfer matrix, gauges, etc.
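To spell out the 0-dimensional case, a tiny dense example (plain numpy, nothing tensor-network-specific):

```python
import numpy as np

# 0-dim example: a single qubit, dense state vector and operator.
psi = np.array([1.0, 1.0]) / np.sqrt(2)       # |+> state
sz = np.array([[1.0, 0.0], [0.0, -1.0]])      # Pauli Z

expectation = psi.conj() @ sz @ psi            # <psi|Z|psi> = 0
rho = np.outer(psi, psi.conj())                # density matrix
expectation_via_trace = np.trace(rho @ sz)     # same value via the trace
```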
We might be in agreement after all: setting up a class that describes your one-site tensor can be done very generally, but all the physically interesting things you'd like to do with it probably require specific implementations, especially when dealing with the thermodynamic limit (as it is no longer just a tensor contraction).
@MichaelMarien I think we are indeed very close to agreement. Perhaps we are just using certain terms differently? I think our only difference is that you seem to be proposing that the most basic implementation of the operations above work with just dense matrices and vectors (if I understood correctly), whereas I do not want to presuppose any particular form for the tensor network underlying a state or operator in the most general (and least efficient) version of this code.
For example, a QuantumState might consist of a single dense tensor with many dangling legs (many sites), or it could be a complicated tensor network with just one dangling leg (a 0-dimensional system). Importantly, we can in principle compute all quantities listed above whatever the tensor network looks like! I want to think of this as a toolbox for experimentation with possibly weird tensor networks (not just MPS, PEPS, etc.), and perhaps as a fallback for code that makes more assumptions. This would not be the code to use e.g. for highly efficient manipulation of MPS, or for states of infinite systems, but it would be fine for occasionally doing nonstandard things like, say, taking the inner product of a small finite MPS with a small PEPS as an experiment (because I should be able to do these things if I want to!).
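As a toy illustration of that last point, the MPS–PEPS overlap is perfectly well-defined and needs nothing beyond pairing up physical legs and summing over bonds. A plain-numpy sketch with made-up shapes (not library code):

```python
import numpy as np

d, D, X = 2, 3, 2   # physical dim, MPS bond dim, PEPS bond dim
rng = np.random.default_rng(0)

# A small 4-site open-boundary MPS |phi>.
A1 = rng.normal(size=(d, D))
A2 = rng.normal(size=(D, d, D))
A3 = rng.normal(size=(D, d, D))
A4 = rng.normal(size=(D, d))

# A 2x2 PEPS |psi>; each corner tensor has one physical and two virtual legs.
B11 = rng.normal(size=(d, X, X))   # (phys, right, down)
B12 = rng.normal(size=(d, X, X))   # (phys, left, down)
B21 = rng.normal(size=(d, X, X))   # (phys, right, up)
B22 = rng.normal(size=(d, X, X))   # (phys, left, up)

# <phi|psi>: match the four physical legs pairwise and sum over all bonds.
# Nothing here cares that one network is an MPS and the other a PEPS.
overlap = np.einsum(
    'ia,ajb,bkc,cl,imn,jmo,kpn,lpo->',
    A1.conj(), A2.conj(), A3.conj(), A4.conj(),
    B11, B12, B21, B22)
```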
Regarding particular tensor network ansätze for states and operators, we already have code for infinite and finite MPS, MERA, trees etc. in the repository. That code could be modified and added to, so that it provides the same API for basic operations as the hypothetical QuantumState and QuantumOperator classes that do not assume particular tensor network structure. I completely agree with you that we should carefully consider such specializations when we design the basic API.
@amilsted agreed. It sounds very nice to be able to experiment quickly with weird tensor networks and still contract everything through the same API (although a lot more slowly) as for particular ansätze. Most importantly, we should think carefully about the API and keep it as consistent as possible across the use cases.
Actually, I have an idea. Rather than trying to create a new QuantumTensorNetwork class, what if instead we provide a set of operations that can be applied to a `TensorNetwork` object?
The first thing we could try to build would be a `reduced_density` operation. You would give it the `TensorNetwork` object as well as a list of edges to trace out (or maybe the list of dangling edges that are to remain, I haven't decided yet). The API would look like this:

`tensornetwork.quantum.reduced_density(net, list_of_edges)`

This would then return a new `TensorNetwork` object which, when contracted, would be exactly the reduced density matrix. The user would still be responsible for contracting the network, but with our new contraction algorithms this shouldn't be too hard.
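To be concrete about what the contracted output would represent (the `reduced_density` function itself is hypothetical), here is the dense-tensor equivalent on a small example:

```python
import numpy as np

# Dense 3-site state with legs (i, j, k), normalized.
d = 2
psi = np.random.default_rng(1).normal(size=(d, d, d))
psi /= np.linalg.norm(psi)

# Trace out the last leg (k): rho_{ij,i'j'} = sum_k psi_{ijk} conj(psi)_{i'j'k}.
rho = np.einsum('ijk,lmk->ijlm', psi, psi.conj()).reshape(d * d, d * d)

# The proposed operation would build exactly this doubled network
# (the state and its conjugate, with the traced edges connected)
# and leave the final contraction to the user.
```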
We could also prebuild a bunch of quantum operations as node objects, so you could do things like `tn.quantum.Hadamard(...)`, `tn.quantum.CZ(...)`, `tn.quantum.CNOT(...)`, etc.

I think this would be pretty nice to have, and it should be able to integrate with a `TensorNetwork`, allowing users to exploit symmetries in operations and build networks that might not exactly be a quantum circuit.
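The underlying gate tensors are just fixed arrays; a sketch of the constants such (hypothetical) helpers would wrap:

```python
import numpy as np

HADAMARD = np.array([[1.0, 1.0],
                     [1.0, -1.0]]) / np.sqrt(2)

# Two-qubit gates as rank-4 tensors: reshape a 4x4 matrix into legs
# (out_1, out_2, in_1, in_2).
CZ = np.diag([1.0, 1.0, 1.0, -1.0]).reshape(2, 2, 2, 2)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float).reshape(2, 2, 2, 2)
```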
Yes. Of course, we could do both :) The classes could just use these functions internally. The functional interface gives us the advantage (perhaps for JIT or autograph?) of avoiding more container classes, whereas the OO version means we can do operator overloading, e.g. `state.conj() @ operator @ state` for an expectation value.
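A compact sketch of how that overloading could defer to the functional layer; `connect_physical_legs` and `QuantumObject` are assumed names, not existing API:

```python
def connect_physical_legs(bra_net, ket_net):
    """Hypothetical helper: glue the matching dangling legs of two networks
    and return the combined, still-uncontracted network."""
    ...


class QuantumObject:
    """Hypothetical shared base for QuantumState / QuantumOperator."""

    def __init__(self, network):
        self.network = network

    def conj(self):
        ...  # would return the same class wrapping the conjugated network

    def __matmul__(self, other):
        # Makes `state.conj() @ operator @ state` build the closed network
        # whose contraction gives the expectation value.
        return connect_physical_legs(self.network, other.network)
```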
Shame we aren't using Julia, where operator definition/overloading is so much nicer :/
We've had multiple requests to include features that are specific to quantum systems. While we would like to keep `TensorNetwork` as basic and laser-focused as possible, it would be useful to have a good tool set for analyzing quantum systems, as I expect that will be the main use case for our users. What features should we include?