ITensor / ITensorNetworks.jl

A package with general tools for working with higher-dimensional tensor networks based on ITensor.
MIT License

Add support for Adapt which enables converting tensor networks to GPU #187

Closed by mtfishman 1 month ago

mtfishman commented 1 month ago

With this we can now do:

```julia
using Metal: mtl
using NamedGraphs.NamedGraphGenerators: named_grid
using ITensorNetworks: random_tensornetwork, siteinds

g = named_grid((2, 2))
s = siteinds("S=1/2", g)
tn = random_tensornetwork(s)
tn_mtl = mtl(tn)
```

and we can see the tensors have been moved to GPU (note that `mtl` also converts the element type to `Float32`):

```julia
julia> tn[1, 1]
ITensor ord=3 (dim=2|id=976|"S=1/2,Site,n=1×1") (dim=1|id=434|"1×1,2×1") (dim=1|id=601|"1×1,1×2")
NDTensors.Dense{Float64, Vector{Float64}}

julia> tn_mtl[1, 1]
ITensor ord=3 (dim=2|id=976|"S=1/2,Site,n=1×1") (dim=1|id=434|"1×1,2×1") (dim=1|id=601|"1×1,1×2")
NDTensors.Dense{Float32, Metal.MtlVector{Float32, Metal.MTL.Private}}
```

@JoeyT1994 with this you should be able to follow the same instructions shown here: https://itensor.github.io/ITensors.jl/dev/RunningOnGPUs.html to run calculations on GPU. For example, for gate application, load the relevant GPU package before calling `apply` and transfer both the gates and the tensor network to GPU using the corresponding conversion function, i.e. `cu`, `mtl`, `roc`, etc.
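A minimal sketch of that workflow, assuming a Metal-capable machine (the choice of gate, and looking up site indices with `only(s[v])`, are illustrative assumptions, not taken from this PR):

```julia
using Metal: mtl
using ITensors: op, apply
using NamedGraphs.NamedGraphGenerators: named_grid
using ITensorNetworks: random_tensornetwork, siteinds

g = named_grid((2, 2))
s = siteinds("S=1/2", g)
tn = random_tensornetwork(s)

# Build a two-site gate on CPU from the site indices of two
# neighboring vertices (illustrative; adjust to your gate set).
gate = op("CZ", only(s[(1, 1)]), only(s[(2, 1)]))

# Move both the gate and the network to GPU, then apply.
tn_mtl = mtl(tn)
gate_mtl = mtl(gate)
tn_mtl = apply(gate_mtl, tn_mtl)
```

The same pattern works with `cu` (CUDA.jl) or `roc` (AMDGPU.jl) in place of `mtl`.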

There may be parts of the library code that aren't generic enough for GPU, for example code that makes implicit assumptions about the element type, or constructs intermediate tensors on CPU instead of on the GPU device of the tensor network. We went through a process of stamping out those kinds of issues in ITensors.jl and ITensorMPS.jl, and they aren't hard to fix using `Adapt.adapt`.
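As one sketch of what such a fix looks like at the array level (the helper name `zeros_like` is hypothetical; `adapt` is from Adapt.jl): instead of building an intermediate with a hard-coded `zeros(Float64, ...)`, adapt it to the storage type of existing data, so it lands on the same device with the same element type.

```julia
using Adapt: adapt

# Hypothetical helper: build an intermediate array on the same device
# and with the same element type as `reference`, instead of implicitly
# assuming CPU and Float64.
function zeros_like(reference::AbstractArray, dims...)
    # `adapt` converts the CPU array to the storage type of `reference`
    # (Array stays Array; MtlArray/CuArray would move the data to GPU).
    return adapt(typeof(reference), zeros(eltype(reference), dims...))
end

# On CPU this is effectively a no-op conversion:
zeros_like(rand(Float32, 3), 2, 2)  # 2×2 Matrix{Float32} of zeros
```

Because `adapt` dispatches on the target storage type, the same helper works unchanged whether the reference data lives on CPU, Metal, CUDA, or ROCm.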