mattorourke17 opened 2 years ago
Looking closer at opt_einsum, I actually don't think the error I quoted is related to the `cache` kwarg propagation issue: `quimb` should cache on both the contraction equation and the sizes, and the returned expression should be able to handle any dimensions changing. So the source of the error is not totally clear, but it implies that two different tensors in the same contraction have index dimensions that don't match.
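That caching scheme, keyed on both the equation and the sizes, could be sketched roughly like this (the function names and cache backend here are illustrative, not quimb's actual internals):

```python
import functools


def build_expression(eq, shapes):
    # Stand-in for the real expression builder (e.g. opt_einsum's
    # contract_expression); here we just record what it was built with.
    return {"eq": eq, "shapes": shapes}


@functools.lru_cache(maxsize=4096)
def get_cached_expression(eq, shapes):
    # Keyed on BOTH the equation and the (hashable) operand shapes, so
    # the same equation with different bond dimensions gets a fresh
    # expression rather than reusing one built for the old sizes.
    return build_expression(eq, shapes)


expr_a = get_cached_expression("ab,bc->ac", ((2, 3), (3, 4)))
expr_b = get_cached_expression("ab,bc->ac", ((2, 5), (5, 4)))
assert expr_a is not expr_b  # different sizes -> separate cache entries
```

With this keying, changing a bond dimension simply produces a cache miss, which is why the quoted error shouldn't arise from the cache itself.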
Having said that, I have some refactoring of the contraction parsing code that I need to push, including optionally using `cotengra` rather than `opt_einsum` to perform the contraction, which has various advantages and might be easier to understand.
When calling `TensorNetwork.contract()`, or any related function that filters down to `tensor_contract()`, I don't think it is currently possible to turn off caching of contraction expressions. If the user supplies `cache=False` as a kwarg that eventually reaches `tensor_contract(..., **contract_opts)`, it gets passed to `get_contraction(eq, *shapes, cache=True, get='expr', optimize=None, **kwargs)` in the kwargs, whereas `cache` is already set explicitly in the function call. The body of `get_contraction()` does not handle the case where a `cache` value arrives in `**kwargs`, and it ends up always passing `True` as the `cache` value when calling `_get_contraction()`.

It is important to be able to turn off caching from the high-level interfaces, because a user might want to contract two networks with the same opt_einsum expression but different values of the bond dimensions, in which case opt_einsum throws an error like:
ValueError: Size of label 'g' for operand 6 (2) does not match previous terms (3)
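That failure mode can be reproduced in a few lines of pure Python; the `ContractExpr` class and `get_expr` function below are toy stand-ins for a compiled expression and a cache keyed only on the equation, not opt_einsum's or quimb's actual code:

```python
import numpy as np


class ContractExpr:
    # Toy stand-in for a compiled contraction expression that, like
    # opt_einsum's, remembers the operand shapes it was built for.
    def __init__(self, eq, shapes):
        self.eq, self.shapes = eq, shapes

    def __call__(self, *arrays):
        for i, (arr, shape) in enumerate(zip(arrays, self.shapes)):
            if arr.shape != shape:
                raise ValueError(
                    f"operand {i} has shape {arr.shape}, expected {shape}"
                )
        return np.einsum(self.eq, *arrays)


_cache = {}


def get_expr(eq, *shapes):
    # BUG for this scenario: keyed on the equation only, so a hit can
    # return an expression built for different bond dimensions.
    if eq not in _cache:
        _cache[eq] = ContractExpr(eq, shapes)
    return _cache[eq]


# Build for bond dimension 3, then reuse with bond dimension 2:
expr = get_expr("ab,bc->ac", (2, 3), (3, 4))
try:
    get_expr("ab,bc->ac", (2, 2), (2, 4))(np.ones((2, 2)), np.ones((2, 4)))
except ValueError as e:
    print(e)  # shape mismatch, analogous to the error above
```

Being able to pass `cache=False` through the high-level interface sidesteps exactly this kind of stale-expression reuse.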
I think this can be fixed either by checking `if 'cache' in kwargs` in `get_contraction()`, or by checking `if 'cache' in contract_opts` in `tensor_contract()`.
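The second option might look something like the following sketch; the signatures are simplified guesses at the call chain, not quimb's actual code:

```python
def _get_contraction(eq, shapes, cache):
    # Stand-in for the low-level routine; just report whether
    # caching was requested.
    return {"eq": eq, "cache": cache}


def get_contraction(eq, *shapes, cache=True, get='expr', optimize=None,
                    **kwargs):
    return _get_contraction(eq, shapes, cache)


def tensor_contract(eq, *shapes, **contract_opts):
    # Fix: pop any user-supplied 'cache' out of contract_opts so it
    # overrides the hard-coded default instead of being swallowed
    # by **kwargs downstream.
    cache = contract_opts.pop('cache', True)
    return get_contraction(eq, *shapes, cache=cache, **contract_opts)


assert tensor_contract("ab,bc->ac", (2, 3), (3, 4), cache=False)["cache"] is False
```

Popping the key (rather than just checking `if 'cache' in contract_opts`) has the added benefit that `cache` is not forwarded twice, which would otherwise raise a `TypeError` for a parameter already named in `get_contraction()`'s signature.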