tyzhang1993 closed this 7 years ago
Thanks Sam. This is a neat addition to Ambit. I have not had a chance to review the code, but I noticed that some of the tests fail because of issues with pyenv on Travis CI. @jturney do you have a patch for that?
Would it make sense to parallelize the batching loop?
@lcyyork Good comment, I actually thought about it. There are two points here: 1. Ambit is not intended to handle parallelization for core tensors, which may be handled by LAPACK/BLAS libraries or by the code calling Ambit. 2. Parallelization requires more memory, which conflicts with the current goal of reducing the memory footprint.
I think we should go ahead and start with introducing this functionality and making sure it is well tested. Then we can certainly talk about optimization. For example, loops could be batched over a range of s values.
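The "batched over a range of s values" idea could be sketched roughly as follows. This is a plain-Python analogy with made-up small dimensions, not Ambit code (Ambit is C++): the intermediate is built for a block of s values at a time, trading a slightly larger intermediate for fewer passes.

```python
from itertools import product
import random

random.seed(1)
NG, NA, NR, NI = 2, 2, 4, 2   # sizes for g, a/b, r/s, i/j (illustrative only)
blk = 2                       # batch over s in blocks of this size

# Random tensors B[g][a][r] and T[i][j][a][b]
B = [[[random.random() for _ in range(NR)] for _ in range(NA)] for _ in range(NG)]
T = [[[[random.random() for _ in range(NA)] for _ in range(NA)]
      for _ in range(NI)] for _ in range(NI)]

def direct(i, j, r, s):
    # Reference value of C[i][j][r][s], computed with no reusable intermediate.
    return 0.5 * sum(B[g][a][r] * B[g][b][s] * T[i][j][a][b]
                     for g, a, b in product(range(NG), range(NA), range(NA)))

C = [[[[0.0] * NR for _ in range(NR)] for _ in range(NI)] for _ in range(NI)]
for s0 in range(0, NR, blk):
    ss = range(s0, min(s0 + blk, NR))
    # Intermediate for this block only: blk * NA * NA * NR entries are alive.
    A = {(a, b, r, s): sum(B[g][a][r] * B[g][b][s] for g in range(NG))
         for a, b, r in product(range(NA), range(NA), range(NR)) for s in ss}
    for i, j, r in product(range(NI), range(NI), range(NR)):
        for s in ss:
            C[i][j][r][s] = 0.5 * sum(A[a, b, r, s] * T[i][j][a][b]
                                      for a, b in product(range(NA), range(NA)))

# The block-batched result agrees with the direct contraction.
assert all(abs(C[i][j][r][s] - direct(i, j, r, s)) < 1e-12
           for i, j, r, s in product(range(NI), range(NI), range(NR), range(NR)))
```

With blk = NR this degenerates to the unbatched contraction; with blk = 1 it is the single-index batching, so the block size is a memory/recompute knob.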
This is great work. Thanks for adding it.
Let me see if I can get the Travis Python errors figured out.
Looks like you're getting gcc from both precise and brew? You can get cmake and python from conda if you want to switch away from pyenv. Or you can get gcc 4.8.5 and 5.2 for Mac from conda, if that would do.
This PR should now be ready to go. The Travis CI Python error has been fixed. @jturney
Great work! Thanks for it!
Description
This PR implements a batching algorithm to reduce the memory footprint of tensor contraction intermediates.
The contraction

C["ijrs"] += 0.5 * B["gar"] * B["gbs"] * T["ijab"]

will generate an A["abrs"] intermediate tensor, which can be too large to hold in memory. With the new batched syntax,

C["ijrs"] += batched("s", 0.5 * B["gar"] * B["gbs"] * T["ijab"])

the contraction is performed by batching over the index s, so that only a small A["abr"] intermediate tensor needs to be generated for each value of s. This syntax may also loop over multiple batching indices; for example,

C["ijrs"] += batched("rs", 0.5 * B["gar"] * B["gbs"] * T["ijab"])

will only need an intermediate tensor of size A["ab"].
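To make the memory saving concrete, here is a minimal plain-Python sketch of the idea (not Ambit's C++ implementation; dimensions and tensor values are made up). It computes the contraction above once with the full A["abrs"] intermediate and once batched over s, and checks that the results agree.

```python
from itertools import product
import random

random.seed(0)
NG, NA, NR, NI = 2, 3, 4, 2   # sizes for g, a/b, r/s, i/j (illustrative only)

# Random tensors B[g][a][r] and T[i][j][a][b]
B = [[[random.random() for _ in range(NR)] for _ in range(NA)] for _ in range(NG)]
T = [[[[random.random() for _ in range(NA)] for _ in range(NA)]
      for _ in range(NI)] for _ in range(NI)]

def contract_full(B, T):
    """Form the full A[a][b][r][s] intermediate (NA*NA*NR*NR entries)."""
    A = [[[[sum(B[g][a][r] * B[g][b][s] for g in range(NG))
            for s in range(NR)] for r in range(NR)]
          for b in range(NA)] for a in range(NA)]
    return [[[[0.5 * sum(A[a][b][r][s] * T[i][j][a][b]
                         for a, b in product(range(NA), range(NA)))
               for s in range(NR)] for r in range(NR)]
             for j in range(NI)] for i in range(NI)]

def contract_batched(B, T):
    """Batch over s: only one A_s[a][b][r] slice (NA*NA*NR entries) is alive."""
    C = [[[[0.0] * NR for _ in range(NR)] for _ in range(NI)] for _ in range(NI)]
    for s in range(NR):
        A_s = [[[sum(B[g][a][r] * B[g][b][s] for g in range(NG))
                 for r in range(NR)] for b in range(NA)] for a in range(NA)]
        for i, j, r in product(range(NI), range(NI), range(NR)):
            C[i][j][r][s] = 0.5 * sum(A_s[a][b][r] * T[i][j][a][b]
                                      for a, b in product(range(NA), range(NA)))
    return C

full = contract_full(B, T)
batch = contract_batched(B, T)
assert all(abs(full[i][j][r][s] - batch[i][j][r][s]) < 1e-12
           for i, j, r, s in product(range(NI), range(NI), range(NR), range(NR)))
```

The peak intermediate shrinks from NA*NA*NR*NR to NA*NA*NR entries; batching over both r and s (as in the "rs" example) would shrink it further to NA*NA, at the cost of recomputing the intermediate in each iteration.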
Todos
class LabeledTensorBatchedContraction
class LabeledBlockedTensorBatchedProduct
LabeledTensorBatchedContraction batched(const string &batched_indices, const LabeledTensorContraction &contraction)
LabeledBlockedTensorBatchedProduct batched(const string &batched_indices, const LabeledBlockedTensorProduct &product)
LabeledTensor::contract_batched
LabeledTensor::operator=/+=/-= LabeledTensorBatchedContraction
LabeledBlockedTensor::contract_batched
LabeledBlockedTensor::operator=/+=/-= LabeledBlockedTensorBatchedProduct
Status