We have the code below in `analysis.py` for low-rank approximation of the beta matrix. A few points:
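For concreteness, the core operation here is a truncated SVD of the beta matrix. This is a generic sketch, not the torchdms implementation (the function name and shapes are illustrative only):

```python
import numpy as np

def low_rank_approx(beta, rank):
    """Return the best rank-`rank` approximation of a 2D matrix
    (in the Frobenius-norm sense) via truncated SVD.

    Generic sketch; the actual torchdms code lives in analysis.py
    (permalink below) and operates on model attributes.
    """
    u, s, vt = np.linalg.svd(beta, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

rng = np.random.default_rng(0)
beta = rng.normal(size=(6, 4))       # sites x characters, say
approx = low_rank_approx(beta, rank=2)
```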
This looks like it would violate our WT gauge condition, and it might take a little more thinking to get these to be mutually consistent.
We should probably have a test for this (testing if gauge is still maintained after low rank approximation).
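A test along these lines could look like the sketch below. It assumes (hypothetically) that the WT gauge means the wild-type entry at each site is exactly zero; the helper names are made up for illustration. Note the final assertion: truncated SVD does not preserve exact zeros in general, which is precisely the incompatibility worried about above.

```python
import numpy as np

def is_wt_gauge(beta, wt_idx, atol=1e-8):
    """Check the (assumed) WT gauge: the wild-type entry at each
    site is zero. `wt_idx[i]` is the wild-type character at site i."""
    return np.allclose(beta[np.arange(len(wt_idx)), wt_idx], 0.0, atol=atol)

def test_gauge_after_low_rank():
    rng = np.random.default_rng(1)
    n_sites, n_chars = 5, 4
    wt_idx = rng.integers(0, n_chars, size=n_sites)
    beta = rng.normal(size=(n_sites, n_chars))
    beta[np.arange(n_sites), wt_idx] = 0.0   # impose the gauge
    assert is_wt_gauge(beta, wt_idx)
    # rank-2 truncated-SVD approximation
    u, s, vt = np.linalg.svd(beta, full_matrices=False)
    approx = (u[:, :2] * s[:2]) @ vt[:2, :]
    # Generically this FAILS to stay in gauge, illustrating the concern:
    assert not is_wt_gauge(approx, wt_idx)
```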
I suspect it may not even be possible to make these compatible, given our zeroing of unseen mutations. One of the main advantages of low rank approximations is to help "fill in" parts of the matrix that we haven't observed by postulating shared profiles, but this isn't possible if our gauge mandates unobserved entries are zero. We need to think harder about this.
Picky coding point: this is the only remaining part of `analysis.py` that conditions on the presence of under-the-hood model attributes like `model_bind` and `model_stab`. As we did with gauge fixing, this suggests that the low-rank approximation operation should be a model method (enforced by the abstract base class), letting us avoid model introspection in `analysis.py` entirely.
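Structurally, the proposal might look like the following sketch. The class and attribute names (`TorchdmsModel`, `TwoLatentModel`, `beta_bind`, `beta_stab`) are hypothetical stand-ins, not the actual torchdms API:

```python
from abc import ABC, abstractmethod
import numpy as np

def _truncate(beta, rank):
    """Rank-`rank` truncated-SVD approximation of a 2D array."""
    u, s, vt = np.linalg.svd(beta, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

class TorchdmsModel(ABC):
    # hypothetical abstract base class: every model must implement this,
    # so analysis.py can call it without inspecting model attributes
    @abstractmethod
    def low_rank_approximation(self, rank):
        """Replace this model's beta matrices with rank-`rank` approximations."""

class TwoLatentModel(TorchdmsModel):
    # hypothetical subclass with binding and stability latent dimensions;
    # it alone knows which beta matrices it owns
    def __init__(self, beta_bind, beta_stab):
        self.beta_bind = beta_bind
        self.beta_stab = beta_stab

    def low_rank_approximation(self, rank):
        self.beta_bind = _truncate(self.beta_bind, rank)
        self.beta_stab = _truncate(self.beta_stab, rank)
```

With this shape, `analysis.py` just calls `model.low_rank_approximation(rank)` and never branches on `model_bind`/`model_stab`.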
https://github.com/matsengrp/torchdms/blob/903ac45267ea9e8ffe6df52e5ec02243548bfe4a/torchdms/analysis.py#L226-L256