One thing which will still need to be sorted in this PR (if one requires batching) is Issue #169 (I've described it in detail there). We need to batch the parameters, which is a little fiddly because the parameters are held in a single flat numpy array.
One solution would be two arrays: one of size [1 x n_glob_params] for the global parameters (to go in every batch) and one of size [n_states x n_nodal_params] for the nodal parameters. For the batching we can then just concatenate the former with the slice [nodes_in_batch, :] of the latter for the relevant batch, as sketched below.
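A minimal sketch of that idea, assuming illustrative names and shapes (glob_params, nodal_params, n_glob_params, n_nodal_params, n_states, and the batch_params helper are all hypothetical, not part of the existing code):

```python
import numpy as np

# Hypothetical sizes, for illustration only
n_glob_params, n_nodal_params, n_states = 3, 2, 10

glob_params = np.zeros((1, n_glob_params))           # global parameters, repeated in every batch
nodal_params = np.zeros((n_states, n_nodal_params))  # one row of nodal parameters per state/node

def batch_params(nodes_in_batch):
    """Assemble the flat parameter vector for one batch:
    the global parameters followed by the nodal parameters
    of the nodes in the batch."""
    nodal_slice = nodal_params[nodes_in_batch, :]     # [len(nodes_in_batch) x n_nodal_params]
    return np.concatenate([glob_params.ravel(), nodal_slice.ravel()])

# e.g. a batch containing nodes 0, 3 and 7
theta_batch = batch_params(np.array([0, 3, 7]))
```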
Yes, you're right: it turns out that learning the transition rates leads to a computational bottleneck, and batching for transition-rate learning has been implemented in this PR.
This PR was outdated and has been closed. For more recent developments on parameter learning, please refer to PR #189.
This PR has the following updates: