[x] pre-compute hook to update positions prior to nlist computation
[x] compute_nlist function to add callable to hook
[x] unit test
[x] documentation
Add a new tf_to_hoomd op in compute_outputs, like forces, that enables writing positions. This could be a dense multiply with the CG mapping, followed by a concat to combine the sliced fine-grained positions with the CG beads?
Make a note that this is preferred to compute_nlist for in-engine training.
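The dense-multiply-then-concat step could look roughly like this (a sketch with NumPy standing in for the TF op; combined_positions and the matrix shapes are assumptions, not the real API):

```python
import numpy as np

def combined_positions(cg_mapping, positions):
    """Sketch: map fine-grained positions to CG beads, then concat both.

    cg_mapping: (B, N) row-normalized mapping matrix (B beads, N atoms)
    positions:  (N, 3) fine-grained positions
    returns:    (N + B, 3) fine-grained positions followed by bead centers
    """
    cg_pos = cg_mapping @ positions  # dense multiply -> (B, 3) bead centers
    return np.concatenate([positions, cg_pos], axis=0)
```

In the real op, this combined result would then be written back through tf_to_hoomd.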
Details
Have to be careful about the order of steps: if we overwrite positions after the C++ update, our nlist will be out of date. We could call computeNlist via C++ in the model code, but that won't work inside a traced @tf.function, which is how all Keras models run after compile.
Idea: make mapped_nlist a method of simmodel and add a "pre-compute" Python step that is called from tensorflowcompute.py. There we check whether the mapped_nlist method was called and, if so, execute the position write.
Pseudo code:
```python
def mapped_nlist(self, nlist, pos, cg_mapping):
    # will hit once on trace
    self._map_nlist = True
    self._cg_map = cg_mapping
    ...  # normal nlist slice-out code
    return mapped_nlist

def _pre_compute(self):
    if self._map_nlist:
        # somehow get positions
        ...  # use self._cg_map
        tf_to_hoomd(pos)
```
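The ordering concern can be made concrete with a toy sketch (class and method names are stand-ins mirroring the pseudo code, not the actual tensorflowcompute.py API): the pre-compute hook must run before the neighbor list rebuild, which must run before the traced model.

```python
class ToyModel:
    """Stand-in for a simmodel subclass; records call order."""
    def __init__(self):
        self.calls = []
        self._map_nlist = True  # set during trace by mapped_nlist

    def _pre_compute(self):
        if self._map_nlist:
            self.calls.append("pre_compute")  # would call tf_to_hoomd(pos)

    def __call__(self):
        self.calls.append("model")  # traced tf.function in the real code


class ToyCompute:
    """Stand-in for the tensorflowcompute.py update loop."""
    def __init__(self, model):
        self.model = model

    def update(self, timestep):
        self.model._pre_compute()         # 1. overwrite positions first
        self.model.calls.append("nlist")  # 2. stand-in for the C++ rebuild
        self.model()                      # 3. run the traced model
```

Swapping steps 1 and 2 is exactly the stale-nlist failure described above.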
Split nlist by C
Add a new condition to the reshape that the ids agree in range (> C || < C)
Add warning
Need to let the user know that a range of typeids (> C) is reserved when doing this. Maybe during initial snapshot editing we can check that the max typeid is < C.
C = 255
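A sketch of the split and the snapshot-time check (NumPy as illustration; treating typeids at or above C as CG beads is an assumption drawn from the notes above):

```python
import numpy as np

C = 255  # typeids >= C reserved for CG beads (assumption)

# one row of neighbor typeids, mixing real particles and beads
typeids = np.array([3, 260, 7, 300])
is_bead = typeids >= C
real_neighbors = typeids[~is_bead]  # real-particle neighbors
bead_neighbors = typeids[is_bead]   # CG bead neighbors

# the warning suggested above: check user typeids don't collide with
# the reserved range during initial snapshot editing
user_typeids = np.array([0, 1, 2])
if user_typeids.max() >= C:
    import warnings
    warnings.warn("typeids >= C are reserved for mapped nlist beads")
```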
Set up Ghost Particles
@RainierBarrett
Create a new function that takes in the CG mapping and sets up the ghost particles.
Related code, maybe:
snapshot.particles.typeid[N:] = C + CG_bead_ids?
Function signature?:
def enable_mapped_nlist(mapping_operator, snapshot=None, bead_types=None)
bead_types
-> CGN x 1, integer array of types
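Under those assumptions the function might be sketched as follows (only the signature comes from the notes; the body and the default bead_types are hypothetical):

```python
import numpy as np

C = 255  # reserved typeid offset, from the notes above

def enable_mapped_nlist(mapping_operator, snapshot=None, bead_types=None):
    """Hypothetical sketch: compute reserved typeids for the CG ghost beads."""
    M = np.asarray(mapping_operator)  # (B, N) CG mapping matrix
    B = M.shape[0]
    if bead_types is None:
        bead_types = np.arange(B)     # default: one type per bead
    ghost_typeids = C + bead_types    # shifted into the reserved range
    return ghost_typeids
```

With a snapshot, these ids would then be written as snapshot.particles.typeid[N:] = C + CG_bead_ids, after checking that the existing max typeid is < C.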