ur-whitelab / hoomd-tf

A plugin that allows the use of Tensorflow in Hoomd-Blue for GPU-accelerated ML+MD
https://hoomd-tf.readthedocs.io
MIT License

Add mapped nlist #317

Closed by whitead 2 years ago

whitead commented 3 years ago

Set up Ghost Particles

@RainierBarrett

Create new function that takes in CG mapping and does:

  1. Extract snapshot from user (See note below about warning for types > C)
  2. Edit snapshot to include new mapped beads, their velocities (0), typeids (start at C)
  3. Set system to new snapshot

Related Code:

import numpy as np
import scipy.stats as ss
import hoomd

# N, q0bar, and p0bar (loc/scale pairs for the initial distributions) are
# defined earlier in the original script
snapshot = hoomd.data.make_snapshot(N=N,
                                    box=hoomd.data.boxdim(Lx=100,
                                                          Ly=100,
                                                          Lz=1),
                                    particle_types=['A'])
# draw initial positions and momenta from normal distributions
q0 = np.zeros((N, 3))
q0[:, 0] = ss.norm.rvs(scale=q0bar[1][0], loc=q0bar[0][0], size=N)
q0[:, 1] = ss.norm.rvs(scale=q0bar[1][1], loc=q0bar[0][1], size=N)
p0 = np.zeros((N, 3))
p0[:, 0] = ss.norm.rvs(scale=p0bar[1][0], loc=p0bar[0][0], size=N)
p0[:, 1] = ss.norm.rvs(scale=p0bar[1][1], loc=p0bar[0][1], size=N)

snapshot.particles.position[:] = q0
snapshot.particles.velocity[:] = p0
snapshot.particles.typeid[:] = 0
system = hoomd.init.read_snapshot(snapshot)

Maybe snapshot.particles.typeid[N:] = C + CG_bead_ids?

Function signature?:

def enable_mapped_nlist(mapping_operator, snapshot=None, bead_types=None)

bead_types -> CGN x 1 array of integer typeids
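
Rough sketch of what that could look like, assuming mapping_operator is a dense (CGN x FGN) matrix, C (the first reserved typeid) is known, and snapshot.particles.resize is usable here; none of these names are settled API:

import numpy as np
import hoomd

C = 8  # placeholder for the first typeid reserved for CG beads

def enable_mapped_nlist(mapping_operator, snapshot=None, bead_types=None):
    # mapping_operator assumed to be a dense (CGN x FGN) mapping matrix
    cgn, fgn = mapping_operator.shape
    # step 1: the snapshot comes from the user (a real version could pull it
    # from the running context when snapshot is None)
    # step 2: append mapped beads with zero velocity and typeids starting at C
    fg_pos = np.array(snapshot.particles.position[:fgn])
    cg_pos = mapping_operator @ fg_pos
    snapshot.particles.resize(fgn + cgn)
    snapshot.particles.position[fgn:] = cg_pos
    snapshot.particles.velocity[fgn:] = 0
    if bead_types is None:
        bead_types = np.arange(cgn)
    snapshot.particles.typeid[fgn:] = C + np.asarray(bead_types).reshape(-1)
    # the particle_types list would also need names for the CG types (omitted)
    # step 3: set the system to the edited snapshot
    return hoomd.init.read_snapshot(snapshot)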

Allow writing positions

@whitead

Add a new tf_to_hoomd op in compute_outputs, like forces, that enables writing positions. This should be a dense multiply with the CG mapping, followed by a concat to combine the sliced fine-grained positions with the CG beads?

Pseudocode:

fg_pos = positions[:FGN]
cg_pos = cg_map @ fg_pos  # cg_map taken as (CGN x FGN)
pos = tf.concat((fg_pos, cg_pos), axis=0)
tf_to_hoomd_op(pos)

See compute_outputs
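
For reference, the shapes of that multiply/concat in plain TensorFlow (FGN, CGN, and the dense cg_map are stand-in values here; tf_to_hoomd_op is the proposed op above, not something that exists yet):

import tensorflow as tf

FGN, CGN = 64, 8                            # placeholder particle / bead counts
positions = tf.random.normal((FGN + CGN, 3))
cg_map = tf.random.normal((CGN, FGN))       # dense (CGN x FGN) mapping, assumed

fg_pos = positions[:FGN]                    # (FGN, 3) fine-grained positions
cg_pos = tf.matmul(cg_map, fg_pos)          # (CGN, 3) mapped bead positions
pos = tf.concat((fg_pos, cg_pos), axis=0)   # (FGN + CGN, 3) to write back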

Make new function in simmodel.py

Signature:

def mapped_nlist(pos, nlist, cg_mapping)

Make a note that this is preferred over compute_nlist for in-engine training.
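
Rough skeleton of the standalone version, assuming the CG beads are appended after the first FGN fine-grained particles so the mapped rows can just be sliced off the end of the full nlist (the slicing convention and return value are both assumptions):

def mapped_nlist(pos, nlist, cg_mapping):
    # cg_mapping assumed dense (CGN x FGN); CG beads assumed at indices >= FGN
    cgn, fgn = cg_mapping.shape
    cg_nlist = nlist[fgn:]  # neighbor lists of the mapped beads only
    return cg_nlist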

Details

Have to be careful about the order of steps: if we overwrite positions after the C++ update, our nlist will be out of date. We could call computeNlist via C++ in the model code, but that won't work inside a traced @tf.function, which is what all Keras models become after compile.

Idea: make the mapped_nlist function a method of SimModel and add a "pre-compute" Python step that is called from tensorflowcompute.py. In there, we check whether the mapped_nlist method was called and, if so, execute.

Pseudo code:

def mapped_nlist(self, nlist, pos, cg_mapping):
    # will hit once on trace
    self._map_nlist = True
    self._cg_map = cg_mapping
    # ...normal nlist slice-out code
    return mapped_nlist

def _pre_compute(self):
    if self._map_nlist:
        # somehow get positions
        # ...use self._cg_map
        tf_to_hoomd(pos)

Split nlist by C

Add a new condition checking that the typeids agree in range (all > C or all < C) before reshaping.
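
Something like this for the agreement check (C and the per-neighbor typeids tensor are stand-ins here):

import tensorflow as tf

C = 8  # placeholder reserved-typeid boundary
neighbor_typeids = tf.constant([9, 10, 11])  # stand-in for the nlist typeids

all_cg = tf.reduce_all(neighbor_typeids >= C)
all_fg = tf.reduce_all(neighbor_typeids < C)
can_reshape = tf.logical_or(all_cg, all_fg)  # only reshape if the range agrees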

Add warning

Need to let the user know that a range of typeids (> C) is reserved when doing this. Maybe in the initial snapshot editing we can check that the max typeid is < C, as sketched below.
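
For example, something along these lines during the snapshot edit (C is again a placeholder for the reserved boundary, and check_typeids is a hypothetical helper):

import numpy as np

C = 8  # placeholder: typeids >= C are reserved for the mapped beads

def check_typeids(snapshot):
    # snapshot is the user-provided snapshot from enable_mapped_nlist
    max_typeid = int(np.max(snapshot.particles.typeid[:]))
    if max_typeid >= C:
        raise ValueError('max typeid is {}, but typeids >= {} are reserved '
                         'for mapped CG beads'.format(max_typeid, C))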