Source code accompanying 'Mathematics of Epidemics on Networks' by Kiss, Miller, and Simon http://www.springer.com/us/book/9783319508047 . Documentation for the software package is at https://epidemicsonnetworks.readthedocs.io/en/latest/
What are your thoughts on parallelizing simulations across distributed machines/processes?
Large networks take considerable time to compute.
For example, in the function _dSIR_individualbased, we could distribute the for-loop calculations across workers and then collect the output into a list.
```python
for index, (node, Xi, Yi) in enumerate(zip(nodelist, X, Y)):
    # parallelize:
    dX[index] = -Xi*sum(trans_rate_fxn(node, nbr)*Y[index_of_node[nbr]]
                        for nbr in G.neighbors(node))
    dY[index] = -dX[index] - rec_rate_fxn(node)*Yi
# collect and append
dV = np.concatenate((dX, dY), axis=0)
```
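As a rough sketch of the idea (not the package's actual implementation), the per-node derivatives could be farmed out to worker processes with `multiprocessing.Pool`. Here `TAU` and `GAMMA` are hypothetical constant stand-ins for `trans_rate_fxn` and `rec_rate_fxn`, and `dSIR_parallel` is a made-up name:

```python
import multiprocessing as mp

import networkx as nx
import numpy as np

# Hypothetical uniform rates standing in for trans_rate_fxn/rec_rate_fxn
TAU, GAMMA = 0.5, 1.0

def _node_derivative(args):
    """Compute (dX_i, dY_i) for one node; runs in a worker process."""
    node, Xi, Yi, nbr_Y = args
    dXi = -Xi * TAU * sum(nbr_Y)   # transmission from infected neighbours
    dYi = -dXi - GAMMA * Yi        # infection inflow minus recovery
    return dXi, dYi

def dSIR_parallel(G, nodelist, X, Y, processes=2):
    index_of_node = {node: i for i, node in enumerate(nodelist)}
    # Each task carries the neighbour Y-values so workers need no shared state
    tasks = [(node, X[i], Y[i],
              [Y[index_of_node[nbr]] for nbr in G.neighbors(node)])
             for i, node in enumerate(nodelist)]
    with mp.Pool(processes) as pool:
        results = pool.map(_node_derivative, tasks)
    dX, dY = map(np.array, zip(*results))
    return np.concatenate((dX, dY), axis=0)

if __name__ == '__main__':
    G = nx.path_graph(4)
    nodelist = list(G.nodes())
    X = np.full(4, 0.9)   # susceptible probabilities
    Y = np.full(4, 0.1)   # infected probabilities
    print(dSIR_parallel(G, nodelist, X, Y))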
Could this give a significant speedup for simulations?
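Before reaching for multiple processes, it may be worth noting that this particular loop can be vectorized: the neighbour sum is a sparse matrix-vector product with the adjacency matrix, which NumPy/SciPy compute in C. A minimal sketch, again assuming uniform rates `TAU`/`GAMMA` (heterogeneous rates would need a weighted rate matrix instead of the plain adjacency matrix), and using a made-up name `dSIR_vectorized`:

```python
import networkx as nx
import numpy as np

# Hypothetical uniform rates standing in for trans_rate_fxn/rec_rate_fxn
TAU, GAMMA = 0.5, 1.0

def dSIR_vectorized(G, nodelist, X, Y):
    # CSR adjacency matrix ordered by nodelist (networkx >= 2.8 API)
    A = nx.to_scipy_sparse_array(G, nodelist=nodelist, format='csr')
    dX = -TAU * X * (A @ Y)    # one sparse mat-vec replaces the Python loop
    dY = -dX - GAMMA * Y
    return np.concatenate((dX, dY))

if __name__ == '__main__':
    G = nx.path_graph(4)
    print(dSIR_vectorized(G, list(G.nodes()), np.full(4, 0.9), np.full(4, 0.1)))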