last node: t
curr node: x
next node: y
In the local (Python) version, the "α=1 node" y is a node for which the edge y -> t exists in the graph:
# src = t (the previous node), dst = x (the current node), dst_nbr = y (the candidate)
for dst_nbr in sorted(G.neighbors(dst)):
    if dst_nbr == src:
        # y == t: return to the previous node, alpha = 1/p
        unnormalized_probs.append(G[dst][dst_nbr]['weight'] / p)
    elif G.has_edge(dst_nbr, src):
        # edge y -> t exists: alpha = 1
        unnormalized_probs.append(G[dst][dst_nbr]['weight'])
    else:
        # y is not adjacent to t: alpha = 1/q
        unnormalized_probs.append(G[dst][dst_nbr]['weight'] / q)
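Note that G.has_edge(dst_nbr, src) is only direction-sensitive when G is a networkx DiGraph; on an undirected Graph it is symmetric, so the two implementations can only disagree on directed graphs.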
But in the Spark (Scala) version, the "α=1 node" y is a node for which the edge t -> y exists in the graph:
// srcId = t (the previous node), dstNeighbors = the (neighbor, weight) pairs of x
val neighbors_ = dstNeighbors.map { case (dstNeighborId, weight) =>
  var unnormProb = weight / q  // default: alpha = 1/q
  if (srcId == dstNeighborId) unnormProb = weight / p  // y == t: alpha = 1/p
  else if (srcNeighbors.exists(_._1 == dstNeighborId)) unnormProb = weight  // y is among t's neighbors, i.e. edge t -> y: alpha = 1
  (dstNeighborId, unnormProb)
}
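For reference, the node2vec paper defines the search bias between the previous node t and the candidate y as α_pq(t, y) = 1/p if d_ty = 0, 1 if d_ty = 1, and 1/q if d_ty = 2, where d_ty is the shortest-path distance between t and y. On a directed graph the two implementations read "d_ty = 1" differently: as the existence of y -> t (local) or of t -> y (Spark).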
So, which one is correct?
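To make the divergence concrete, here is a minimal sketch (the graph, the node names t/x/y1/y2, and the values of p and q are made up for illustration, not taken from either implementation) that evaluates both checks on a small directed graph:

import networkx as nx

p, q = 2.0, 0.5
G = nx.DiGraph()
G.add_edge('t', 'x', weight=1.0)   # the walk just moved t -> x
G.add_edge('x', 'y1', weight=1.0)
G.add_edge('y1', 't', weight=1.0)  # y1 -> t exists, t -> y1 does not
G.add_edge('x', 'y2', weight=1.0)
G.add_edge('t', 'y2', weight=1.0)  # t -> y2 exists, y2 -> t does not

src, dst = 't', 'x'  # src = t, dst = x, as in the local version
for y in sorted(G.neighbors(dst)):
    w = G[dst][y]['weight']
    local = w / p if y == src else (w if G.has_edge(y, src) else w / q)  # checks y -> t
    spark = w / p if y == src else (w if G.has_edge(src, y) else w / q)  # checks t -> y
    print(y, 'local:', local, 'spark:', spark)

# Output:
# y1 local: 1.0 spark: 2.0   (local says alpha = 1, spark says alpha = 1/q)
# y2 local: 2.0 spark: 1.0   (local says alpha = 1/q, spark says alpha = 1)

Running the same script with nx.Graph instead of nx.DiGraph prints identical local and spark values, confirming that the two versions only differ in the directed case.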