The updated formula more precisely accounts for the fact that on the first step D new nodes learn about the message, and in each subsequent step D-1 new nodes do. I consider a geometric sequence and use the formula for the partial sum of such a sequence. In our case, the total number of nodes that know the message after n steps is 1 + D + D(D-1) + D(D-1)^2 + .... We can view it as 1 + <partial sum of a geometric sequence> with first term D and common ratio (D-1). Re-shuffling the terms, I get the closed form I use for the figure (in Python notation):
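The closed form itself is not shown here; the following is my sketch of what it likely looks like, derived directly from the partial sum above (function names are mine). Since (D-1) ≡ 1 mod (D-2), the division is always exact, so integer division is safe:

```python
def nodes_reached(D: int, n: int) -> int:
    """Total nodes that know the message after n steps (D > 2).

    1 + D + D(D-1) + ... + D(D-1)^(n-1)
      = 1 + D * ((D-1)**n - 1) / (D-2)   # geometric partial sum
    """
    assert D > 2, "D=2 degenerates to an arithmetic sequence"
    return 1 + D * ((D - 1) ** n - 1) // (D - 2)


def steps_needed(D: int, N: int) -> int:
    """Smallest n such that at least N nodes know the message."""
    n = 0
    while nodes_reached(D, n) < N:
        n += 1
    return n
```

For example, with D=6 this gives 7 nodes after one step and 37 after two, matching 1 + D and 1 + D + D(D-1).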
Note: we ignore the case of D=2 because it is an outlier: it corresponds to an arithmetic sequence, not a geometric one. Example: a circular topology. There, for N=1000, we'd need ~500 steps, because each step infects just two new nodes. This is unrealistic for Waku, so I suggest we omit it.
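A quick sanity check of the ~500-step figure for the D=2 ring (snippet and function name are mine, not from the original):

```python
def ring_steps(N: int) -> int:
    """Steps for a message to reach all N nodes in a ring (D=2)."""
    # The message spreads one node in each direction per step, so each
    # step infects two new nodes (the final step may infect just one
    # when the two fronts meet; counting 2 still yields the same step count).
    infected, steps = 1, 0
    while infected < N:
        infected += 2
        steps += 1
    return steps
```

This confirms the arithmetic growth: coverage grows linearly in the number of steps rather than geometrically.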