loomnetwork / loomchain

Loom DAppChain Engine

Performance - Unnecessary unmarshalling of candidate address for every delegation #1112

Open joe-bowman opened 5 years ago

joe-bowman commented 5 years ago

In the DPOSv3 rewardAndSlash() method, we iterate over the candidates and, within that loop, iterate over all delegations. That amounts to some tens of thousands of iterations, and on every one of them we unmarshal the current candidate's address purely for comparison.

Since the candidate's address is already unmarshalled in the outer loop, this is unnecessary overhead.

joe-bowman commented 5 years ago

Of course, I can't push a fix as the repo is read-only :)

--- a/builtin/plugins/dposv3/dpos.go
+++ b/builtin/plugins/dposv3/dpos.go
@@ -1302,7 +1302,7 @@ func rewardAndSlash(ctx contract.Context, state *State) ([]*DelegationResult, er
                                // Distribute rewards to referrers

                                for _, d := range delegations {
-                                       if loom.UnmarshalAddressPB(d.Validator).Compare(loom.UnmarshalAddressPB(candidate.Address)) == 0 {
+                                       if loom.UnmarshalAddressPB(d.Validator).Compare(candidateAddress) == 0 {
                                                delegation, err := GetDelegation(ctx, d.Index, *d.Validator, *d.Delegator)
                                                // if the delegation is not found OR if the delegation
                                                // has no referrer, we do not need to attempt to
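For reference, the shape of the change: the candidate's address is already available once per candidate in the outer loop, so the inner delegation loop only needs to unmarshal the delegation's validator side. A minimal sketch of the pattern (variable names such as candidateAddress are illustrative, not taken verbatim from the repo):

for _, candidate := range candidates {
        // unmarshal the candidate address once per candidate, in the outer loop
        candidateAddress := loom.UnmarshalAddressPB(candidate.Address)

        for _, d := range delegations {
                // only the delegation's validator address is unmarshalled here;
                // the candidate side reuses the cached value from the outer loop
                if loom.UnmarshalAddressPB(d.Validator).Compare(candidateAddress) == 0 {
                        // ... process this candidate's delegation ...
                }
        }
}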
joe-bowman commented 5 years ago

Looking at benchmarking, this is likely only causing a slowdown on the order of milliseconds, not seconds.
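If anyone wants to confirm the per-comparison cost, a micro-benchmark of the two patterns would do it. This is only a sketch under assumptions: the go-loom import path and the ChainId/Local values are placeholders, and real numbers depend on the address sizes seen in production.

package dposv3_test

import (
        "testing"

        loom "github.com/loomnetwork/go-loom"
        "github.com/loomnetwork/go-loom/types"
)

// Current pattern: both addresses are unmarshalled on every comparison.
func BenchmarkCompareUnmarshalEachIteration(b *testing.B) {
        candidate := &types.Address{ChainId: "default", Local: make([]byte, 20)}
        validator := &types.Address{ChainId: "default", Local: make([]byte, 20)}
        for i := 0; i < b.N; i++ {
                _ = loom.UnmarshalAddressPB(validator).Compare(loom.UnmarshalAddressPB(candidate))
        }
}

// Proposed pattern: the candidate address is unmarshalled once and reused.
func BenchmarkCompareUnmarshalHoisted(b *testing.B) {
        candidate := &types.Address{ChainId: "default", Local: make([]byte, 20)}
        validator := &types.Address{ChainId: "default", Local: make([]byte, 20)}
        candidateAddress := loom.UnmarshalAddressPB(candidate)
        for i := 0; i < b.N; i++ {
                _ = loom.UnmarshalAddressPB(validator).Compare(candidateAddress)
        }
}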

joe-bowman commented 5 years ago

Similar behaviour is observed in various other functions (CheckAllDelegation, ListAllDelegations): we unmarshal an address inside the loop to compare against every delegation, even though the address is already available outside the loop. Unmarshalling it once and comparing against the stored value would be preferable.
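The same hoisting applies there: unmarshal the target address once before walking the delegations and pass the resulting loom.Address into the comparison. A self-contained sketch of that shape; the package name, delegationEntry type, and filterByValidator helper are illustrative stand-ins, not the repo's actual API.

package dposexample

import (
        loom "github.com/loomnetwork/go-loom"
        "github.com/loomnetwork/go-loom/types"
)

// delegationEntry stands in for the repo's delegation index type; only the
// Validator field matters for this sketch.
type delegationEntry struct {
        Validator *types.Address
}

// filterByValidator returns the entries whose validator matches the given,
// already-unmarshalled address. The caller unmarshals the validator address
// once and reuses it for every entry, instead of unmarshalling both sides
// inside the loop.
func filterByValidator(entries []*delegationEntry, validator loom.Address) []*delegationEntry {
        var matched []*delegationEntry
        for _, d := range entries {
                if loom.UnmarshalAddressPB(d.Validator).Compare(validator) == 0 {
                        matched = append(matched, d)
                }
        }
        return matched
}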

mattkanwisher commented 5 years ago

Yeah, there was a larger performance problem that we created a fix for earlier this week. This optimization is a bit lower priority now, as the main problem appears to be gone.