api3dao / api3-dao-dashboard

API3 DAO dashboard

Paginate proposals and improve loading performance #370

Closed mcoetzee closed 1 year ago

mcoetzee commented 1 year ago

What does this change?

Loading perf results (using mainnet):

Old History page: 5 - 7 seconds

New History page (when navigated via Dashboard page): 400 - 500 milliseconds (10x - 13x improvement)

New History page (cold start, i.e. logging in directly on the History page): 800 - 900 milliseconds

(we preload some data on the Dashboard page)

How did you action this task?

The loading approach: load as little data as possible, as quickly as possible (i.e. in parallel), in order to get a sorted list of proposal skeletons (for both the active and past proposal lists). The four calls below get us there (the StartVote event data is effectively the skeleton data):

        const [primaryStartVotes, secondaryStartVotes, primaryOpenVoteIds, secondaryOpenVoteIds] =
          await Promise.all([
            // StartVote events for both voting apps (effectively the proposal skeletons)
            api3Voting.primary.queryFilter(api3Voting.primary.filters.StartVote()),
            api3Voting.secondary.queryFilter(api3Voting.secondary.filters.StartVote()),
            // The ids of the votes that are still open
            convenience.getOpenVoteIds(VOTING_APP_IDS.primary),
            convenience.getOpenVoteIds(VOTING_APP_IDS.secondary),
          ]);
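
For illustration, a rough sketch of how the StartVote events can be turned into the sorted skeleton list. The event args follow the standard Aragon Voting StartVote signature; the skeleton shape and the newest-first ordering are assumptions, not the actual implementation:

  // Sketch only: map each StartVote event to a lightweight skeleton and
  // sort newest-first by block number (assumed ordering).
  const skeletons = [
    ...primaryStartVotes.map((ev) => ({ type: 'primary' as const, voteId: ev.args.voteId, blockNumber: ev.blockNumber })),
    ...secondaryStartVotes.map((ev) => ({ type: 'secondary' as const, voteId: ev.args.voteId, blockNumber: ev.blockNumber })),
  ].sort((a, b) => b.blockNumber - a.blockNumber);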

The sorted list of proposal skeletons is then paged, and the paged list is used to determine which primary and secondary proposals to load additional vote data for. The calls below (made in parallel for each proposal type) give us all the remaining data we need for the proposal list pages:

  Promise.all([
    // Vote data fixed at creation time (fetched once per vote)
    convenience.getStaticVoteData(VOTING_APP_IDS[type], userAccount, voteIds),
    // Vote data that changes as voting progresses (e.g. the current tallies)
    convenience.getDynamicVoteData(VOTING_APP_IDS[type], userAccount, voteIds),
  ]);
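
A hypothetical sketch of how the paged skeleton list drives these calls (pageOfSkeletons, page, and PAGE_SIZE are illustrative names, not the actual code):

  // Sketch only: take the skeletons for the current page, then collect the
  // vote ids of one proposal type so its remaining vote data can be fetched.
  const pageOfSkeletons = skeletons.slice(page * PAGE_SIZE, (page + 1) * PAGE_SIZE);
  const voteIds = pageOfSkeletons
    .filter((s) => s.type === type)
    .map((s) => s.voteId);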

While the additional vote data is loading, we show proposal skeletons to the user.
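
A minimal TSX sketch of the skeleton pattern (the components and the skeleton shape here are hypothetical, not the dashboard's actual components):

  import React from 'react';

  interface ProposalSkeleton {
    voteId: string;
    voteData?: unknown; // populated once the second group of calls resolves
  }

  // Sketch only: show a placeholder row until a proposal's vote data arrives.
  const SkeletonRow = () => <li className="skeleton-row" />;
  const ProposalRow = (props: { proposal: ProposalSkeleton }) => <li>{props.proposal.voteId}</li>;

  export const ProposalList = (props: { proposals: ProposalSkeleton[] }) => (
    <ul>
      {props.proposals.map((p) =>
        p.voteData ? <ProposalRow key={p.voteId} proposal={p} /> : <SkeletonRow key={p.voteId} />
      )}
    </ul>
  );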

We deliberately omit the EVM script decoding and the ENS name lookups, because they aren't required on the proposal list pages and have a considerable performance impact (they get preloaded for the Proposal Details page).

In total this new approach makes only 8 calls to fully load the initial paged list of proposals, and the calls within each group are made in parallel:

| 1st group
|---------->
|------------->
|----------->
|-------------->| 2nd group
                |---------->
                |------------->
                |----------->
                |--------------> Done

Recordings

Old loading perf

https://user-images.githubusercontent.com/747979/207643895-7706e949-ad74-4dbb-a3a5-bb677bbdd288.mov

New loading perf (via Dashboard page)

https://user-images.githubusercontent.com/747979/207644276-963cfa65-5741-4d53-9c19-4a75254ed588.mov

New loading perf (cold start)

https://user-images.githubusercontent.com/747979/207644608-00c4b913-9d44-4dce-be61-f65e6ead042a.mov

Siegrift commented 1 year ago

Btw. do you remember where the majority of the loading time was spent? (Or was it just the result of bad cascading fetching?) Either way, this is a great UX improvement.

mcoetzee commented 1 year ago

> Btw. do you remember where the majority of the loading time was spent? (Or was it just the result of bad cascading fetching?) Either way, this is a great UX improvement.

Iirc the previous approach made a total of 10 groups of calls in sequence (so, bad cascading), and 4 of those groups were for the EVM script decoding and the ENS names. The majority of the time was spent in this section:

      // Note: these were awaited sequentially, so the secondary proposals only
      // started loading once all the primary proposal data had arrived.
      const primaryProposals = await getProposals(
        provider,
        convenience,
        userAccount,
        primaryStartVotes,
        primaryOpenVoteIds,
        'primary'
      );
      const secondaryProposals = await getProposals(
        provider,
        convenience,
        userAccount,
        secondaryStartVotes,
        secondaryOpenVoteIds,
        'secondary'
      );
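
For contrast, a minimal sketch of issuing those two loads in parallel (keeping the getProposals signature above; the new approach restructures the loading more fundamentally, but this illustrates the cascading cost):

      const [primaryProposals, secondaryProposals] = await Promise.all([
        getProposals(provider, convenience, userAccount, primaryStartVotes, primaryOpenVoteIds, 'primary'),
        getProposals(provider, convenience, userAccount, secondaryStartVotes, secondaryOpenVoteIds, 'secondary'),
      ]);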

Siegrift commented 1 year ago

👍 LGTM, I've run it locally and it's a great improvement. I haven't noticed any issues. Btw. we are not deploying a fleek app for the dev branch. That would be nice, since it would save me a few hours trying to make the local setup work :/.

Btw. when I wanted to run this locally I ran into multiple problems; I've managed to make it work with node@14 and by running npm run bootstrap inside the dao repo manually. I am not sure why this is a problem, since we use the same setup on the CI, and I guess you use that also when working on claims... Not sure we can easily update, since that requires modifications in the api3-dao contracts.

mcoetzee commented 1 year ago

That's odd. I have no issues here running any of the code on production / main / dev with the usual "prepare" and "deploy" scripts. I use node v16.

Siegrift commented 1 year ago

> That's odd. I have no issues here running any of the code on production / main / dev with the usual "prepare" and "deploy" scripts. I use node v16.

I can try this again after Christmas and try to fix it for myself.