Expensify / react-native-onyx

Persistent, offline-first state management solution for React Native. Easy to use with minimal config and boilerplate.

Investigate flushing Onyx cache to disk on idle #281

Open marcaaron opened 1 year ago

marcaaron commented 1 year ago

We have observed that IndexedDB can be super slow with writes and cause major lag and jank in the app when there is a huge amount of data to store on App init.

Our solution for now has been to simply stop storing certain keys entirely. But another potentially viable solution is to keep the data in memory for some time and then persist it to disk while the user is "idling".

cc @hannojg @kidroca curious if you guys have considered this one yet.

h/t @quinthar for the idea

kidroca commented 1 year ago

Hi @marcaaron,

Technically, it's not so much that IndexedDB is inherently slow, but rather that localforage uses it inefficiently when subjected to frequent write operations.

To illustrate, as noted in this thread: https://expensify.slack.com/archives/C01GTK53T8Q/p1689267792230089?thread_ts=1689036681.307159&cid=C01GTK53T8Q:

"Reviewing the localForage source code for setItem, we can observe that a new transaction is indeed opened each time the function is invoked. This could be a significant contributing factor to the noticeable slowdown experienced when executing multiSet or multiMerge operations. Currently, both these operations use setItem to process each item individually, which could potentially lead to high resource contention and decreased performance."

In light of this, I propose that we consider implementing a storage provider that circumvents the use of localforage. Instead, this provider would interface directly with the IndexedDB API, enabling it to more effectively strategize and optimize transactions. For instance, it could perform a single transaction to update multiple keys as opposed to sequential transactions. This would decrease the likelihood of lock contention and should, in theory, enhance speed significantly. (The change should also let us drop the SyncQueue as we should be able to write in one go, not having to schedule writes one by one)
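
To make that concrete, here is a minimal sketch of what "one transaction for multiple keys" could look like against the raw IndexedDB API (the store name is a placeholder, and db is assumed to be an already-open database handle with an object store using out-of-line keys):

function multiSet(db, pairs) {
    return new Promise((resolve, reject) => {
        // A single readwrite transaction covers every key,
        // instead of one transaction per setItem call.
        const transaction = db.transaction('keyvaluepairs', 'readwrite');
        const store = transaction.objectStore('keyvaluepairs');
        pairs.forEach(([key, value]) => store.put(value, key));

        // The promise resolves once all puts have been committed together.
        transaction.oncomplete = () => resolve();
        transaction.onerror = () => reject(transaction.error);
    });
}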

Alternatively, we could explore other storage driver libraries that might be a better fit for our needs. There exist libraries that streamline the IndexedDB interface, offering ease of use without forfeiting granular control.

I like the idea of batching a few changes and applying them at the best time. It appears that the app does not require immediate data writes. In the worst-case scenario, we might "skip a beat," and some information might need to be fetched remotely due to delayed persistence.

marcaaron commented 1 year ago

Great thoughts @kidroca. So it seems like our options are:

1. Write a custom storage provider that talks to the IndexedDB API directly and batches multiple keys into a single transaction.
2. Switch to another storage driver library that wraps IndexedDB more efficiently.
3. Batch changes in memory and flush them to disk when the user is idle.

I do like this 3rd option since it does not seem too difficult to implement something like:

let timer;
const idleTime = 2000; // or whatever we want to use for this

function resetTimer() {
  // User is active: hold queued writes (pauseSetItemQueue and
  // processSetItemQueue are hypothetical helpers around the storage queue)...
  pauseSetItemQueue();
  clearTimeout(timer);

  // ...and flush them once no activity has been seen for `idleTime` ms.
  timer = setTimeout(processSetItemQueue, idleTime);
}

document.addEventListener('mousemove', resetTimer);
document.addEventListener('keypress', resetTimer);

kidroca commented 1 year ago

@marcaaron I think this library might work better for our case: https://www.npmjs.com/package/idb-keyval. The following methods are noteworthy: setMany() and getMany(), which batch multiple keys into a single IndexedDB transaction.
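
For example, a quick usage sketch against the default store (the key names here are made up):

import {setMany, getMany} from 'idb-keyval';

// Both calls batch all of their keys into a single IndexedDB transaction.
setMany([['report_1', {id: 1}], ['report_2', {id: 2}]])
    .then(() => getMany(['report_1', 'report_2']))
    .then(([report1, report2]) => console.log(report1, report2));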

Regarding deferring until idle, there's this API available to browsers as well as react-native: https://caniuse.com/requestidlecallback - requestIdleCallback

It accepts a callback function that will be executed when the browser becomes idle. There's also a numeric timeout parameter in case we want to set a condition like "execute when idle, but no later than {n} milliseconds from now". Requests are otherwise served on a first-in, first-out basis.

Safari lacks support for requestIdleCallback, but there's a polyfill to cover the missing API.
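
A minimal sketch of how an Onyx flush could be deferred with it (writeQueue and storage are hypothetical stand-ins for the queued writes and the storage provider):

let flushScheduled = false;

function scheduleFlush() {
    if (flushScheduled) {
        return;
    }
    flushScheduled = true;

    // Flush when the browser is idle, but no later than 5s from now.
    requestIdleCallback((deadline) => {
        flushScheduled = false;

        // Keep writing items while this idle period has time left.
        while (writeQueue.length > 0 && deadline.timeRemaining() > 0) {
            const {key, value} = writeQueue.shift();
            storage.setItem(key, value);
        }

        // If items remain, wait for the next idle period.
        if (writeQueue.length > 0) {
            scheduleFlush();
        }
    }, {timeout: 5000});
}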

marcaaron commented 1 year ago

requestIdleCallback looks pretty neat.

hannojg commented 1 year ago

Hey, I somehow overlooked this ticket and was doing a performance audit on a guides account and came to the exact same issue of the massive amount of data we are persisting at app init. Here are my thoughts so far (some of them are maybe out of the box, watch out 🧙‍♂️):

  1. As @kidroca says, indexedDB itself can be pretty fast. The approach I think would be the most valuable to explore is to use indexedDB directly (or a neater layer if that really exists), and optimise the usage for how we need it. I believe this can bring the most performance improvements (and I have a strong itch to start writing a PoC 😄). Why do I believe it can bring the most performance improvements? I just measured it, and for my guide account there are over 100,000 (yes) calls to storage.setItem. This also completely spams the microtask queue. Batching all of that together (or at least into a few big transactions) should get most of the performance back, I believe.
  2. Move onyx to another thread using web workers. See my thought process about it here. The main idea is that it won't matter if we congest the microtask queue, as it happens on a different thread (however, this has the downside that persistence still takes super long, and we might lose data if the user closes the browser window prematurely).
  3. When it comes to running tasks only when the browser is idle, I think there are better solutions. Check out this code example (below), which utilises isInputPending():

    
    // Note: `yieldToMain` isn't defined in the original snippet; a common
    // definition yields back to the event loop via setTimeout:
    function yieldToMain() {
      return new Promise((resolve) => {
        setTimeout(resolve, 0);
      });
    }

    async function saveSettings() {
      // A task queue of functions.
      // Imagine these being tasks e.g. inside onyx where we execute database operations.
      const tasks = [
        validateForm,
        showSpinner,
        saveToDatabase,
        updateUI,
        sendAnalytics,
      ];

      // The value of 50 is important here.
      // Any task that takes longer than 50ms is considered
      // too long and blocking. So we don't want our tasks to
      // take longer than that (so the browser has a chance to
      // squeeze in other tasks e.g. handling user interactions).
      let deadline = performance.now() + 50;

      while (tasks.length > 0) {
        // Optional chaining operator used here helps to avoid
        // errors in browsers that don't support `isInputPending`:
        if (navigator.scheduling?.isInputPending() || performance.now() >= deadline) {
          // There's a pending user input, or the
          // deadline has been reached. Yield here:
          await yieldToMain();

          // Extend the deadline:
          deadline = performance.now() + 50;

          // Stop the execution of the current loop and
          // move onto the next iteration:
          continue;
        }

        // Shift the task out of the queue:
        const task = tasks.shift();

        // Run the task:
        task();
      }
    }

fedirjh commented 1 year ago

I agree with @kidroca about using the https://www.npmjs.com/package/idb-keyval as a new provider. I have tested this package before, and it was significantly faster compared to localforage. I wrote this POC based on the current approach of queuing the tasks. Although I'm not certain if this is the ideal solution, the results were quite promising.

import {
    get, set, createStore, clear, keys as getKeys, del, getMany, setMany,
} from 'idb-keyval';
import _ from 'underscore';
import fastMerge from '../../fastMerge';
import SyncQueue from '../../SyncQueue';

const dbName = 'OnyxDB';
const storeName = 'keyvaluepairs';

const OnyxStore = createStore(dbName, storeName);

const syncQueue = new SyncQueue(({keyVal, command}) => {
    switch (command) {
        case 'multiSet': {
            const label = `multiSet ${keyVal.length}`;
            // eslint-disable-next-line no-console
            console.time(label);
            return setMany(keyVal, OnyxStore).then(() => {
                // eslint-disable-next-line no-console
                console.timeEnd(label);
                return Promise.resolve();
            });
        }
        case 'multiMerge': {
            const label = `multiMerge ${keyVal.length}`;
            // eslint-disable-next-line no-console
            console.time(label);
            const keys = _.map(keyVal, ([key]) => key);

            // We use getMany to get all existing data in one transaction
            // then we merge the existing data with new one
            // finally we use setMany to update new data in one transaction
            return getMany(keys, OnyxStore)
                .then(values => _.object(_.map(keys, (key, i) => [key, values[i]])))
                .then((existingPairsObject) => {
                    const newPairs = _.map(
                        keyVal,
                        ([key, value]) => {
                            // if value is not an object or an array then overwrite the value stored
                            if (!_.isObject(value) && !_.isArray(value)) {
                                return [key, value];
                            }
                            const existingValue = existingPairsObject[key];
                            const newValue = _.isObject(existingValue) ? ({...fastMerge(existingValue, value)}) : value;
                            return ([key, newValue]);
                        },
                    );
                    return setMany(newPairs, OnyxStore).then(() => {
                        // eslint-disable-next-line no-console
                        console.timeEnd(label);
                        return Promise.resolve();
                    });
                });
        }
        case 'setItem': return set(keyVal[0], keyVal[1], OnyxStore);

        // Treat any unrecognized command as a single-item set
        default: return set(keyVal[0], keyVal[1], OnyxStore);
    }
});

const provider = {
    getItem: key => get(key, OnyxStore),
    setItem: (key, value) => syncQueue.push({keyVal: [key, value], command: 'setItem'}),
    clear: () => clear(OnyxStore),
    getAllKeys: () => getKeys(OnyxStore),
    removeItem: key => del(key, OnyxStore),
    multiGet: keys => getMany(keys, OnyxStore).then(values => _.map(keys, (key, i) => [key, values[i]])),
    multiSet: pairs => syncQueue.push({keyVal: pairs, command: 'multiSet'}),
    multiMerge: pairs => syncQueue.push({keyVal: pairs, command: 'multiMerge'}),
};

export default provider;
marcaaron commented 1 year ago

Move onyx to another thread using web workers

Curious to hear more about this. So, would we just defer writes to the worker thread by posting data to the worker and then have those messages get written to IndexedDB? Does it work the same when reading data?

marcaaron commented 1 year ago

this has the downside that persistence still takes super long, and we might lose data if the user closes the browser window prematurely

I am not sure how to get around that. Either have to save the data faster or expect to lose some of it 😄

Losing it might not be much of a concern depending on what is lost, when it would be refreshed again, and what bad things could happen if the data is not persisted (maybe nothing).

marcaaron commented 1 year ago

Maybe as a next step we can try to test idb-keyval with setMany() on one of these inundated accounts to gauge real-world web improvements? Let's also test mobile web?
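
For the measurement itself, something along these lines could work (a rough sketch; pairs stands in for the real account data, and timings will vary by device and browser):

import localforage from 'localforage';
import {setMany} from 'idb-keyval';

async function comparePersistence(pairs) {
    // How Onyx writes today: one localforage transaction per item.
    console.time('localforage per-item setItem');
    await Promise.all(pairs.map(([key, value]) => localforage.setItem(key, value)));
    console.timeEnd('localforage per-item setItem');

    // idb-keyval: every item in a single transaction.
    console.time('idb-keyval setMany');
    await setMany(pairs);
    console.timeEnd('idb-keyval setMany');
}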

That would probably give us an idea of what kind of performance improvements we might see from getting off localforage (which I think is kind of a separate issue).

Then we can explore throttling storage after?

FWIW, I think web workers are pretty cool, but I'm most hesitant about this idea since it will add some build complexity.

hannojg commented 1 year ago

Hey 👋 I'd like to take the initiative to test with idb-keyval as we are currently hired to improve the performance, and this would just be my next step 😊

Also, I'd still like to test an OMT (off-main-thread) performance PoC.

Then we can explore throttling storage after?

Yes! (You are right this is a bit of a separate issue we are talking about atm, but I think that should come first)

So, would we just defer writes to the worker thread by posting data to the worker and then have those messages get written to IndexedDB? Does it work the same when reading data?

Yes, the worker thread would be concerned with writing the data. I also thought about it reading the data. It might add a tiny bit of overhead, but then we are keeping more work away from the main thread, which is the whole idea of the change.
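
To make that concrete, a minimal sketch of the write path (the file name and message shape are made up for illustration):

// storage.worker.js (runs off the main thread)
import {setMany} from 'idb-keyval';

self.onmessage = (event) => {
    const {id, pairs} = event.data;

    // The IndexedDB write happens entirely on the worker thread.
    setMany(pairs).then(() => self.postMessage({id, done: true}));
};

// Main thread: post the data to the worker and carry on without blocking.
const storageWorker = new Worker(new URL('./storage.worker.js', import.meta.url), {type: 'module'});
storageWorker.postMessage({id: 1, pairs: [['session', {token: 'abc'}]]});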

hannojg commented 1 year ago

Okay, here are the first results of a really quick test (will do more tomorrow):

Onyx with localforage

[Screenshot: profiling results, 2023-08-02 18:30]

Onyx with idb-keyval/IndexedDB

[Screenshot: profiling results, 2023-08-02 18:29]

hannojg commented 1 year ago

Results for 1,000 items:

- Onyx right now (localforage): 6,500 ms
- Onyx with the new lib (idb-keyval): 143 ms

marcaaron commented 1 year ago

Wow. What are we waiting for??? 😄

Seems clearly much better. Next thought would be...

Can we use the same DB? Or do we need some migration to move from localforage to idb-keyval?

Looks very promising though so I think we should go for it.

hannojg commented 1 year ago

I think no migration is needed. I will write a PR today, and check the results for when used in actual product 😊

// Edit: result shared here https://expensify.slack.com/archives/C035J5C9FAP/p1691063111819169

hannojg commented 1 year ago

PR is up 😊

On another note, regarding "flush Onyx cache to disk on idle": I think another strategy here is to simply stop waiting for the writes to finish. Right now we are using the sequential queue, and the queue gets blocked by waiting for an action to complete its write to the database. The important thing to note is that writing to the database isn't a heavy operation: on both native and web we just pass our data to the database provider, and from there on it's handled in "the background". However, we are awaiting the promises of these write operations and blocking the rest of our UI (Onyx) operations. I believe we just shouldn't do that. Internally we can have a queue that makes sure data writes get executed in order; however, we shouldn't block the app.

To give you an example, even with idb-keyval writing the data of OpenApp to Onyx takes ~3 seconds. In these three seconds, no other network request (such as OpenReport) will continue executing as we are waiting for the task to finish.

(However! With the advent of idb-keyval, where operations usually take ~10ms, and with pagination reducing the size of the network responses, I am not sure how big the benefit of these changes would actually be 🤔)
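
For illustration, a minimal sketch of that "ordered but non-blocking" internal queue (writeToStorage is a hypothetical stand-in for the provider call):

let pendingWrites = Promise.resolve();

function queueWrite(key, value) {
    // Chain onto the previous write so writes stay in order...
    pendingWrites = pendingWrites
        .then(() => writeToStorage(key, value))
        // ...and keep the chain alive if one write fails.
        .catch((error) => console.error('Onyx write failed', error));

    // Return immediately: callers (e.g. the SequentialQueue) are
    // not blocked on disk persistence.
}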

marcaaron commented 1 year ago

To give you an example, even with idb-keyval writing the data of OpenApp to Onyx takes ~3 seconds. In these three seconds, no other network request (such as OpenReport) will continue executing as we are waiting for the task to finish.

Hmm, if this is true then I agree, and I'm not sure I see a clear reason why we would do this. The storage should be able to be "fire and forget". Nothing needs to "wait" on it. FWIW, I don't think we were always doing this, and if we started at some point it might not have been intentional.

I think we are talking about this code here?

https://github.com/Expensify/App/blob/264d24939c8eff0de99cc81d084742c91ba84ef3/src/libs/Middleware/SaveResponseInOnyx.js#L17-L37

If these "update promises" resolved once the data has gone into the cache, then we should not be concerned with whether it has been written to storage.
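
A hypothetical sketch of what that could look like (cache, notifySubscribers, and storage are stand-ins, not Onyx's actual internals):

function applyUpdate(key, value) {
    cache.set(key, value); // synchronous in-memory update
    notifySubscribers(key, value); // subscribers see fresh data immediately

    // Fire-and-forget: the disk write is intentionally not awaited.
    storage.setItem(key, value);

    // The "update promise" resolves as soon as the cache is current.
    return Promise.resolve();
}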

hannojg commented 1 year ago

Yes, I am talking about that code, because then we have the network's SequentialQueue here:

https://github.com/Expensify/App/blob/ff423aa61fde128e981ee071b01011238927516c/src/libs/Network/SequentialQueue.js#L40-L44

which basically waits until the request is done, including processing all middlewares (which includes writing to disk). Only once that's done do we call process() again to process the next request.

marcaaron commented 1 year ago

I don't have a great understanding of why we are waiting there. I think we were not always waiting and the change was introduced in this PR.

Gonna tag in @neil-marcellini and @jasperhuangg to see if they are able to help.

jasperhuangg commented 1 year ago

Hmm, I'm honestly not too familiar with why we're waiting on the storage write to finish before continuing down the SequentialQueue. Correct me if I'm wrong, but I feel like once a request is queued its params can't change, even if those params depend on Onyx values that were changed by a previous request that wrote to Onyx.

I think @neil-marcellini would probably understand this issue a bit better. But I read through the thread and I think it makes sense to use the new library if it provides such dramatic performance improvements. Is there any reason why we would forgo using the new library and remove this "wait for write" stuff instead?

marcaaron commented 1 year ago

Correct me if I'm wrong, but I feel like once a request is queued its params can't change, even if those params depend on Onyx values that were changed by a a previous request that wrote to Onyx.

That is correct. But would you mind clarifying how it relates to this convo? I am having trouble connecting the dots. Thanks!

Is there any reason why we would forgo using the new library and remove this "wait for write" stuff instead?

They're loosely related. We're really discussing two separate considerations in this issue thread.

neil-marcellini commented 1 year ago

we have the network's sequential queue here:

https://github.com/Expensify/App/blob/ff423aa61fde128e981ee071b01011238927516c/src/libs/Network/SequentialQueue.js#L40-L44

which basically waits until the request is done including to process all middlewares, which includes writing to the disk. Only once that's done we call process() again to process the next request.

Yeah, that's been in place for a while, and it was actually introduced in this PR https://github.com/Expensify/App/pull/9306. As explained in the comment, I implemented that bit of code so that getCurrentRequest works.

I did also add the queueOnyxUpdates updateHandler. I don't know if we could safely continue with the next request before the data is written to disk, but I'm guessing we probably could.

jasperhuangg commented 1 year ago

@marcaaron I was responding to this secondary consideration:

I think another strategy here is to simply stop waiting for the writes to finish. Right now we are using the sequential queue, and the queue gets blocked by waiting for an action to complete its write to the database. The important thing to note is that writing to the database isn't a heavy operation: on both native and web we just pass our data to the database provider, and from there on it's handled in "the background". However, we are awaiting the promises of these write operations and blocking the rest of our UI (Onyx) operations.

I was considering the ramifications of if we ignored waiting for writes to finish before unblocking the queue.

marcaaron commented 1 year ago

Ok, so... is everyone saying they agree that we don't actually need to wait for the writes to happen? 😄