w3c / IndexedDB

Indexed Database API
https://w3c.github.io/IndexedDB/

Why is IndexedDB this bad? #369

Closed ghost closed 2 years ago

ghost commented 2 years ago

Hello everyone, my opinion on IndexedDB might be a little too harsh, but in my eyes IndexedDB is very confusing and annoying to work with. Instead of taking the same approach as SQL or NoSQL databases like MongoDB or the in-memory LokiJS, someone (Ali Alabbas & Joshua Bell) decided to add cursors that are slower than simply calling getAll() and looping through everything. As if this weren't enough, IndexedDB blocks the main thread with its cursor. I would rather use IndexedDB as a simple store, getAll(), and throw everything into LokiJS, where I'm able to do queries and actually work with the data.

I truly want to understand why IndexedDB is the way it is: why is it blocking the main thread instead of running inside a worker, and why is there nothing better than a cursor that looks like a forEach() loop with some extra methods like continue()?

Why am I opening a transaction only to then specify a store and what I want to add again? Yes, I understand that I'm able to add into multiple stores with one transaction, but this could have worked like Promise and Promise.all(), where it would be possible to just throw store "transactions" into a TransactionsAll():

```js
let addUser = db.objectStore('users', 'readwrite').add({...})
let addPost = db.objectStore('posts', 'readwrite').add({...})
let request = await TransactionsAll([addUser, addPost])
```

That would be much easier to read and to deal with; each objectStore request would be a separate transaction. IndexedDB could actually be great and take so much load off databases by just syncing IndexedDB with remote DB clusters, but instead we have this weird thing.
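For what it's worth, something close to the proposed shape can be approximated today by wrapping each IDBRequest in a Promise and combining them with Promise.all() inside a single transaction. This is only a sketch, not a spec API; the `promisify` and `addUserAndPost` helper names are my own:

```javascript
// Sketch: wrap an IDBRequest so its result can be awaited.
function promisify(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Assumes `db` is an open IDBDatabase with 'users' and 'posts' stores.
async function addUserAndPost(db, user, post) {
  const tx = db.transaction(['users', 'posts'], 'readwrite');
  // Both adds run in the same transaction, so they commit or abort together.
  return Promise.all([
    promisify(tx.objectStore('users').add(user)),
    promisify(tx.objectStore('posts').add(post)),
  ]);
}
```

One difference from the proposal above: here both writes share one transaction rather than each being its own, which is usually what you want for related records.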

If I offended anyone, it wasn't my intention and I'm sorry. I'm just a little frustrated and confused about why IndexedDB is this way.

inexorabletash commented 2 years ago

> decided to add cursors that are slower than simply getAll() and looping through everything

getAll() was added after cursors; getAll() also requires unbounded memory in the general case, so asynchronous iteration is a necessary primitive.

> I rather use IndexedDB as a simple store to getAll() and throw everything into LokiJS where I'm able to do queries and actually work with the data.

Great - if that's working for you, then IndexedDB's design is effective. The intention (predating my involvement) was to provide a low-level key value store on which to build more expressive storage engines.

> why is it blocking the main thread

It's not — all requests are asynchronous, returning an IDBRequest and signaling completion using events.
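As a concrete illustration of that model (the store name and callback shape here are illustrative, not from the thread):

```javascript
// Sketch: an IndexedDB read never blocks. The call returns an IDBRequest
// immediately, and the result is delivered later via an event.
function getUser(db, key, callback) {
  const request = db.transaction('users').objectStore('users').get(key);
  request.onsuccess = () => callback(null, request.result); // fires on a later task
  request.onerror = () => callback(request.error);
  // Control returns to the caller here, before the result exists.
}
```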

> but this could have been something like Promises

IndexedDB predates Promises. Like most problems on the Web Platform, this could be solved with a time machine.

ghost commented 2 years ago

> getAll() was added after cursors; getAll() also requires unbounded memory in the general case, so asynchronous iteration is a necessary primitive.

So should getAll() now be preferred over a cursor? How is MongoDB getting all the data, then? Why is MongoDB so much faster than IndexedDB? Shouldn't IndexedDB be as fast as Redis? Redis and IndexedDB both use key:value pairs and are key-value stores. Another way would be to use the ID and get IDs in a range, but then it would still need a cursor, so it's a dead end.

> Great - if that's working for you, then IndexedDB's design is effective. The intention (predating my involvement) was to provide a low-level key value store on which to build more expressive storage engines.

This might be true, but the cursor seems too slow compared to getAll(). Isn't there a faster way to search through the objectStore? IndexedDB has its own DB engine, right? So why isn't the cursor written in a low-level language like C, storing the values as BSON?

Also, thank you for taking the time to reply.

inexorabletash commented 2 years ago

Since cursors return a single value at a time, and returning values is an asynchronous operation, performance will be lower than getAll() when fetching multiple values. But getAll() may hit memory limits. Some browser engines (e.g. Chrome) optimize cursor operations by pre-fetching multiple results to avoid cross-process hops, but IndexedDB still requires a turn of the event loop to deliver each result.
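One way to make that per-turn delivery more ergonomic in application code is to wrap a cursor request in an async generator. This is a sketch of my own, not a spec API; each `yield` still costs a turn of the event loop, as described above:

```javascript
// Sketch: turn an openCursor() request into an async iterator.
async function* iterateCursor(request) {
  while (true) {
    // The same request fires onsuccess once per cursor step.
    const cursor = await new Promise((resolve, reject) => {
      request.onsuccess = () => resolve(request.result);
      request.onerror = () => reject(request.error);
    });
    if (!cursor) return; // no more records
    yield cursor.value;
    cursor.continue(); // schedules the next onsuccess
  }
}

// Browser usage (assumes `store` is an IDBObjectStore):
// for await (const value of iterateCursor(store.openCursor())) { /* ... */ }
```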

> Other way would be to use the ID and get IDs in range but then it would still need a cursor, so it's a dead end.

I'm not sure what you mean here. If getAll() works for you to get all records within a key range, great - use it. Constructing indexes is the usual way to avoid unnecessary iteration.
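For example, a secondary index plus a key range lets getAll() fetch just the matching records, with no cursor involved. The store and index names below are illustrative, not from the thread:

```javascript
// Sketch: fetch records in a key range via an index, without a cursor.
// Assumes the index was created during onupgradeneeded, e.g.:
//   store.createIndex('by_date', 'date');
function postsBetween(db, start, end) {
  return new Promise((resolve, reject) => {
    const range = IDBKeyRange.bound(start, end);
    const request = db
      .transaction('posts')
      .objectStore('posts')
      .index('by_date')
      .getAll(range);
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}
```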

> why isn't the cursor written in low level like C

You're welcome to inspect the implementations in the current browser engines (Chromium, Gecko, WebKit). You'll find they do all implement IndexedDB in C++ at the moment.

Questions about how to use IndexedDB efficiently probably belong on Stack Overflow rather than in this repo (hence leaving this issue closed).