grammyjs / storage-denodb

DenoDB database storage adapter for grammY
GNU General Public License v3.0

Improve write and delete ops performance and correctness #1

Open · KnorpelSenf opened this issue 2 years ago

KnorpelSenf commented 2 years ago

There are currently a few things worth improving about the implementations of both write and delete.

  1. They use two requests per operation. This is undesirable for performance reasons.
  2. While technically no two requests for the same key should overlap, this can happen if a user makes mistakes in their update handling. Generally speaking, you may not run two dependent queries against a database without a transaction. Hence, if multiple queries per operation are required, they have to run inside a transaction.

I suggest changing delete to .where('key', key).delete(); (correct?), which should perform the DELETE in a single statement.
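For concreteness, a minimal sketch of that single-statement delete, assuming the sessions are stored in the SessionJson model used elsewhere in this adapter (the method name and its surroundings are illustrative):

  // Illustrative only: one DELETE via DenoDB's query builder,
  // assuming a SessionJson model with a "key" column.
  async delete(key: string) {
    await SessionJson.where("key", key).delete();
  }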

I further suggest using a transaction inside the write operation. https://github.com/eveningkid/denodb/issues/211 suggests that UPSERT operations are not supported by DenoDB, so I don't think we can implement this as easily as on other platforms.

zumoshi commented 2 years ago

I rewrote delete to use a single query.

For update, however, as you mentioned, DenoDB doesn't seem to support an upsert operation. I tried the following, which in theory should use a single query in most cases and a second one only when the session is first created:

  async write(key: string, value: T) {
    const jsonValue = JSON.stringify(value);
    // Try to update an existing session row first.
    const update = await SessionJson.where("key", key)
      .update("value", jsonValue) as unknown as { affectedRows: number };
    // If no row was updated, the session does not exist yet, so create it.
    if (update.affectedRows === 0) {
      await SessionJson.create({
        key,
        value: jsonValue,
      });
    }
  }

However, when testing I realized that the return value of the update method is connector-dependent. The code worked fine for SQLite (which the tests use), but on Postgres update just returned [] (an empty array).

We should probably wait for (or contribute to) DenoDB before optimizing that part. As for using a transaction, I can add one for the update, which in theory could prevent race conditions that overwrite the latest change. However, DenoDB also supports MongoDB, which doesn't have transactions the way RDBMSs do.

So, using a transaction would basically drop support for that connector:

Transactions are supported on MySQL, MariaDB, SQLite and Postgres (not MongoDB).

(quoted from the DenoDB documentation)

I don't really want to add conditional statements that change behavior based on the connector type, since the whole point of an ORM is to abstract these details away. But it's up to you; if you still think I should make the changes, I will.

KnorpelSenf commented 2 years ago

I don't really have strong opinions on which implementation to choose; that's up to you. I'm rather striving for correctness. Skipping the transaction, as you propose, may cause data loss (if the user makes mistakes in configuring their middleware). I agree that platform-specific code is not optimal, but on the other hand, a quick branch for Mongo that uses UPSERT, and a transaction otherwise, still sounds pretty feasible to me (see the sketch below).
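To make that concrete, here is a rough sketch of the relational (non-Mongo) branch only: the update-or-create from above wrapped in a transaction. It assumes the connected DenoDB Database instance is available as db and exposes a transaction(callback) helper, and it inherits the affectedRows caveat discussed above; both points would need to be verified per connector. The Mongo branch would instead need a native upsert, which DenoDB does not seem to expose.

  // Sketch of the relational path only; `db` and `db.transaction`
  // are assumptions here and must be verified against DenoDB.
  async write(key: string, value: T) {
    const jsonValue = JSON.stringify(value);
    await db.transaction(async () => {
      // Both statements run inside one transaction, so the
      // update-then-create pair cannot interleave with another write.
      const update = await SessionJson.where("key", key)
        .update("value", jsonValue) as unknown as { affectedRows: number };
      if (update.affectedRows === 0) {
        await SessionJson.create({ key, value: jsonValue });
      }
    });
  }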

KnorpelSenf commented 2 years ago

Any updates on this?

zumoshi commented 2 years ago

I've been digging into denodb and am not very happy with what I've found so far:

The point being: unless I end up forking denodb and sinking some time into it, or the original author finds the time and motivation to start maintaining it again (even if just reviewing/accepting pull requests), I wouldn't want to use it in my own projects.

Until denodb becomes more production-ready (or I get in over my head and attempt to maintain a fork of it), I will probably switch back to Node.js instead of Deno, and I would then lack the motivation to maintain a library that is not used in my own projects.

KnorpelSenf commented 2 years ago

That frustration is understandable. Deno is quite new and the community has yet to grow. ORMs are a very hard problem to tackle, so in combination with poor maintenance, this can be tiring.

I do not think that any of the described issues is a problem for grammY. After all, all we ever need is a simple table that can store string values under string identifiers. I assume denodb is at least that usable.
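For illustration, such a table boils down to something along these lines; the exact model definition in this adapter may differ, and the field options shown here are assumptions:

  import { DataTypes, Model } from "https://deno.land/x/denodb/mod.ts";

  // A key/value table: string identifiers mapped to stringified session data.
  class SessionJson extends Model {
    static table = "sessions";
    static fields = {
      key: { type: DataTypes.STRING, primaryKey: true },
      value: DataTypes.TEXT,
    };
  }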

I cannot blame you if you want to drop working on this, especially if you plan on delaying your migration to Deno for some more time. On the other hand, a storage adapter should not need much maintenance once it's complete. denodb will probably not change the syntax of primitive CRUD statements, and the storage adapter interface will not change, either. In other words: if you decide to get it done, you can drop it immediately after, and other people will take care of it :)

KnorpelSenf commented 2 years ago

Also, consider opening an issue (or ten) with your findings at the denodb repo.