KnorpelSenf opened 2 years ago
I rewrote delete to use a single query.
For update, however, as you mentioned, DenoDB doesn't seem to support an upsert operation. I tried the following, which in theory should use a single query in most cases, needing a second one only when the session is first created:
```ts
async write(key: string, value: T) {
  const jsonValue = JSON.stringify(value);
  const update = await SessionJson.where("key", key)
    .update("value", jsonValue) as unknown as { affectedRows: number };
  if (update.affectedRows === 0) {
    await SessionJson.create({
      key,
      value: jsonValue,
    });
  }
}
```
However, when testing I realized that the return value of the update method is connector-dependent. The code worked fine for Sqlite (which the tests use), but in Postgres update just returned `[]` (an empty array).
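If the adapter ever does need to interpret that return value, a defensive helper could normalize the shapes observed so far. This is only a sketch based on the observations above (`updatedRowCount` is a made-up name, not a denodb API), and it deliberately treats an array as "unknown", since Postgres returned `[]` even for successful updates:

```typescript
// Defensive interpretation of a connector-dependent update() result.
// Shapes are assumptions drawn from the observations in this thread.
function updatedRowCount(result: unknown): number | undefined {
  if (typeof result === "number") return result;
  if (
    result !== null &&
    typeof result === "object" &&
    !Array.isArray(result) &&
    "affectedRows" in result
  ) {
    const n = (result as { affectedRows: unknown }).affectedRows;
    if (typeof n === "number") return n;
  }
  // Ambiguous (e.g. Postgres's []): the caller must NOT assume zero rows,
  // or it would create duplicate sessions on every write.
  return undefined;
}
```

Returning `undefined` for the ambiguous cases at least forces the caller to pick an explicit fallback instead of silently misreading `[]` as "no rows updated".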
We should probably wait for (or contribute to) DenoDB before optimizing that part.
As for using a transaction, I can add one for the update, which in theory could prevent race conditions that overwrite the latest change. However, DenoDB also supports MongoDB, which doesn't have transactions the way RDBMSs do. So, using a transaction would basically drop support for that connector:

> Transactions are supported on MySQL, MariaDB, SQLite and Postgres (not MongoDB).

(from here)
I don't really want to add conditional statements that change behavior based on the connector type, since the whole point of an ORM is to abstract these details. But it's up to you: if you still think I should make changes, I will.
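For what it's worth, the hard failure in update-then-create can be avoided without connector branching by treating a duplicate-key error from `create` as "a concurrent writer got there first" and retrying the update. The following is only a sketch against an in-memory stand-in (`FakeTable` and its error are illustrative; no denodb API or actual error type is assumed):

```typescript
// In-memory stand-in for a table with a unique "key" column.
class FakeTable {
  private rows = new Map<string, string>();
  async update(key: string, value: string): Promise<number> {
    if (!this.rows.has(key)) return 0; // no matching row
    this.rows.set(key, value);
    return 1; // one row affected
  }
  async create(key: string, value: string): Promise<void> {
    if (this.rows.has(key)) throw new Error("UNIQUE constraint failed");
    this.rows.set(key, value);
  }
  async read(key: string): Promise<string | undefined> {
    return this.rows.get(key);
  }
}

// update-then-create, but a duplicate-key conflict on create is treated
// as "the row exists now": retry the update instead of crashing.
async function write(table: FakeTable, key: string, value: string) {
  const affected = await table.update(key, value);
  if (affected === 0) {
    try {
      await table.create(key, value);
    } catch {
      await table.update(key, value); // row appeared concurrently; update it
    }
  }
}
```

This still isn't a true atomic upsert (a concurrent delete between the two statements could turn the final update into a no-op), but it removes the duplicate-key crash of the plain update-then-create pattern.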
I don't really have strong opinions on what implementation to choose, that's up to you. I'm rather striving for correctness. Not using a transaction in the way you propose may cause data loss (if the user makes mistakes in configuring their middleware). I agree that platform-specific code is not optimal, but on the other hand a quick branching for Mongo that uses UPSERT, and a transaction otherwise still sounds pretty feasible to me.
Any updates on this?
I've been digging into denodb and am not very happy with what I've found so far:

- `DataTypes.JSON` returns the parsed object for Postgres and MongoDB, but a string for MySQL and Sqlite.
- `DataTypes.FLOAT` returns a number on all connectors except Postgres, where it returns a string.
- `DataTypes.BOOLEAN` returns a bool on MongoDB and Postgres, but a number (0/1) on the others.
- `DataTypes.DATE`, which doesn't have hours/min/sec, is sensitive to timezones! If on UTC+X it may store the previous day.
- `DataTypes.DATETIME` is inconsistent. On MySQL it returns what you gave it as-is. On PostgreSQL, it converts to UTC and returns a timezone-aware date. In Sqlite it converts to UTC, then to string, but when reading it reads the string in local time (basically shifting it by the timezone offset every time you read/write it).
- `DataTypes.BIG_INTEGER` does not accept a `BigInt` (`2n`) when inserting, yet when you read a `DataTypes.BIG_INTEGER` it gives you a `BigInt`.
- There is no way to write `DataTypes.DECIMAL` by providing a string, so it's basically impossible to store values that cannot be represented by JS's `number`.
- No `Blob` support for `DataTypes.BINARY`.
- The `sync` feature cannot add new columns without dropping the table first.
- Reading a row with `null` in it errors (`TypeError: Cannot read properties of null (reading '$date')`), but there is no test case for anything other than varchar/number types, even for other connectors.
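The Sqlite `DATETIME` round trip described above can be reproduced in isolation, without denodb: serialize a date to a UTC `"YYYY-MM-DD HH:MM:SS"` string, then parse that string back with the local-time `Date` constructor. The helper names below are made up for the sketch; only the behavior is taken from the observation above.

```typescript
// Write side: format the instant as a UTC "YYYY-MM-DD HH:MM:SS" string.
function toUtcString(d: Date): string {
  return d.toISOString().slice(0, 19).replace("T", " ");
}

// Read side: parse that string with the LOCAL-time Date constructor,
// which is where the shift happens.
function parseAsLocalTime(s: string): Date {
  const [date, time] = s.split(" ");
  const [y, m, day] = date.split("-").map(Number);
  const [h, min, sec] = time.split(":").map(Number);
  return new Date(y, m - 1, day, h, min, sec); // interpreted as local time
}

const original = new Date(Date.UTC(2023, 0, 15, 12, 34, 56));
const roundTripped = parseAsLocalTime(toUtcString(original));
// On any machine not running in UTC, roundTripped differs from original
// by exactly the local timezone offset, and every further read/write
// cycle shifts the value again.
```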
The point being: unless I end up forking denodb and sinking some time into it, or the original author finds the time and motivation to start maintaining it again (even if just reviewing/accepting pull requests), I wouldn't want to use it in my own projects.
Until denodb becomes more production-ready (or I get in over my head and attempt to maintain a fork of it), I will probably switch back to Node.js instead of Deno, and would lack the motivation to maintain a library not used in my own projects.
That frustration is understandable. Deno is quite new and the community has yet to grow. ORMs are a very hard problem to tackle, so in combination with poor maintenance, this can be tiring.
I do not think that any of the described issues are a problem for grammY. After all, all we ever need is a simple table that can store string values for string identifiers. I assume denodb is usable enough for that.
I cannot blame you if you want to drop working on this. This is especially true if you plan on delaying your migrations to Deno for some more time. On the other hand, a storage adapter should not need much maintenance once it's complete. denodb will probably not change the syntax of primitive CRUD statements, and the storage adapter interface will not change, either. In other words: if you decide to get it done, you can drop support for it immediately after, and other people will take care of it :)
Also, consider opening an issue (or ten) with your findings at the denodb repo.
There are currently a few things worth improving about the implementations of both `write` and `delete`.

I suggest changing `delete` to `.where('key', key).delete();` (correct?) which should do the DELETE in a single statement.

I further suggest using a transaction inside the write operation. https://github.com/eveningkid/denodb/issues/211 suggests that UPSERT operations are not supported by DenoDB, so I don't think we can implement this as easily as on other platforms.
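The difference between the two delete strategies can be sketched with an in-memory stand-in that counts issued statements (illustrative only; `CountingTable` is made up, not denodb):

```typescript
// Counts how many statements each delete strategy would issue.
class CountingTable {
  queries = 0;
  private rows = new Map<string, string>();
  insert(key: string, value: string): void {
    this.rows.set(key, value);
  }
  // fetch-then-delete: a SELECT followed by a DELETE (two statements)
  findThenDelete(key: string): void {
    this.queries++; // SELECT ... WHERE key = ?
    if (this.rows.has(key)) {
      this.queries++; // DELETE ... WHERE key = ?
      this.rows.delete(key);
    }
  }
  // the suggested .where('key', key).delete(): one statement
  deleteWhere(key: string): void {
    this.queries++; // DELETE ... WHERE key = ?
    this.rows.delete(key);
  }
}
```

Besides halving the round trips, the single-statement form also removes the window in which another writer could touch the row between the SELECT and the DELETE.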