TMomemt opened this issue 2 years ago
https://github.com/Nozbe/WatermelonDB/blob/master/src/adapters/sqlite/index.js#L344
You can use this to run a raw query on the underlying database. However, you must be very careful: by definition, a raw SQL query cannot know about in-memory JS caches. So you must run the query and simultaneously update all cached models so they stay consistent with the DB state.
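To make the cache-consistency point concrete, here is a minimal simulation in plain JavaScript (not WatermelonDB itself — the `db` array and `cache` map are stand-ins for the SQLite table and the collection's record cache):

```javascript
// Stand-in for the SQLite table: an array of raw rows.
const db = [
  { id: 'a', read: false },
  { id: 'b', read: false },
]

// Stand-in for the collection's in-memory record cache, keyed by id.
// Cached models are separate objects, just like cached Model instances.
const cache = new Map(db.map((row) => [row.id, { ...row }]))

// A raw "UPDATE messages SET read = 1" touches only the database...
for (const row of db) row.read = true

// ...so cached models are now stale relative to the DB:
console.log(cache.get('a').read) // → false (inconsistent)

// Reconcile by applying the same change to every cached model.
for (const model of cache.values()) model.read = true
console.log(cache.get('a').read) // → true (consistent again)
```

This is the "simultaneously update all models in cache" step: without the second loop, any code that reads the cached model sees pre-update data.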
@radex May I ask if this (your last comment, using `SQLiteAdapter.unsafeExecute`?) is something you sometimes do at Nozbe, or something you avoid at all costs / never needed? Just to get a better perspective before choosing it (or deciding against it).
I'm interested in understanding the answer to @PEZO19's question as well. I'm surprised that a query like `UPDATE table SET send = 1 WHERE send = 0` needs such an "unsafe" workaround to be applied to multiple records.
Hi, first of all thank you for your work on this repo.
Background
My app has a message box with a "Read All" button. The following sample code is called when this button is pressed.
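The original snippet isn't shown in this copy of the issue; below is a hedged sketch of the per-record "read all" pattern it describes (the field name `readed` is taken from the SQL quoted later in the issue; everything else is assumed, and it is modeled with plain objects so it runs standalone — in real WatermelonDB this would be a `collection.query(...).fetch()` followed by per-record updates inside a writer):

```javascript
// Toy stand-in for a messages collection (assumed shape, not
// WatermelonDB's actual API). The point is the cost shape: one model
// update call per record, which adds up past 10,000 records.
const messages = Array.from({ length: 10000 }, (_, i) => ({
  id: String(i),
  readed: i % 2 === 0, // half already read
}))

// Step 1: query the unread records.
const unread = messages.filter((m) => !m.readed)

// Step 2: update each record individually — 5,000 separate updates here.
// Each real update also pays model instantiation and JSON encode/decode.
for (const m of unread) {
  m.readed = true
}

console.log(messages.every((m) => m.readed)) // → true
```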
When the total amount of data exceeds 10,000 records, I find that the performance of the above code gets noticeably bad.
Optimization
After debugging, I found that most of the time was spent querying data and on JSON encoding/decoding. In my business logic the individual record objects aren't actually needed, so I tried the `unsafeSqlQuery` API with a statement like `update message set readed=1 where _status is not 'deleted'`. Passing SQL directly to the native database gives much higher performance: marking over 20,000 records as read takes only about 1% of the time of the standard APIs.
But the `unsafeSqlQuery` API only updates the data in the database; it cannot update the cached models in the collection's record cache map. As a result, a standard API call like `query` will return the stale, pre-update model if it has been cached. I also tried updating the records in memory after executing the SQL, but this feels like a hack and makes the code more complex, like the following.
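One way to keep the manual reconciliation from sprawling is to centralize it: a single helper applies the same (predicate, patch) pair to both the database rows and the cached models, so the two can't drift apart. A self-contained sketch with plain objects (in real code the raw UPDATE would go through the adapter, and the cache would be the collection's record cache — all names here are illustrative):

```javascript
// Apply one logical update to both the "database" rows and the cached
// models, so the WHERE predicate and the SET patch live in one place.
function bulkUpdate(dbRows, cache, predicate, patch) {
  for (const row of dbRows) {
    if (predicate(row)) Object.assign(row, patch)
  }
  for (const model of cache.values()) {
    if (predicate(model)) Object.assign(model, patch)
  }
}

const dbRows = [
  { id: '1', readed: false, _status: 'synced' },
  { id: '2', readed: false, _status: 'deleted' },
]
const cache = new Map([['1', { id: '1', readed: false, _status: 'synced' }]])

// Equivalent of: update message set readed = 1 where _status is not 'deleted'
bulkUpdate(dbRows, cache, (r) => r._status !== 'deleted', { readed: true })

console.log(dbRows[0].readed, dbRows[1].readed) // → true false
console.log(cache.get('1').readed) // → true
```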
Question
So I want to ask: is there a way to update an entire table without querying it first? Or is there a more elegant and safe way to update the in-memory cached models?
Waiting for your reply, thanks!