deakjahn opened this issue 4 years ago
Thanks for the report. That's bad indeed and hasn't been reported so far.

Two things you can try:
Can you `await` the delete operation? Change

```dart
Countries.delete(txn);
```

to

```dart
await Countries.delete(txn);
```
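In context, that would look like the sketch below, assuming `Countries` is a sembast `StoreRef` and `db` an open `Database` (names taken from the report):

```dart
await db.transaction((txn) async {
  // StoreRef.delete returns a Future with the number of records removed;
  // without await, the callback can return before the delete completes.
  final deleted = await Countries.delete(txn);
  print('deleted $deleted records');
});
```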
Strange, as this should have shown a lint warning in pedantic mode (or a runtime crash in debug mode). It seems you are catching the exception inside the transaction (any logs here to see if an error occurs?). Be aware that this means the transaction can succeed partially, i.e. with whatever has run inside the transaction until the exception occurred.
Try `sembast_sqflite`: sqflite is used as a journal database, with fewer compacting issues to deal with.

Such issues are hard to unit test; I will try to reproduce it, though.
Oh, dear me, I should've spotted those missing awaits. Yes, very, very likely to cause issues, and exactly these kinds of non-reproducible ones. Although I've already moved a bit away from that code to combat the performance issues, I'll restore it temporarily to see if it solves the problem.
The catches are there from the Sqflite era, where they simply warned me with a "Hot reload?" debug print (apart from logging the error, of course): I seem to recall I had some issues with hot reload and simply wanted to warn myself in debug mode that I needed to restart the app instead. But they didn't get triggered at all. If the transaction can't throw an exception normally, I don't want to stick to them at all.
What I did to reorganize was to separate my data into two databases: instead of deleting, I can simply drop one with `DatabaseMode.empty` when refreshing. And, of course, this will also help me with startup time, as I made the database needed for the actual startup as small as possible. This wasn't a pressing issue with Sqflite, of course.

However, with `DatabaseMode.empty` I still see increasing database size and very long compacting transactions. Is this normal? I kinda expected it to present me with an empty database that I can fill without fearing compact retributions. :-)
The awaits were very much required, for sure, but I don't think they solved the issue completely: consecutive runs now produce files of the same size. However, if I compare them (after sorting in a text editor to account for the different order, of course), they are only equal in size, not in contents. The database is not full in either case; just different items are missing in different runs.

I'll add a counter to my code to see how many records I'm actually supposed to write.
> If the transaction can't throw an exception normally, I don't want to stick to them at all.
That is why you should not catch exceptions inside a transaction unless needed. Or rethrow, and catch the exception around the transaction to know that it has failed.
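A sketch of that pattern, with the same assumed names as above and logging standing in for whatever the app actually does:

```dart
try {
  await db.transaction((txn) async {
    try {
      await Countries.delete(txn);
      // ... further writes ...
    } catch (e) {
      print('error inside transaction: $e'); // log only...
      rethrow; // ...then rethrow so the whole transaction is rolled back
    }
  });
} catch (e) {
  // Here we know the transaction as a whole failed.
  print('refresh failed: $e');
}
```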
> instead of deleting, I can simply drop it with `DatabaseMode.empty` when refreshing
Do you close the previous instance properly before dropping the database and opening in empty mode? There is always a risk of pending actions on the previous instance that could mess up the database.
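A minimal sketch of that close-then-reopen sequence, assuming the io factory and a `dbPath` variable (both hypothetical here):

```dart
// Close the previous instance first, so no pending operation
// races against the reopen.
await db.close();

// Reopen, discarding any existing content.
db = await databaseFactoryIo.openDatabase(
  dbPath,
  mode: DatabaseMode.empty,
);
```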
> I still see increasing database size and very long compacting transactions. Is this normal?
No. Only delete and update should trigger compacting. Just adding should be fine.
It would be good if you manage to reproduce your issues in a unit test, though that's not always easy.
> Do you close the previous instance properly before dropping the database
Probably not; in the meantime I had come to suspect this as an error, too. But for now I've reverted to the previous code (one database, no `empty` mode) to see if I can pin down or eliminate the problem altogether, not worrying about the transaction performance.

You were right about pedantic, of course: although I had my own `analysis_options.yaml`, it wasn't using pedantic.
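For reference, a minimal `analysis_options.yaml` that pulls in pedantic, whose ruleset includes the `unawaited_futures` lint that would have flagged the missing awaits:

```yaml
# Minimal setup; pedantic's rules include unawaited_futures.
include: package:pedantic/analysis_options.yaml
```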
So, for now, I'll go on adding a counter and seeing whether I actually get as many records as I'm supposed to put in.
Retracting what I said earlier. Stupid of me. The differences are due to the few items that were `add`ed rather than `put`, so their keys aren't stable. Sorry. I'll go back to `empty` mode and test it, too.
So, in the end, it was basically my mistake of not using `await`, but I don't know how you could make this foolproof. Pedantic is hit or miss, after all.
@alextekartik Alex, one more issue, but if it's known, I don't want to report it separately. Is it a known limitation that `await _dbFactory.openDatabase()` (web factory) never returns after a hot reload? I don't have the slightest problem with it on the first run, but after the first reload, the next line is never reached. No log, nothing; it just stops in its tracks.
> I don't want to report it separately
@deakjahn I've tracked the issue here: #165
I have a not too large database: about one megabyte, around 2700 records in 12 stores. When I need to refresh the contents from the net, I use a transaction, delete most of the stores (only a few items remain), and then fill them up again.
plus a few more, of the same pattern.
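The pattern described above might look roughly like this hedged sketch (`Countries` comes up later in the thread; `Cities` and `freshCountries` are hypothetical names for the other stores and the freshly fetched data):

```dart
await db.transaction((txn) async {
  // Clear the stores being refreshed; every call must be awaited.
  await Countries.delete(txn);
  await Cities.delete(txn);

  // Fill them up again from the freshly fetched data.
  for (final country in freshCountries) {
    await Countries.add(txn, country);
  }
  // ... same for the other stores ...
});
```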
So much deletion and insertion inevitably leads to a compact operation as the transaction finishes. Although this introduces some performance issues, the more pressing one seems to be that I lose data during the compacting. I use the same data source every time, and I make sure, using `print()` counts, that the data I push into the database is the same for sure. But the resulting database will not be the same: it varies in byte size and record count quite considerably in repeated tests. It looks mostly like a flushing problem at the end of writing the new file: records that happen to be written at the end of the database are either there or not, in a completely random way. But I spotted a few missing records in other places as well.

A sample of four consecutive runs: