flaviut opened this issue 11 months ago
I've updated the above with some more data.
NOTE: this time is compression time, not decompression time
It seems to me that zstd with a 64 KiB dictionary and 8 KiB pages is likely the best tradeoff here. zstd keeps good decompression speed across a range of compression levels: https://github.com/facebook/zstd#benchmarks
Thank you for your input. I really appreciate it. I thought about using SQLite for a while, but I never found time (and motivation) to do it. Let me share my findings:
The usage of the site will remain roughly the same - the user will be notified about a new database, which they can then fetch
I find this part of the site particularly frustrating. I don't like having to constantly refresh all the data and think about it; I'd rather it just be handled seamlessly in the background.
Most users use the full-text feature. It is very easy to run into one of the worst-case complexity scenarios that leads to fetching a non-trivial part of the database.
Interesting. The full-text feature definitely needs to be investigated further. I don't know about SQLite, but with PostgreSQL there are features like covering indexes and index-only scans. I wonder if it'd be possible to get that in SQLite FTS.
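For what it's worth, plain SQLite does use covering indexes: if every column a query touches is in the index, the query plan reads the index alone and never fetches table pages (which matters a lot when each page fetch is an HTTP range request). A minimal sketch with Python's bundled `sqlite3` - the `parts` schema and column names here are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (mpn TEXT, category TEXT, stock INTEGER)")
# This index "covers" the query below: both the filtered column (category)
# and the selected column (stock) are in the index, so SQLite can answer
# from the index alone without touching the parts table at all.
con.execute("CREATE INDEX idx_cat_stock ON parts (category, stock)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT stock FROM parts WHERE category = ?",
    ("resistor",),
).fetchall()
# The 'detail' column of the plan should mention "USING COVERING INDEX".
print(plan)
```

Whether something equivalent is reachable through SQLite FTS is the open question, but for the non-FTS filter queries this is free as long as the indexes are designed around the actual query shapes.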
I'm not familiar with the existing code, but there are also plenty of knobs around tokenizing the query, stop words, throttling delay, limiting result counts, etc.
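As one concrete example of such a knob: FTS5 lets you configure the tokenizer at table-creation time. A sketch using Python's bundled `sqlite3` (assumes the interpreter's SQLite is built with FTS5, which is true for standard CPython builds; the table and data are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# unicode61 with diacritic removal folds accented characters in both the
# indexed text and the query, so searching "cafe" matches "café".
con.execute(
    "CREATE VIRTUAL TABLE docs "
    "USING fts5(body, tokenize = 'unicode61 remove_diacritics 2')"
)
con.execute("INSERT INTO docs (body) VALUES ('café résistance')")
rows = con.execute(
    "SELECT body FROM docs WHERE docs MATCH 'cafe' LIMIT 20"
).fetchall()
print(rows)  # the accented row is found
```

The `LIMIT` on the MATCH query is the cheapest version of "limiting result counts"; stop-word filtering and query throttling would live in application code around this.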
If you have your old notes, I'd be interested in reading them.
Thus, I think a better solution is to store a compressed database in local storage and use SQLite over a compressed filesystem. The fetching will be fairly quick, as we only need to download the image and do nothing else (96% of the current update time is spent updating entries in IndexedDB).
Not opposed to this, but I don't see why it couldn't be both: fetch chunks only as they are needed, and cache them locally. In fact, I wonder if web browsers already handle caching internally these days.
Next, I was thinking about a suitable DB schema. Careful design can save a lot of DB size. While the representation of the categories is trivial, I struggled with a suitable design for the attributes, as they are non-uniform.
I stuck with the existing IndexedDB schema design in my testing. But yes, there is plenty to look at in terms of structuring the data so it can be queried easily.
I want to couple this huge change with a rewrite of the frontend into React functional components and a general refactor.
I've often felt this same temptation to combine two major changes into one :smiley:. I've found it ends up more motivating and more efficient for me to do changes in smaller chunks, even if at times it seems like I'm doing work that I will soon replace.
There are many configuration options for full-text search with SQLite. I don't remember if I wrote my research about this down somewhere, but you can reduce the FTS size by >90% by setting detail=none and using a contentless table (https://www.sqlite.org/fts5.html#the_detail_option). It does significantly reduce the power of the queries you can run, though.
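To make the tradeoff concrete, here is a minimal sketch of that configuration using Python's bundled `sqlite3` (assumes FTS5 is compiled in; the table name, rowid, and text are invented for the example):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# detail=none drops all positional information (the bulk of the index size);
# content='' makes the table contentless, so FTS stores only the index and
# the actual text lives in whatever table you keep elsewhere.
con.execute(
    "CREATE VIRTUAL TABLE idx USING fts5(description, content='', detail=none)"
)
# With a contentless table you supply the rowid yourself, so results can
# point back at the corresponding row in the real data table.
con.execute(
    "INSERT INTO idx (rowid, description) VALUES (42, 'ceramic capacitor 10uF')"
)
# Only bare-term / AND / OR queries still work with detail=none; phrase and
# NEAR queries do not, and SELECTing the text back returns NULL.
rows = con.execute("SELECT rowid FROM idx WHERE idx MATCH 'capacitor'").fetchall()
print(rows)  # [(42,)]
```

So the >90% size reduction costs you phrase/NEAR queries and stored text, which may be acceptable if part searches are mostly bags of keywords.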
Also note that SQLite is very easy to compile, and there are drop-in Python packages that bring a newer and more complete SQLite build into Python.
There's one powerful statically hosted full-text search engine I know of, Summa, that scales to a terabyte of data, but for an index with a compressed size of 50 MB it's probably not worth it.
There are a few other JS libraries that let you build a keyword / full-text search index, serialize it to JSON, and download it in full (at a smaller size); you could then use it in combination with SQLite or something else to fetch the full data dynamically. One example (I think) is https://github.com/nextapps-de/flexsearch
Also, just as a note if you have too much free time: if you download the whole DB anyway, you can alternatively ship a minimal DB without indexes and without FTS, download that, and create the indexes and FTS index locally - trading bandwidth for local compute.
The intent here is to use this with https://github.com/phiresky/sql.js-httpvfs. There are major missing pieces:
Quick benchmarks:
Note: not super helpful regarding CPU & time, because I have not tested read performance.
See https://github.com/yaqwsx/jlcparts/issues/37