Closed: cesardevera closed this issue 7 years ago
I've been thinking about this for a bit, and haven't found a performant way to handle this without having to read the entire dataset.
The data is naturally sorted by the Key value, so the best-performing way to handle what you want is to use the post's date as your key. Make sure to use a lexicographically sortable date format for your date/key, like RFC 3339; that way everything you pull out, whether limited or skipped, is automatically in the right order.
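For example, here is a minimal sketch of that approach, assuming a hypothetical Post type and insertPost helper (Open and Insert are bolthold's standard calls):

package main

import (
    "log"
    "time"

    "github.com/timshannon/bolthold"
)

// Post is a hypothetical blog post type, used only for illustration.
type Post struct {
    Title string
    Tags  []string
}

// insertPost keys each post by its RFC 3339 publication time. RFC 3339
// timestamps in UTC sort lexicographically in chronological order, so the
// store's natural key order is also date order.
func insertPost(store *bolthold.Store, p Post, published time.Time) error {
    return store.Insert(published.UTC().Format(time.RFC3339), p)
}

func main() {
    store, err := bolthold.Open("blog.db", 0666, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer store.Close()

    if err := insertPost(store, Post{Title: "Hello"}, time.Now()); err != nil {
        log.Fatal(err)
    }
}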
I've added a separate issue to track adding this as an enhancement.
What if there were a way to query by the indexes? In the docs we see:
type Person struct {
    Name     string
    Division string `boltholdIndex:"Division"`
}
What if we could do something like:
store.findByIndex("Division", bolthold.Where.....)
This way, we could create artificial fields and index them according to the queries we plan to run.
I've found a project similar to Bolthold named Storm (https://github.com/asdine/storm) that uses this approach, e.g. db.AllByIndex("CreatedAt", &users), but it does not have the aggregate queries, sub-queries, MatchFunc, or insert/update/upsert ready to use.
Bolthold seems like a more complete solution, but I really miss sorting/order by.
Right now, if the field passed to the Where function is indexed, it'll use that index. I wanted to make index selection implicit because eventually I planned on having a fairly simple query optimizer that would choose the best starting index based on its uniqueness (i.e. the one requiring the fewest reads).
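As a concrete illustration, here is a sketch that reuses the Person struct from the example above, with a made-up findPeopleInDivision helper and the usual github.com/timshannon/bolthold import:

// Because Division carries a boltholdIndex tag, the Where criterion on
// that field is answered from the index rather than by scanning every record.
func findPeopleInDivision(store *bolthold.Store, division string) ([]Person, error) {
    var people []Person
    err := store.Find(&people, bolthold.Where("Division").Eq(division))
    return people, err
}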
I found the query.skip() and query.limit() methods, but is there any way to sort the results?
Let's say I have a blog and I want to show the 10 most recent posts matching some tags. Should I retrieve all posts and sort them myself? Wouldn't that defeat the purpose of skip and limit (at least for pagination)?
Any suggestions?
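For reference, here is a rough sketch of the kind of query being described, with a hypothetical Post type and a single Tag field standing in for the tag matching (Find, Where, Skip, and Limit are bolthold's own methods):

package main

import "github.com/timshannon/bolthold"

// Post and its Tag field are hypothetical stand-ins for "posts matching
// some tags"; the real tag model isn't shown in the thread.
type Post struct {
    Title string
    Tag   string
}

// pageOfPosts filters by tag and paginates with Skip/Limit. The results
// follow the store's key order rather than any other field, which is why
// keying records by date (as suggested above) keeps pages in date order.
func pageOfPosts(store *bolthold.Store, tag string, page, size int) ([]Post, error) {
    var posts []Post
    err := store.Find(&posts,
        bolthold.Where("Tag").Eq(tag).
            Skip(page*size).
            Limit(size))
    return posts, err
}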