timshannon / badgerhold

BadgerHold is an embeddable NoSQL store for querying Go types built on Badger
MIT License

Query semantics (limits and empty query construction) #38

Closed. Ale1ster closed this issue 3 years ago.

Ale1ster commented 3 years ago

Since neither Query.Limit nor Query.Skip should accept negative values, why not simply change the argument type to uint or one of its flavours? That would be one less error to worry about, and one less type conversion for users with unsigned semantics for limit and offset.

Also, when using badgerhold.Find(&result, nil), is the returned order the struct key field order?

I would like to create a query to iterate over all stored entities in key order and retrieve some of them with a specified limit and offset. For that I have resorted to the following:

query := (&badgerhold.Query{}).Skip(int(idx)).Limit(int(page))
err := badgerhold.Find(&result, query)
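
For completeness, here is a fuller sketch of the same pattern. The Item type, the directory paths, and the page helper are illustrative assumptions, not part of badgerhold; Find here is the method on an opened *badgerhold.Store, and the import path may differ depending on the badgerhold version in use.

package main

import (
	"log"

	"github.com/timshannon/badgerhold"
)

// Item is a hypothetical example type; any struct stored in the store works the same way.
type Item struct {
	ID   uint64
	Name string
}

// page fetches one page of Items in default (key) order using an empty query.
// Skip and Limit take int, so the unsigned offset and page size are converted here.
func page(store *badgerhold.Store, idx, size uint) ([]Item, error) {
	var result []Item
	query := (&badgerhold.Query{}).Skip(int(idx)).Limit(int(size))
	err := store.Find(&result, query)
	return result, err
}

func main() {
	options := badgerhold.DefaultOptions
	options.Dir = "data"
	options.ValueDir = "data"

	store, err := badgerhold.Open(options)
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()

	items, err := page(store, 0, 10)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("fetched %d items", len(items))
}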

Is there something I am missing with regard to empty query construction and default order, or is this functionality not in scope?

timshannon commented 3 years ago

So this issue is actually three separate issues, which can make it harder to track and respond to.

For int vs uint, I looked at how the standard library's database/sql package, as well as another popular library in this same space (https://github.com/asdine/storm#skip-limit-and-reverse), handles variables like this. But either way: 1) I'm not breaking backwards compatibility just to save a little typing, and 2) it's a simple, no-cost conversion.
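
To make that conversion concrete, here is a minimal sketch (withPaging is a hypothetical helper, not part of badgerhold, and assumes the badgerhold package is imported):

// withPaging converts unsigned skip/limit values to the int arguments
// Query.Skip and Query.Limit expect, keeping the conversion in one place.
func withPaging(q *badgerhold.Query, skip, limit uint) *badgerhold.Query {
	return q.Skip(int(skip)).Limit(int(limit))
}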

As for empty queries, what you're doing is fine, and once again this is similar behavior to the standard library (look at creating an "empty" http server or client).
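
To illustrate the parallel, a two-line sketch (assuming the net/http and badgerhold imports are in place): both values are usable as empty composite literals with default behavior.

client := &http.Client{}     // zero-value client: default transport, no timeout
query := &badgerhold.Query{} // empty query: matches every record, default key order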

As for the default order of values: as in any other database or KV store, it is defined by the order of the keys. That order depends on Badger and on the encoding you use to store keys and values, but in the end it is byte order, as determined by the underlying Badger DB.
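
A quick illustration of why byte order differs from numeric order (illustrative only; it says nothing about the specific key encoding badgerhold or Badger uses, and assumes the bytes and fmt imports):

// '1' < '2' byte-wise, so a key stored as the string "10" iterates before
// one stored as "2" under byte order, even though 10 > 2 numerically.
fmt.Println(bytes.Compare([]byte("10"), []byte("2"))) // prints -1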