Closed zachdaniel closed 2 years ago
So for Alpha - sounds like we must have #2 done. What about #3, and #4?
Also for #1, that looks really cool. Wondering if there are benefits from that - or if it would just be used for very custom SQL that doesn't fit into what's possible with #2?
There are benefits to #1 over #2, as for #1 they would be sortable/filterable. There are drawbacks though, as you could potentially make your queries slower for something you could have done more easily in code. However, #3 is designed to be an abstract representation that can be transformed into #1 or #2 on demand, so the framework can decide.
Also: I just realized that the spec for #2 should be something that takes a list of records and returns a list of values.
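A minimal sketch of that shape (module, function, and field names here are hypothetical, not the final API): a calculation takes the list of records and returns a list of computed values, one per record, in order.

```elixir
defmodule MyApp.Calculations.FullName do
  # Hypothetical calculation module illustrating the #2 spec:
  # take a list of records, return a list of values (one per
  # record, in the same order).
  def calculate(records) do
    Enum.map(records, fn record ->
      record.first_name <> " " <> record.last_name
    end)
  end
end
```

Taking the whole list (rather than one record at a time) leaves room for batching, e.g. issuing a single query to compute a value for all of the records at once.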
As for which are necessary: I want to add a very limited form of 3/4, even if it’s not fleshed out to be useful, to make sure that the rest of the pipeline is ready for it. So we’d really want all of them before alpha, but for 3/4 it would only be the guts.
This has been broken out into aggregates and calculations, and aggregates have been completed (roughly)
The initial functional portion of calculations has been completed.
This is all now supported.
The default is for this information to not be populated, and for it to be populated on demand. So in the same vein as `%Ecto.Association.NotLoaded{}`, we're going to want `%Ash.Attribute.NotLoaded{}` (and frankly we may want to create our own `%Ash.Relationship.NotLoaded{}`). The query will support specifying a list of calculated fields to load. Additionally, we will want a way to load attributes after the fact, via a `MyApi.load_attributes(records, [:full_name])`. We will most likely also need a configurable option to load some fields by default (`full_name`, for example, is very cheap to load).

There are four scenarios we would eventually need to support:
1.) Embedding statements into the query. This will need to be implemented as a data layer feature. These would be usable in filters/sorts. This is out of scope for this issue, but you might see something like:
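For illustration only (this syntax is entirely hypothetical, and this scenario is out of scope here), a calculated attribute backed by a data-layer statement might look something like:

```elixir
defmodule MyApp.User do
  use Ash.Resource

  attributes do
    # Hypothetical DSL: an attribute whose value comes from a
    # data-layer-specific fragment, so it can participate in
    # filters and sorts at the data layer.
    calculated_attribute :comment_count, :integer,
      sql: "(SELECT count(*) FROM comments WHERE comments.user_id = users.id)"
  end
end
```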
2.) Something derived functionally, receiving a resource and returning a value. This would be the first one we implement and would be quite easy to do.
3.) An intermediate representation that each data layer will have some amount of support for. Ash framework will use this to make smart optimizations about how to generate metadata. For instance, if a value is used in filters/sorts, it will be passed to the data layer to attach to the query. Otherwise, the engine can generate it (in parallel with all the other calculated attributes that weren't needed in the query) after the query is run. If relationships are referenced in the attribute, all attributes that require related values can be batched together, to ensure that no unnecessary data is queried and to mitigate the expense of joining/fetching related data.

This would look like Elixir code, but it would only support a specific set of syntax, and there would be special bound variables available (just `record` to start). The data layer would express which expressions it supports as well.

4.) Inverting this by allowing it to be specified in the query.
This would have the added benefit of allowing front end extensions to define their own method of passing this information in. For example, in JSON:API, we could support `calculate[user][comment_count][type]=count&calculate[user][comment_count][relationship]=record.comments&calculate[user][comment_count][filter][archived]=false`. We'd have to properly escape it, of course. That's ugly, but you'd typically just URL encode a JSON object to make a query like that, like so:
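Presumably the JSON object would mirror the bracketed query parameters above. A sketch (the shape is inferred from the example query string, not a finalized format):

```elixir
# The calculation expressed as a JSON-style map, which a client
# would JSON-encode and URL-escape into a single query parameter.
params = %{
  "calculate" => %{
    "user" => %{
      "comment_count" => %{
        "type" => "count",
        "relationship" => "record.comments",
        "filter" => %{"archived" => false}
      }
    }
  }
}
```

The client would then JSON-encode this and escape it with something like `URI.encode_www_form/1` before putting it in the query string.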