Closed: jcw- closed this 2 years ago
Hey, thanks for the great question. I'll share my thoughts on each point:
- I'm assuming that returning an ActiveRecord object / association in a resolver has an advantage in terms of deferring query execution, since the next nested field receives it as object and can continue to chain?
In general, it's true that the returned object is passed to "child" fields for resolving values. In practice, I don't know of any cases where ActiveRecord objects are passed to child fields without hitting the database (or cache). For example, when it comes to single objects, fields usually load the object before returning it (e.g. `Thing.find(id)`, or a batch-loaded equivalent). For relations, they're usually exposed as connections, and connections actually perform a database load before enumerating over the list to resolve child fields.
That said, it is possible to pass `ActiveRecord::Relation`s to child fields for continued chaining -- I just haven't seen it done. (The only example I can think of is if, for some reason, the application returned an `ActiveRecord::Relation` from a field that returned some Object type (not a list or connection) -- I just can't think why that'd be useful!)
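To make the deferral idea concrete in plain Ruby (no database involved -- `FakeRelation` here is an invented stand-in for `ActiveRecord::Relation`, not anything from GraphQL-Ruby): chaining only builds up a description of the query, and enumeration is what actually runs it.

```ruby
# A toy stand-in for ActiveRecord::Relation: chaining accumulates filters,
# and no "query" runs until the list is actually enumerated.
class FakeRelation
  include Enumerable

  def initialize(rows, filters: [])
    @rows = rows
    @filters = filters
  end

  # Chaining returns a new relation without touching the data source.
  def where(&filter)
    FakeRelation.new(@rows, filters: @filters + [filter])
  end

  # Enumeration is the point where the "query" actually executes.
  def each(&block)
    @filters.reduce(@rows) { |rows, f| rows.select(&f) }.each(&block)
  end
end

rows = [{ id: 1, ok: true }, { id: 2, ok: false }]
relation = FakeRelation.new(rows).where { |r| r[:ok] }
# Nothing has been filtered yet; `to_a` is what triggers the work:
relation.to_a # => [{ id: 1, ok: true }]
```

A connection wraps something like this and calls the enumeration step itself, which is why the laziness rarely survives past the connection boundary.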
- Would this make paging (relay connection_type) really inefficient?
It depends on the implementation. The connections in GraphQL-Ruby receive a list object (`ActiveRecord::Relation`, `Sequel::Dataset`, or `Array`), then perform some modifications and/or calculations to find the subset of that list to return in the current query. If you returned application-defined objects instead, you'd have to write another connection wrapper class (doc) to translate GraphQL arguments into pagination parameters for the database, then load the subset of items for continuing in GraphQL.
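As a rough sketch of what that translation might look like -- this is pure Ruby, not GraphQL-Ruby's actual connection API, and `Backend.find_page` plus the offset-based cursor scheme are invented for illustration:

```ruby
# Sketch: translating connection-style arguments (first/after) into
# offset/limit pagination against an imaginary application layer.
module Backend
  RECORDS = (1..10).map { |n| { id: n } }

  # Hypothetical service-layer method; yours would hit the database.
  def self.find_page(offset:, limit:)
    RECORDS.slice(offset, limit) || []
  end
end

class PoroConnection
  # `after` is an opaque cursor; here it simply encodes an offset.
  def initialize(first:, after: nil)
    @limit = first
    @offset = after ? decode_cursor(after) : 0
  end

  def nodes
    @nodes ||= Backend.find_page(offset: @offset, limit: @limit)
  end

  def cursor_for(index)
    encode_cursor(@offset + index + 1)
  end

  def has_next_page
    # Peek one record past the current page.
    Backend.find_page(offset: @offset + @limit, limit: 1).any?
  end

  private

  def encode_cursor(offset)
    [offset.to_s].pack("m0") # base64, as Relay-style cursors usually are
  end

  def decode_cursor(cursor)
    cursor.unpack1("m0").to_i
  end
end

page = PoroConnection.new(first: 3)
page.nodes.map { |r| r[:id] } # => [1, 2, 3]
```

The efficiency question then comes down to whether your service layer can do the offset/limit (or keyset) work in the database; if it can, PORO-backed connections don't have to be any slower than relation-backed ones.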
- I was also thinking about N+1 protection using graphql-batch, which, as I understand it, is based on the preload capability of ActiveRecord.
GraphQL-Batch (or GraphQL::Dataloader) works well with ActiveRecord right out of the box, but adding an application layer in between wouldn't be a deal-breaker. You could either:
One option is to write batch loaders that accept your POROs as input (instead of ActiveRecord objects). For example, a `GraphQL::Dataloader` source that uses an imaginary application layer:
```ruby
# @example batch-loading by ID
#   dataloader.with(RecordSource, MyApp::Thing).load(thing_id)
class RecordSource < GraphQL::Dataloader::Source
  def initialize(record_class)
    @record_class = record_class
  end

  def fetch(ids)
    # Assuming the application layer has some method for loading several objects at once:
    @record_class.fetch_all(ids)
  end
end
```
In that case, the `Dataloader::Source` doesn't really care what the `record_class` is, as long as it implements the expected methods (`.fetch_all` in the example above). I think a `Batch::Loader` could be written the same way.
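For what it's worth, the application-layer contract that source relies on can be tiny. Everything below, including `MyApp::Thing` and its in-memory store, is invented for illustration -- the only requirement is a batched `.fetch_all(ids)`:

```ruby
# A PORO application layer satisfying the `.fetch_all(ids)` contract.
module MyApp
  Thing = Struct.new(:id, :name)

  # Stand-in for your real data store.
  STORE = {
    1 => Thing.new(1, "widget"),
    2 => Thing.new(2, "gadget"),
  }

  # One batched lookup instead of N individual ones.
  def Thing.fetch_all(ids)
    ids.map { |id| STORE[id] }
  end
end

MyApp::Thing.fetch_all([1, 2]).map(&:name) # => ["widget", "gadget"]
```

In a real app, `fetch_all` would issue a single `WHERE id IN (...)` query (or one service call) for the whole batch, which is exactly the N+1 protection the dataloader provides.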
- I was also wondering about impact to ability to use Stable Cursors
The GraphQL-Pro stable cursor code requires an `ActiveRecord::Relation` to be returned from resolvers. I can't support a custom implementation, but you're certainly welcome to peruse the GraphQL-Pro implementation for inspiration! A lot of the code is general-purpose-ish (for example, parsing `relation.to_sql` to understand how it's been sorted), but I'd recommend porting the code into your app rather than calling those private bits of code directly, since the private concerns of GraphQL-Pro are likely to change over time.
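To give a flavor of what "parsing the SQL to understand the sort" means -- this is a toy sketch, not GraphQL-Pro's actual code, which handles far more SQL shapes:

```ruby
# Pull the ORDER BY columns (and directions) out of a relation's
# generated SQL, as a basis for building stable cursors from them.
def order_columns(sql)
  match = sql[/\bORDER BY\s+(.+)\z/i, 1]
  return [] unless match

  match.split(",").map do |clause|
    column, direction = clause.strip.split(/\s+/)
    [column, (direction || "ASC").upcase]
  end
end

sql = 'SELECT "things".* FROM "things" ORDER BY "things"."name" DESC, "things"."id"'
order_columns(sql)
# => [['"things"."name"', "DESC"], ['"things"."id"', "ASC"]]
```

A custom PORO-backed version would need the equivalent information from your service layer (which fields the list is sorted by, and in which direction) in order to encode value-based cursors instead of offsets.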
I hope that helps a bit! Let me know if there's anything further I can discuss on those topics or others.
Let me know if you give it a shot and run into any more questions!
I'm curious what the impact (in terms of functionality or features lost) would be if resolvers are decoupled from ActiveRecord using POROs (returned from a service class).