I would like to propose a declarative high-level API for the Dataloader.
Some context:
Over the last few years of using Dataloader, I came up with a high-level declarative API that makes working with it much easier and can be used to build complex data-fetching pipelines in Elixir, with dependencies between steps.
I think this could be very valuable to the Elixir community, and I'm wondering whether it could be merged into Dataloader itself.
I don't claim that this covers every possible Dataloader use case, but I think it should cover 95% of them (it has supported every use case I've had over the years).
What does this look like?
### A simple example
```elixir
def blog_posts(_source, %{id: blog_id}, resolution) do
  resolution.context.loader
  |> superload(
    steps: [
      # Initialize the superloader state with the blog id
      init: %{blog_id: blog_id},
      # Fetch the blog post using the blog id
      fetch: [blog_post: &blog_post_fetcher/1],
      # Return the fetched blog post
      return: :blog_post
    ]
  )
end
```
Which can be further simplified to:
```elixir
def blog_posts(_source, %{id: blog_id}, resolution) do
  resolution |> superload(%{blog_id: blog_id}, &blog_post_fetcher/1)
end
```
Fetchers are plain functions that take the current superloader state and return a "fetch tuple": an id to fetch, along with a function that runs the Ecto query.
```elixir
def blog_post_fetcher(%{blog_id: blog_id} = _superloader_state) do
  {
    # The id to fetch
    blog_id,
    # A function that actually fetches the data
    fn ids -> Repo.all(from p in Post, where: p.id in ^ids, select: {p.id, p}) end
  }
end
```
Currently, the implementation expects the selected value to always have the shape `{id, value}`.
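To make the batching semantics concrete, here is a minimal sketch (not the actual implementation; the names `run_batch/2` and `resolve_step/2` are illustrative only) of how a superloader could collect the ids contributed by several fetch tuples that share the same run function, execute one query, and hand each step its own result back:

```elixir
defmodule SuperloaderSketch do
  # Hypothetical batching step: each fetch tuple contributes an id; the
  # shared run function is called once with all pending ids and must
  # return a list of `{id, value}` pairs, which we turn into a map.
  def run_batch(ids, run_fun) do
    ids
    |> Enum.uniq()
    |> run_fun.()
    |> Map.new()
  end

  # Each step then looks up its own id in the combined result map.
  def resolve_step(results, id), do: Map.get(results, id)
end
```

With `run_fun` being the `fn ids -> Repo.all(...) end` function from the fetch tuple, `run_batch/2` would issue a single query no matter how many steps requested ids through it.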
### A more complex example
This is a contrived example, but I think it does a good job demonstrating the capabilities of the superloader.
The pipeline API can be used to fetch data with all sorts of dependencies. In the example below, we fetch a blog post along with its author and its comments. The results are then combined into a single `blog_details` map, which is returned by the Absinthe resolver.
```elixir
def blog_details(_source, %{id: blog_id}, resolution) do
  user_id = get_user_id(resolution)

  resolution
  |> superload(
    steps: [
      # Initialize the state with the blog id
      init: %{blog_id: blog_id, user_id: user_id},
      # Fetch the blog post and the user (in parallel)
      fetch: [blog_post: &blog_post_fetcher/1, user: &user_fetcher/1],
      # Fetch the author details and the comments (in parallel)
      fetch: [author: &author_fetcher/1, comments: &comments_fetcher/1],
      # Fetch the authors of the fetched comments
      fetch: [comment_authors: &comment_authors_fetcher/1],
      # Combine all information to build final blog details
      then: [blog_details: &build_blog_details/1],
      # Return the final blog details
      return: :blog_details
    ]
  )
end
```
The current implementation is close to what's shown in the examples above (I'll need maybe a week of work to get it into proper shape).
Let me know your thoughts!