lucasconstantino / graphql-resolvers

:electric_plug: Resolver composition library for GraphQL.

running "afterware" #7

Closed smeijer closed 5 years ago

smeijer commented 6 years ago

With combineResolvers we can add handlers that run before the main resolver. Useful for authentication checks and such.

I think it would be nice if we could also run some handlers after the main resolver. Think about credit transactions.

combineResolvers(reserveFunds, performAction, chargeFunds)
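For context, the kind of "before" handler I mean, using combineResolvers and skip from this lib (modeled on the isAuthenticated example in the README):

import { combineResolvers, skip } from 'graphql-resolvers';

// runs first; returning skip (i.e. undefined) hands control to the next resolver
const isAuthenticated = (root, args, { user }) =>
  user ? skip : new Error('Not authenticated');

const protectedAction = combineResolvers(
  isAuthenticated,
  (root, args, context) => 'the main resolver only runs when authenticated',
);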
smeijer commented 6 years ago

Been thinking a bit more about this. Perhaps what I'm really looking for is a compose method.

const charge = amount => async (resolver, parent, args, context, ast) => {
  const transactionId = context.actions.charge({
    userId: context.userId,
    amount
  });

  const result = await resolver();

  if (result.errors) {
    context.actions.rollback(transactionId);
  }

  return result;
};

const downloadTemplate = (parent, args, context, ast) => {
  return context.db.templates.findOne({ _id: args._id });
};

const resolvers = {
  Mutation: {
    downloadTemplate: compose(
      charge(10),
      downloadTemplate
    )
  }
};
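A minimal sketch of the compose helper I have in mind (hypothetical, not something this lib provides; each wrapper receives a thunk for the rest of the chain plus the usual resolver arguments):

const compose = (...fns) => (parent, args, context, ast) => {
  const next = i => () =>
    i === fns.length - 1
      ? fns[i](parent, args, context, ast) // innermost: a plain resolver
      : fns[i](next(i + 1), parent, args, context, ast); // wrapper: gets a resolver thunk
  return next(0)();
};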

The nice thing is, in this way we can also create separate methods for adding data to the result. For example:

const addAuthorToBlog = async (resolver, parent, args, context, ast) => {
  const result = await resolver();

  if (!Array.isArray(result.data) || result.data[0].__typename !== "Blog") {
    return result;
  }

  const userIds = result.data.map(x => x.authorId);
  const users = await context.db.users.find({ _id: { $in: userIds } }).toArray();
  const indexed = users.reduce((l, r) => Object.assign({}, l, { [r._id]: r }), {});

  result.data = result.data.map(x =>
    Object.assign(x, { author: indexed[x.authorId] })
  );

  return result;
};

const getBlogs = (parent, args, context, ast) => {
  return context.db.blogs.find({ _id: args._id }).toArray();
};

const resolvers = {
  Query: {
    getBlogs: compose(
      addAuthorToBlog,
      getBlogs
    )
  }
};

The first thought would be that property resolvers cover this. But they don't, as you then get a query per blog. With a compose method, we can run a single query for all blogs.

lucasconstantino commented 6 years ago

Ok, this is quite a nice idea. Let's just split the discussion a bit:

About being able to have an afterware kind of behavior: I think it is totally feasible and should be quite simple to implement as an auxiliary function. I wouldn't call it compose, though, for that would conflict with the common meaning of functional composition, which is simply to feed the result of the rightmost function into the one before it, and so on until it finishes. Perhaps we can come up with a safer name. Does chain sound good? I'm not sure...

The second point, though (correct me if I'm wrong), I think breaks the fundamental principle of single responsibility, which GraphQL promotes. Basically, you are not supposed to compute things that might not actually be requested in the result. For instance, in the example you gave you are loading the author information inside a (multiple) Blog type return; you should delegate that to a Blog.author resolver, as sketched below.
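Something like this, that is (a sketch, assuming the same imaginary db context from your example):

const resolvers = {
  Query: {
    blogs: (root, args, { db }) => db.blogs.find({}).toArray(),
  },

  Blog: {
    // resolved per blog, and only when the author field is actually requested
    author: (blog, args, { db }) => db.users.findOne({ _id: blog.authorId }),
  },
}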

You might argue that the way you've done it can perform better because it makes only one call to the database, but that should be addressed by data-loader logic. If you are not familiar with that, I would recommend having a look at Facebook's dataloader project.
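For the record, a minimal sketch of that approach (again assuming your db context; one loader instance per request):

import DataLoader from 'dataloader'

// batches all author ids requested within the same tick into a single query
const createAuthorLoader = db =>
  new DataLoader(async ids => {
    const users = await db.users.find({ _id: { $in: ids } }).toArray()
    const byId = new Map(users.map(u => [u._id, u]))
    // dataloader requires results in the same order as the requested keys
    return ids.map(id => byId.get(id))
  })

// Blog: { author: (blog, args, { authorLoader }) => authorLoader.load(blog.authorId) }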

Nonetheless, I think your idea of a more controllable flow for side effects (such as charging) sounds great and is worth exploring, though your current use case could be addressed using the pipeResolvers and allResolvers methods already provided in this lib:

const charge = amount => (root, args, { actions, userId }) =>
  actions.charge({ userId, amount })

const loadTemplate = (root, { _id }, { db }, ast) =>
  db.templates.findOne({ _id })

const rollbackOnError = ([transaction, result], args, { actions }) => {
  if (result.errors) {
    actions.rollback(transaction)
  }

  return result
}

const resolvers = {
  Mutation: {
    downloadTemplate: pipeResolvers(
      allResolvers(
        charge(10),
        loadTemplate,
      ),
      rollbackOnError,
    )
  }
}
lucasconstantino commented 6 years ago

Also, an alternative more compliant with single responsibility can be done using some Ramda helpers:

import { nth } from 'ramda'

const charge = amount => (root, args, { actions, userId }) =>
  actions.charge({ userId, amount })

const loadTemplate = (root, { _id }, { db }) =>
  db.templates.findOne({ _id })

const rollbackOnError = ([transaction, result], args, { actions }) => {
  if (result.errors) {
    actions.rollback(transaction)
  }
}

const resolvers = {
  Mutation: {
    downloadTemplate: pipeResolvers(
      allResolvers(
        charge(10),
        loadTemplate,
      ),
      // tap-like pass-through: Ramda's tap is unary and would not forward args
      // or context, so we forward the resolver arguments by hand
      (results, args, context, ast) => {
        rollbackOnError(results, args, context, ast) // side effect on joined results
        return results
      },
      nth(1), // retrieves result at the end
    )
  }
}
smeijer commented 6 years ago

Thanks for the detailed responses. What you're writing makes sense, and these sound like great solutions.

The last example, with the help of Ramda, actually looks pretty nice. Maybe we can wrap it in some syntactic sugar, but that's a detail.

I'm curious though: does rollbackOnError just receive the results from both resolvers? So there is no way to catch an error that's been thrown in loadTemplate? That way there is a result available, but not the result.data and result.errors we usually see on the client?

Piping information along is a start, but I think being able to catch and handle errors is where the real power lies.

lucasconstantino commented 6 years ago

I got your point. Indeed, for that kind of need we'd better create a new helper function. It would be quite easy to implement. I'll see if I can do it later ;)
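Something along these lines, perhaps (just a hypothetical sketch, nothing final):

// wraps a resolver so a thrown error becomes part of the piped value instead
// of aborting the chain; downstream resolvers can then inspect result.errors
const catchErrors = resolver => async (root, args, context, info) => {
  try {
    return { data: await resolver(root, args, context, info) }
  } catch (error) {
    return { errors: [error] }
  }
}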

smeijer commented 6 years ago

Awesome.

Also, regarding dataloader: I am familiar with it, but it isn't as effective as applying merging logic like the above.

Dataloader caches in a way that protects you from requesting the same record twice. If you had 3 Blogs with 3 different Authors, dataloader would still query the database 3 times to retrieve the Authors.

const resolvers = {
  Query: {
    blogs: () => ([
      { _id: 1, authorId: 1, title: 'Blog 1' },
      { _id: 2, authorId: 2, title: 'Blog 2' },
      { _id: 3, authorId: 3, title: 'Blog 3' },
    ]),
  },

  Blog: {
    author: (b, _, { authorLoader }) => authorLoader.load(b.authorId),
  },
}

When querying blogs here, Blog.author would resolve 3 times, and thereby still hit the database 3 times. Unless I'm really missing something huge here.

lucasconstantino commented 5 years ago

@smeijer I honestly forgot to answer you back then. Facebook's data-loader not only facilitates caching, but also batching. You should have a deeper look at it.
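With a batching loader like the one I sketched above, these three loads issued in the same tick collapse into a single batch call, i.e. one database query (illustrative only):

// inside some async context
await Promise.all([
  authorLoader.load(1),
  authorLoader.load(2),
  authorLoader.load(3),
]) // batch function invoked once, with keys [1, 2, 3]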

smeijer commented 5 years ago

@lucasconstantino, I'm aware of batching, but it's not the holy grail either. It doesn't combine requests from multiple child resolvers:

query {
  blog(id: "x") {
    author {
      name
    }
    publisher {
      name
    }
  }
}

When authorId and publisherId are not equal, this will still hit the database three times. We could optimize it by fetching in the blog resolver:

const resolvers = {
  Query: {
    blog: async (_, { id }, { db }) => {
      const blog = await db.blogs.findOne({ _id: id });

      const users = await db.users.find(
        { _id: { $in: [blog.authorId, blog.publisherId] } }
      ).toArray();

      blog.author = users.find(x => x._id === blog.authorId);
      blog.publisher = users.find(x => x._id === blog.publisherId);

      return blog;
    }
  }
};

This is an oversimplified example of course, and in this specific case I might even recommend against it. But taking the blogs code from before:

query {
  blogs(first: 100) {
    author {
      name
    }
  }
}

If the first 100 blogs are written by 100 different authors, this would still result in 101 queries hitting the database, even when using dataloader with its batching implementation. Unless I missed something huge in dataloader, it only protects you from requesting duplicate IDs. So when there are 25 distinct authors among the first 100 blogs, the database will still be hit 26 times, while we could reduce the count to 2 hits by iterating over the blogs ourselves, as in the sketch below.
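Roughly this, I mean (a sketch, with the same assumed db shape as above):

const blogsWithAuthors = async (root, { first }, { db }) => {
  const blogs = await db.blogs.find({}).limit(first).toArray(); // hit 1
  const authorIds = [...new Set(blogs.map(b => b.authorId))];
  const users = await db.users.find({ _id: { $in: authorIds } }).toArray(); // hit 2
  const byId = new Map(users.map(u => [u._id, u]));
  return blogs.map(b => ({ ...b, author: byId.get(b.authorId) }));
};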