fastify / fastify-rate-limit

A low overhead rate limiter for your routes

Add parser option #346

Open · Kimblis opened this issue 8 months ago

Kimblis commented 8 months ago

Prerequisites

🚀 Feature Proposal

If you use Fastify with tRPC, you most likely use some kind of parser; we are using fastify + tRPC with zod as the parser. There is an errorResponseBuilder option for the error response, but it expects an object, so we can't fully specify what we return (in our case it should be either a TRPCError or stringified JSON for zod). That's the only reason we can't currently use this package (unless you can think of an easier solution, which would be very helpful). It would be nice to have a parser option that applies a specified parser to the response (you could include just a couple of the most popular ones).
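For reference, this is roughly how errorResponseBuilder is used today (adapted from the plugin's README, so treat it as a sketch rather than our exact setup). It can only return a plain object, which is what blocks us:

await fastify.register(import('@fastify/rate-limit'), {
  max: 100,
  timeWindow: '1 minute',
  errorResponseBuilder: (request, context) => {
    // context.max, context.after and context.ttl are provided by the plugin
    return {
      statusCode: 429,
      error: 'Too Many Requests',
      message: `Rate limit exceeded, retry in ${context.after}`
    }
  }
})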

Motivation

No response

Example

No response

mcollina commented 8 months ago

I don't understand what parser you are referring to. If you would like to add an option so you can plug in some tRPC logic, I'm +1, but I don't think it belongs in this module.

gurgunday commented 8 months ago

I don't get it either, can you please elaborate? Thanks!

Charioteer commented 1 day ago

Hi @mcollina and @gurgunday,

I just stumbled across this issue while running into a similar one. I am also using tRPC with fastify and @fastify/rate-limit to rate limit my tRPC procedures (as well as REST API endpoints, more on this later). What I am going to explain in detail now is what I think @Kimblis wanted to ask about initially.

Whenever a tRPC client calls a tRPC procedure "protected" by @fastify/rate-limit and, at some point, reaches the configured rate limit, @fastify/rate-limit returns a JSON error response as usual. Unfortunately, this leads to the following tRPC error on the client: TRPCClientError: Unable to transform response from server. A tRPC client expects errors created with TRPCError. In the best case, one would never return any Fastify-related errors from a tRPC procedure (i.e. plain text/JSON responses) but only TRPCErrors, to avoid this exact issue.
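For context, the default 429 body the plugin sends looks roughly like this (based on its documented defaults; the exact message depends on the configured time window):

{
  "statusCode": 429,
  "error": "Too Many Requests",
  "message": "Rate limit exceeded, retry in 1 minute"
}

This plain JSON is what the tRPC client fails to transform into a TRPCClientError.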

Usually, a rate limiter for tRPC would be implemented using middlewares. This way, we could throw a TRPCError with the proper TOO_MANY_REQUESTS code. As a side effect, we would also be able to set up rate limiting granularly on a per-procedure level. It would be great if @fastify/rate-limit exposed some method to call inside a middleware to check whether a given IP (or any key as returned by keyGenerator) has already reached its rate limit. Here is a rough example of what I would love to be able to do:

trpc.middleware(async ({ ctx, next }) => {
  const {
    limit, // nice to have
    remaining, // nice to have
    reset, // nice to have
    isLimited, // not required, could be determined by e.g. if (remaining <= 0) { ... }
  } = await ctx.fastify.checkRateLimit(ctx.req.ip) // of course, I would need to pass down this method via the tRPC context, as well as req.ip; see https://trpc.io/docs/server/context

  if (isLimited) {
    throw new TRPCError({
      code: 'TOO_MANY_REQUESTS'
    })
  }

  return next()
})

This way, it would be possible to throw proper TRPCErrors when using @fastify/rate-limit. I know that you already decorate the Fastify instance with a rateLimit function. However, it is designed to be used as a Fastify hook. One could call it even in a tRPC middleware by passing in the Request and Reply objects, but this requires passing the whole req and res objects into the tRPC context, which is often not desired (small performance impact, could lead to slower type inference, i.e. suboptimal DX). Also, it just feels "hacky" 😄. With a dedicated method to manually check the rate limit, we could just pass down what's needed (i.e. the IP address or other data used by the key generator).
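For completeness, the context wiring that the middleware sketch above assumes could look roughly like this (using tRPC's Fastify adapter; CreateFastifyContextOptions is its context options type, and checkRateLimit is the proposed decorator, it does not exist in @fastify/rate-limit today):

import type { CreateFastifyContextOptions } from '@trpc/server/adapters/fastify'

export function createContext({ req }: CreateFastifyContextOptions) {
  return {
    fastify: req.server,  // the Fastify instance; the proposed checkRateLimit would be decorated here
    req: { ip: req.ip }   // only the piece of the request the middleware actually needs
  }
}

export type Context = ReturnType<typeof createContext>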

You may ask "why don't you use a tRPC rate limiter like this instead?". However, I run a REST API and a tRPC API in parallel with fastify. It would be great to use the same rate-limit plugin across both APIs instead of introducing an additional dependency that does the same thing for a different scope.

I hope I was able to explain the issue. Sorry for the long message. If I can help you further or give more details, let me know.

Thank you and cheers!

mcollina commented 1 day ago

@Charioteer would you like to send a PR?

Charioteer commented 13 hours ago

@mcollina Sure, I can implement my suggestion. I only have some spare time on weekends, so it might take a while.