vercel / micro

Asynchronous HTTP microservices
MIT License

RFC: Micro Response Type #369

Closed: rauchg closed this issue 4 years ago

rauchg commented 6 years ago

The idea is to introduce an "envelope"-type construct into micro, that would allow us to express a broader range of HTTP responses as a simple return statement.

It restores a concept absent in the core Node.js HTTP API: a request comes in (function argument), a response goes out (return).

Summary

Examples

Respond with 404

const res = require('micro/res')
module.exports = req => {
  return res('Not Found', 404)
}

Respond with 204

const res = require('micro/res')
module.exports = req => {
  return res(null)
}

Respond with a JSON body

const res = require('micro/res')
module.exports = req => {
  return res({ "hi": "there" })
}

Respond with a stream

const res = require('micro/res')
module.exports = req => {
  return res(fs.createReadStream('./photo.jpg'))
}

Respond with JSON and headers

const res = require('micro/res')
module.exports = req => {
  return res({ error: "not_found" }, 404, {
     'Access-Control-Allow-Origin': '*'
  })
}

Fun Trivia: Symmetry

If we avoid { and }, it almost looks like we're using the imperative res (similar to our current send):

const res = require('micro/res')
module.exports = async req => (
  await sideEffect(),
  res('Your request has been logged', 202)
)

Composition

The return value from res(200) would return a Response object that works like an immutable-js Record.

I've written before about why middleware as a framework construct is a bad idea, because functional programming is the ultimate middleware framework. Our language already has all the tools we need.

So let's imagine we want to create "middleware" that always sends CORS headers to all your responses. The end goal would be to use it as follows:

module.exports = withCORS(req => 
  res('Hi there', 200)
)

So, how do we write withCORS?

function withCORS(fn) {
  return async req => {
    const res_ = await fn(req)
    return res_.setHeaders({
      'Access-Control-Allow-Origin': '*'
    })
  }
}

Which can actually be simplified as

const withCORS = fn => async req =>
  (await fn(req)).setHeaders({
    'Access-Control-Allow-Origin': '*'
  })

We would have a few methods on Response that each return a new immutable Response:
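The RFC doesn't enumerate those methods, but a minimal sketch based on the names used elsewhere in this thread (setStatus, setBody, setHeaders) could look like this; every "setter" returns a fresh instance rather than mutating:

```javascript
// Sketch of an immutable Response: each method returns a new instance.
class Response {
  constructor({ body = null, status = 200, headers = {} } = {}) {
    this.body = body
    this.status = status
    this.headers = headers
    Object.freeze(this.headers)
    Object.freeze(this)
  }
  setStatus(status) {
    return new Response({ body: this.body, status, headers: this.headers })
  }
  setBody(body) {
    return new Response({ body, status: this.status, headers: this.headers })
  }
  setHeaders(extra) {
    return new Response({
      body: this.body,
      status: this.status,
      headers: { ...this.headers, ...extra }
    })
  }
}

const a = new Response({ body: 'hi', status: 200 })
const b = a.setHeaders({ 'Access-Control-Allow-Origin': '*' })
// `a` is untouched; `b` carries the extra header.
```

This is the immutable-js Record-like behavior described above, without the dependency.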

Questions

What happens to our other return types?

Undecided and open for discussion. Currently we support returning:

These cover a wide spectrum, and they're ok if you want to retain the default headers / status code we chose.

There's a strong case for taking them away and always enforcing a Response type:

However, if we do want to maintain them, we could use a wrapper like res.from() that gives us a Response type from a primitive like "Hello World":

const withCors = fn => async req => {
  const res_ = res.from(await fn(req))
  return res_.setHeaders({
    'Access-Control-Allow-Origin': '*'
  })
}

Another argument against ditching support for returning primitives is that it would also make testing harder, by requiring the same wrapping in your test suites.
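A hypothetical res.from() could be a small coercion: pass Response instances through untouched, and wrap primitives with the defaults (a sketch, not a committed API):

```javascript
// Sketch of a hypothetical res.from(): Response values pass through,
// primitives get wrapped with the default status.
class Response {
  constructor(body, status = 200, headers = {}) {
    this.body = body
    this.status = status
    this.headers = headers
  }
}

function from(value) {
  if (value instanceof Response) return value
  if (value == null) return new Response(null, 204) // null body => 204
  return new Response(value, 200)
}

// Both normalize to a Response:
const fromPrimitive = from('Hello World')
const passedThrough = from(new Response('hi', 201))
```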

What happens to the throw behavior?

Deprecated. Right now you can throw an Error with a statusCode, and we respond with it. We will be warning (once per process in prod, every time in dev) if you use it, and then remove it.

The functionality can also easily be retained in userland.

However, if a handler throws, we will still continue to respond with 500 and a generic "Internal Server Error" string, with the full stack in developer mode, and we will console.error as well.

If the user wishes to override error behavior, they can write their own function that performs try/catch.

What happens to send(res, 'Hi')?

Deprecated. We will be warning (once per process in prod, every time in dev) if you use it, and then remove it.

The functionality can also easily be retained in userland.

Transition plans

We should do our best to facilitate access to codemods and utilities (maintained in userland) that make the transition painless and as automatic as possible.

Future Directions

If Node.js brings back support for the sendfile kernel syscall, we can consider introducing a file return type that references a node in the filesystem and can be streamed back in a highly optimized way.

It has been suggested in the past, however, that sendfile could become an implementation detail of piping a filesystem stream, in which case this wouldn't be necessary.

Input from the community

Please feel free to weigh in with your opinions on this RFC. The more comments we gather, the better!

tungv commented 6 years ago

Hello @rauchg, I'm using micro to serve a text/event-stream endpoint. I do it by subscribing to a kefir observable and imperatively calling res.write on incoming events. How would it work in this new paradigm? Thanks!

morajabi commented 6 years ago

The suggested approach feels much more natural to me. I feel better using it, and it looks safer. Probably because:

I'm not sure, is it bad that we can't continue doing some work after sending a response anymore?

React is a delight, and one reason is that it's just JavaScript. It always resonates with me when we get closer to the language and avoid building new APIs on top of it for things the language can already do. 👏

amio commented 6 years ago

+1 for "single way to return a response".

is it bad that we can't continue doing some work after sending a response anymore?

That would be bad. But I think we can still use a Promise to do work asynchronously; it just doesn't look as straightforward as

const { send } = require('micro')

module.exports = function (req, res) {
  send(res, 200, 'hi')
  // do something
}
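In the return-value world, the equivalent is to kick off the work without awaiting it. A sketch (here `res` is a local stand-in for the proposed micro/res, not the real module):

```javascript
// Stand-in for the proposed micro/res:
const res = (body, status = 200) => ({ body, status })

const log = []
async function doSomething() {
  // An async function runs synchronously up to its first await,
  // then the rest continues in the background.
  log.push('background work started')
}

const handler = req => {
  doSomething().catch(err => console.error(err)) // kicked off, not awaited
  return res('hi', 200)
}

const response = handler({})
```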
lukeed commented 6 years ago

Hey @rauchg, I'm still trying to fully understand your approach here. I think I am/was struggling because this is an incomplete picture — seeing the changes inside micro directly would answer some questions, at least in my mind.

My first (and current) hesitation is that you can't/shouldn't issue new ServerResponses on Node's behalf. I tried to do that at one point in Polka (and again in sirv), but that ended poorly & abruptly 😆

So, the only way I can see this being done is by passing around new MyResponse instances that get mutated (or recreated), as opposed to mutating the underlying ServerResponse directly. This is what I came up with:

class Response {
    constructor(opts) {
        this.body = null;
        this.status = opts.status;
        this.headers = new Map();

        if (opts.headers) this.setHeaders(opts.headers);
        if (opts.body !== void 0) this.setBody(opts.body);
    }

    setHeaders(obj) {
        for (let k in obj) {
            this.headers.set(k.toLowerCase(), obj[k]);
        }
    }

    setStatus(code) {
        this.status = code;
    }

    setBody(any) {
        if (any == null) {
            this.setStatus(204);
        // } else if ( stream | buffer | object ) {
            // set content-type && content-length headers
        }
        this.body = any; // JSON.stringify for object
    }

    end(res) {
        res.statusCode = this.status || 200;
        this.headers.forEach((val, key) => {
            res.setHeader(key, val);
        });
        res.end(this.body || '');
    }
}

module.exports = function (body, status, headers) {
    return new Response({ status, body, headers });
};

module.exports.from = function (res) {
    return new Response(res);
}

Incomplete, but still

This offloads the dynamic body-type handling from micro, and the only micro-related change is to call response.end(res), where res is the pristine, unaffected ServerResponse, so that all the values are written to it.

I'm still not sure if this is what you have in mind, but it's the only way my peanut can make sense of it... so far 🙃
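For what it's worth, the hand-off that approach implies can be sketched with a bare object standing in for Node's ServerResponse (illustrative only, not micro's source):

```javascript
// Sketch: micro's final step takes the returned Response-like value and
// writes it onto the real ServerResponse.
function finalize(response, res) {
  res.statusCode = response.status || 200
  response.headers.forEach((val, key) => res.setHeader(key, val))
  res.end(response.body || '')
}

// A bare stand-in for Node's ServerResponse, for illustration only:
const written = { headers: {} }
const serverRes = {
  statusCode: 0,
  setHeader(k, v) { written.headers[k] = v },
  end(body) {
    written.statusCode = this.statusCode
    written.body = body
  }
}

finalize(
  { status: 404, headers: new Map([['content-type', 'text/plain']]), body: 'Not Found' },
  serverRes
)
```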

paulogdm commented 6 years ago

+1.

To be clear createError should also be deprecated (of course).

I like the proposal in the way it makes clear what res is, and that is a positive. It also heavily discourages bad middleware (it is so easy to modify an object that is handed to me for free).

Personally I still like the createError and throw paradigm because it lets me keep things uniform. Let me explain. A few other libs also throw errors, and some of them create specific classes for it:

const { ValidationError } = require('objection')

const errorHandler = async (req, res, err) => {
  if (err instanceof ValidationError) {
    // if it failed against the JSONSchema
    send(res, 400, `Validation Error. ${Object.keys(err.data)}`)
  } else {
    ...
  }
}

I can also build error patterns this way to catch a few errors (e.g. err instanceof BadParams?). So the proposal would only change the send line. Not bad, because it makes explicit that those Error instances are abstractions of my application, while "real errors" (runtime errors, etc.) are a whole different deal.

Personally I both like and dislike that we need to add const res = require('micro/res'):

I am not sure if I should add this line too, but here we go: I'm new to the Node world, and when I first started with Express I did not like the way it worked. Micro, on the other side, was simple: to do X, just go straight to X, don't overcomplicate. It made developing backends enjoyable again, so the text above comes from the heart, but it may also be a rookie take :beers:.

Thank you guys for improving micro!

ramiel commented 6 years ago

So, to understand: if the 'throw' behaviour is deprecated, would the correct way now be

return res('something').setStatus(403)

?

Oh, as a side note: shouldn't the middleware example be a function that returns a function accepting req as a parameter, which then calls fn with it?

paulogdm commented 6 years ago

@ramiel I don't think so. It just makes clear that this is now a userland feature. Error handling like this

const handleErrors = fn => async (req, res) => {
  try {
    return await fn(req, res)
  } catch (err) {
    const statusCode = err.statusCode || 500
    const message = err.message || 'Internal Server Error'
    console.error(err)
    return send(res, statusCode, message)
  }
}

would still be a thing and we would change it to:

const res = require('micro/res')
const handleErrors = fn => async (req) => {
  try {
    return await fn(req)
  } catch (err) {
    const statusCode = err.statusCode || 500
    const message = err.message || 'Internal Server Error'
    console.error(err)
    return res(message, statusCode)
  }
}

But again, it is a specific implementation and not an enforced pattern of micro itself.

rauchg commented 6 years ago

Absolutely @paulogdm. You just wrote the userland implementation of .statusCode support I described. Pretty great!

rauchg commented 6 years ago

@ramiel for the most part, I've learned through my own usage of micro that the best way to respond with errors is indeed to return them :)

Even if you are working with third-party libraries that throw. e.g.:

const res = require('micro/res');
const json = require('micro/json');

module.exports = async (req) => {
  try {
    await validateInput(await json(req));
  } catch (err) {
    return res(`Validation Failed: ${err.code}`, 400)
  }
  return res({ ok: true });
}
rauchg commented 6 years ago

The only part that is not fully clear to me is whether in the example above we should allow:

return { ok: true }

Or if we should always enforce the Response type being returned.

morajabi commented 6 years ago

@rauchg Enforcing the Response type has a good benefit. That's when you're going through the code, you can quickly identify where we are actually returning the response, or we're returning in a callback.

Then again, that may just be what our eyes are used to at the moment. Allowing return { ok: true } certainly feels natural given the functional nature of micro.

Just like we return an object from a normal helper function, this function is just JavaScript and we do the HTTP magic behind the scenes.

Apart from that, it's good to keep in mind that many users will use res the way they're used to:

module.exports = (req) => {
  res({ ok: true }); // forgot the return
}

We should warn the user with a nice error in development in my opinion :)

rauchg commented 6 years ago

Good point. I was thinking about throwing if you return undefined :)
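That check could be as small as this inside the framework (a sketch, not actual micro source):

```javascript
// Sketch: run the handler, then fail loudly if nothing was returned.
async function run(handler, req) {
  const result = await handler(req)
  if (result === undefined) {
    throw new Error(
      'Handler returned undefined. Did you forget to `return res(...)`?'
    )
  }
  return result
}
```

In development the error message points straight at the forgotten return.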

Zertz commented 6 years ago

I use micro for a Slack bot and often call send(res, 200) as soon as I'm sure enough that the bot can complete the request successfully. This is because Slack enforces a maximum response time, and also to provide the user with ⚡️ fast feedback, with an actual response soon to come. Request handlers often end up something like this:

async function (req, res) {
  await doesThisRequestLookValid();

  send(res, 200)

  await doActualWork();
}

I like the proposed method but I'm not sure how it fits with that particular use case?

sergiodxa commented 6 years ago

@Zertz you could use try/finally.

const res = require('micro/res')

module.exports = async req => {
  try {
    await doesThisRequestLookValid();
    return res('OK', 200);
  } finally {
    await doActualWork();
  }
}

Or create a high order function which wraps your doActualWork with doesThisRequestLookValid and send the 200 before calling doActualWork.
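That higher-order function could be sketched like this (all names here are hypothetical, and `res` is a stand-in for the proposed micro/res):

```javascript
// Stand-in for the proposed micro/res:
const res = (body, status = 200) => ({ body, status })

// Validate first, answer 200 immediately, let the real work run afterwards.
const respondEarly = (validate, work) => async req => {
  await validate(req)              // fail fast if the request looks wrong
  work(req).catch(console.error)   // not awaited; runs past the response
  return res('OK', 200)
}
```

Usage would be `module.exports = respondEarly(doesThisRequestLookValid, doActualWork)`.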

Zertz commented 6 years ago

That feels slightly awkward at the moment though I do like the idea of a response basically being a return value, just like calling a function.

brandon93s commented 6 years ago

I'm a fan of the proposed changes. This would further simplify micro and push the syntax sugar (e.g. createError) into userland where more options will become prevalent.

For transition, most of the functionality could be offered as a userland shim between versions. This would ensure current userland code isn't immediately deprecated and can continue to be used via the inclusion of a version bridge. This would be easy to test - simply extracting current tests for this functionality into the corresponding packages.

Apps written in vanilla micro will become even more simple than they are today given the proposed single method of response. I think this is a good thing.

NetOpWibby commented 6 years ago

~~When is this landing in micro? I'm in the process of essentially making my own just so I can customize things but these changes sound great.~~

I'm using restify now. Made a GraphQL API server with it recently and it's beautiful.

alisabzevari commented 5 years ago

This makes micro my favorite HTTP framework in the Node world. Actually, I was trying to build something like this and found this issue in this repo. There is another library, written in Kotlin, that takes an interesting approach:

Every http-server/http-client is a function with this signature: (req: Request) => Promise<Response>. So in that library, http request handlers and http clients have the same type.

I am really excited to see this feature merged into master and I would like to help.
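That symmetry is easy to see in a sketch: with both sides shaped like `req => Promise<response>`, a handler can call a "client" exactly as it would call another handler (everything below is illustrative):

```javascript
// A "client" and a handler share the same shape: req => Promise<response>.
const upstream = async req => ({ status: 200, body: { greeting: 'hello' } })

// A proxy handler is then just function composition:
const proxy = client => async req => {
  const upstreamRes = await client(req)
  return { status: upstreamRes.status, body: upstreamRes.body }
}

const handler = proxy(upstream)
```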

rauchg commented 5 years ago

@alisabzevari thanks for your interest and nice comment. If you want to start implementing this, we can provide guidance. We'd also love to port micro to TypeScript and have great @types support.

rapzo commented 5 years ago

I'd love to help out on this. Count me in for anything.


rauchg commented 5 years ago

Thanks @rapzo! @alisabzevari please share your email addresses and I'll set up a Slack channel to work on this project.

rapzo commented 5 years ago

I believe my email is public here. Slack? What about spectrum? :-P

Anyways, I'm there with the same handler.

Thanks for the opportunity.


alisabzevari commented 5 years ago

I would love to help. Here is my email: alisabzevari@gmail.com

rapzo commented 5 years ago

Dunno if it got lost in the water. My email is ruipedro.lima@gmail.com. I've seen a PR has been opened. Wouldn't it be better to have a different repo? Tools are gonna be different, for sure.

ianstormtaylor commented 5 years ago

Any update on this from ZEIT folks? I'm curious because this sounds like a great evolution of the API.


Also curious... in the body of @rauchg's writeup there's an example for CORS handling. Where you end up writing a "middleware" like this:

withCORS(req => 
  res('Hi there', 200)
)

Doesn't this kind of eliminate the benefits of "no middleware"? To me it seems like it would be nice if Micro instead encouraged doing something like:

req => {
  const r = res('Hi there', 200)
  cors(r)
  return r
}

This way even these middleware stay as simple functions, instead of dealing with the confusion of having to compose all of the "global" middleware.

For example, by avoiding middleware it can be easy to add tracing to specific logic:

req => {
  let r = res('Hi there', 200)

  trace('prettify', () => {
    r = prettify(r)
  })

  r = cors(r)
  return r
}

I'm thinking about this these days because trying to add tracing to Express-like middleware is near impossible. And even with Koa it's not solved, since you end up accidentally over-counting the durations because middleware await next() inside themselves.


If so, then it seems like res() should be named something else, so that you can do:

const res = somethingElse(...)

Because res is the obvious variable name for the response object itself?


I bring all of this up, not to say that you couldn't do the "middleware" approach. But I'd think that Micro's docs should actively discourage people from doing it, because it eliminates some of the gains it has made. And it would be unfortunate if the ecosystem of NPM packages were still being made in the Express/Koa-like fashion.

What do you think?

alisabzevari commented 5 years ago

Middlewares, or Filters here (a term from the server-as-a-function paper), are mostly used for cross-cutting concerns that are not part of the business logic in a handler. IMO, functionality like authentication, response serialization based on the request media type, logging, error handling, etc., which are all application-agnostic, are good examples of Filters. You don't want to pollute your handler with these concerns. You also don't want to involve them when you write unit tests for your handlers, because you want to focus the tests on the business logic.

Adding to Guillermo's comment, I think middlewares are a bad idea because they obscure the chain of responsibility. It is not easy to understand which middlewares will be involved when the server is handling a request. You have to dig into the code to find out which middlewares will be effective for a request handler: they can be defined globally in a different file, for a top-level router, or specifically for the route handler you are focusing on. You can mitigate the problem by always specifying all middlewares next to the handler, but with Filters you can still have more general and composable functionality without breaking the chain of responsibility, and keep decoupled layers in your application.

But in the end, you are free to use the middlewares as you described.

ianstormtaylor commented 5 years ago

@alisabzevari can you give a small code sample of what you consider a "filter" vs. a "middleware"? I'm not sure exactly what you're arguing for otherwise, because it's hard for me to tell what you think is the ideal way to do things like auth, CORS, error handling, etc.

What I'm arguing is that it's possible that "composition", like:

withCORS(req => 
  res('Hi there', 200)
)

…being the standard way the community coalesces around adding functionality will still be harmful, since it makes it very hard to customize those behaviors (for me the use case is adding tracing, but there are others). What you have is essentially middleware for the things you decide to compose.

Whereas something more imperative like:

(req, res) => {
  cors(req, res)
  send(res, 200, 'Hi there')
}

…doesn't suffer from that issue.

And it also benefits from being able to rely entirely on Node.js built-ins for requests and responses, instead of creating yet another abstraction that isn't core the platform?


It seems similar to me to React's DX, which folks might be familiar with. The older pattern of using HOCs to compose up behaviors is very hard to reason about. Whereas using the more imperative Hooks inline when you need them is much easier to follow.

alisabzevari commented 5 years ago

A Filter is a higher order function which accepts a handler, decorates the handler with some functionality and returns a new handler. The following types define the RequestHandler and Filter (The types are simplified here).

type RequestHandler = (req: Request) => Promise<Response>
type Filter = (handler: RequestHandler) => RequestHandler

// This will be the CORS filter:
function withCors(next) {
  return async (req) => {
    const res = await next(req);
    return res.setHeaders({
      'Access-Control-Allow-Origin': '*'
    });
  }
}

const myHandler = req => res('Hi there', 200);

const decoratedHandler = withCors(myHandler);

The benefit of this implementation is that myHandler is not coupled to withCors.

ianstormtaylor commented 5 years ago

Okay, that's what I figured, thanks!

But I'm not sure I see the benefit to declaring things that way...

const server = micro(withCors(req => {
  return res('Hi there', 200)
}))

What is gained from the composition? and from the immutability? Trying to figure out where the 'Access-Control-Allow-Origin' comes from is still going to be hard, because you have to walk through all of the composed filters. And trying to only have cors happen sometimes is still going to be hard.

Whereas you achieve the exact same thing with:

const server = micro((req, res) => {
  cors(res)
  send(res, 200, 'Hi there')
})

…but it's much easier to reason about, and toggle on/off functionality.

Or if immutability has real benefits, then:

const server = micro(req => {
  let r = res('Hi there', 200)
  r = cors(r)
  return r
})
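The `cors` used in these snippets isn't defined anywhere in the thread; as a sketch, the immutable flavor would just be a plain function over the response value:

```javascript
// Sketch of a hypothetical cors() helper: returns a copy of the
// response value with the CORS header added, mutating nothing.
const cors = r => ({
  ...r,
  headers: { ...r.headers, 'Access-Control-Allow-Origin': '*' }
})

let r = { status: 200, body: 'Hi there', headers: {} }
r = cors(r)
```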

@rauchg what are your thoughts on this? I was kind of excited to think that Micro offered what no other routing system did: a clear, imperative approach to layering logic together to handle a request and write a response. And with things like json(req) it does give you that for request handling. But then this higher-order approach loses the benefits for response writing. Am I missing something?

ianstormtaylor commented 5 years ago

Thinking about this more... One thing that is nice about Micro right now is that it just uses the native req and res objects, which guarantees that developers have full control over the response that is sent out. This comes in handy in some cases...

For example right now Micro assumes that you want non-pretty-printed JSON in production, but lots of APIs (eg. Stripe) prefer to make the default pretty-printed instead.

To achieve this currently, you'd need to write your own prettySend() and then just use that instead. But if Micro starts using a custom, immutable Response object that encapsulates all of the sending logic, this becomes impossible?

sch commented 5 years ago

@ianstormtaylor In the same way that you'd write some prettySend function today, would you also be able to create your own apiRes function that's specific to your api?

import { res, Body } from 'micro';
import { OutgoingHttpHeaders } from "http";

function apiRes(body: Body, statusCode?: number, headers?: OutgoingHttpHeaders) {
  if (typeof body === "object" || typeof body === "number") {  
    body = JSON.stringify(body, null, 2);
    headers = headers || {};
    headers["Content-Type"] = "application/json; charset=utf-8";
  }

  return res(body, statusCode, headers);
}
chrisdickinson commented 5 years ago

I'm extremely late to the party on this, but I love this direction. At npm, we did this with spife and it worked out very nicely. (Implementation here).

In particular: we used Symbol to attach metadata (like headers and status) to outgoing responses. That was partially successful: it allowed users to return vanilla JS objects and streams, and decorate them with header data only when necessary. We went a little further and started attaching template info to outgoing responses as well, which allowed us to do some interesting things with React SSR (inspired by Next.js!)

Essentially, this looked like:

server.get('/hello/:target', handler)
async function handler (req, { target = 'world' }) {
  return {
    message: `hello ${target}`,
    [Symbol.for('status')]: 418, // teapot
    [Symbol.for('headers')]: {
      'x-clacks-overhead': 'gnu/terry pratchett'
    }
  }
}

However, there are tradeoffs with Symbols:

So, YMMV with Symbols.
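Reading that metadata back out on the framework side is small, which is part of the appeal (a sketch, not spife's actual source):

```javascript
// Sketch: the server peels Symbol metadata off an otherwise-vanilla value.
const STATUS = Symbol.for('status')
const HEADERS = Symbol.for('headers')

function interpret(value) {
  return {
    status: value[STATUS] || 200,
    headers: value[HEADERS] || {},
    // Symbol-keyed properties are skipped by JSON.stringify, so the body
    // serializes cleanly without the metadata.
    body: JSON.stringify(value)
  }
}

const out = interpret({
  message: 'hello world',
  [STATUS]: 418,
  [HEADERS]: { 'x-clacks-overhead': 'gnu/terry pratchett' }
})
```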

OTOH, a wrapper type can be useful. With entropic (& boltzmann, which is written on top of micro) we used the Fetch API's Response as a wrapper type. The upside: compatibility with the Service Worker API; the downside: you have to serialize the body at the point of response instantiation (or otherwise provide a stream) – it's not super ergonomic for JSON or higher order functions.

In general: with regards to doing work after the response is sent, either approach will look something like this:

async function handler (request) {
  doDeferredWork().catch(err => {
    // log the error
  })
  return {something: 1}
}

async function doDeferredWork() {
  // do deferred work here.
}

And sending back event stream data can look something like this:

const { Readable } = require('stream')
async function handler (request) {
  return new Readable({
    objectMode: true,
    read () {
      this.push({'hello': 'world'})
    }
  })
}

// or
async function *handler (request) {
  for (const xs of [1,2,3]) {
    await sleep(1)
    yield xs
  }
}
Bessonov commented 5 years ago

Hey guys, would love to see TypeScript support. Does it make sense to split the TypeScript rewrite (part of #383) from the discussion about the API? This would help users of micro and TS migrate to the new version of micro.

EDIT: I'm aware of https://github.com/zeit/micro/blob/master/packages/micro/micro.d.ts, but I'm more comfortable with TypeScript definitions generated from TypeScript code.

Rokt33r commented 5 years ago

Thanks for the idea. I made this: https://github.com/prismyland/prismy. I also referred to the APIs of redux and reselect. It works really well.

jamiehodge commented 5 years ago

@rauchg one issue I see with this proposal is that it does not appear to provide a mechanism for accumulating/communicating state across composed handlers.

Middleware, as you know, typically mutates the request or response objects, which is bad. Marius' server-as-function paper describes a richer request type that includes, for example, authentication details. Because micro is working with low-level req/res types, that approach would involve progressively extending the request type, which sounds messy.

All this to say that short of some sort of monadic solution, it seems as though the graphql solution of passing a user-defined context object would solve most use cases.

That would mean that handlers would be <T>(req, context: T) => Response and filters would be <A,B>(Handler<A>) => Handler<B> (I'm thinking of the authentication use case).
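That typed idea can be sketched in plain JS; all names below are illustrative, not part of any proposed API:

```javascript
// Sketch: a filter threads an immutable context object instead of
// mutating req, so downstream handlers see the accumulated state.
const withAuth = handler => async (req, context = {}) => {
  const user = { name: 'alice' } // stand-in for real authentication
  return handler(req, { ...context, user })
}

const handler = withAuth(async (req, context) => ({
  status: 200,
  body: `hello ${context.user.name}`
}))
```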

Bessonov commented 4 years ago

Hey guys, any movement on API, TypeScript or release? Or is micro abandoned?

jamo commented 4 years ago

@Bessonov please check out micri; it's based on the same fundamentals as micro. We are actively maintaining and working on it, with a focus on performance and on ensuring it remains small, as the name implies: https://www.npmjs.com/package/micri

rauchg commented 4 years ago

@Bessonov this API didn't get traction for various reasons. micro is being used in production and working wonderfully for us and others. We'll focus on other improvements in smaller issues and PRs.