
server_nano

A light, very fast, and friendly HTTP/WebSocket server written in Dart.

I'm building the same library in Rust too; if you're interested, check it out here.

🚀 Getting Started

Installation

To integrate server_nano into your Dart project:

dart pub add server_nano

Basic Usage

Here's a basic example to get you started:

import 'package:server_nano/server_nano.dart';

void main() {
  final server = Server();

  // sync requests
  server.get('/', (req, res) {
    res.send('Hello World!');
  });

  // async requests
  server.get('/user/:id', (req, res) async {
    // Simulate a db query delay
    await Future.delayed(Duration(seconds: 2));
    res.send('Hello User ${req.params['id']}!');
  });

  // websockets out of the box
  server.ws('/socket', (socket) {
    socket.onMessage((message) {
      print(message);
    });

    // rooms support
    socket.join('dev-group');

    socket.emitToRoom(
        'connected', 'dev-group', 'User ${socket.id} connected to dev-group');
  });

  server.listen(port: 3000);
}

How fast is it?

server_nano is designed to be as fast as possible.

Here is a test using wrk to measure the performance of the server on a MacBook Pro M1:

@MacBook-Pro ~ % wrk -t 6 -c 120 -d 10s --latency http://localhost:3000/
Running 10s test @ http://localhost:3000/
  6 threads and 120 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.83ms    4.60ms  93.28ms   96.85%
    Req/Sec    17.12k     3.03k   20.57k    90.17%
  Latency Distribution
     50%    1.02ms
     75%    1.38ms
     90%    2.01ms
     99%   28.34ms
  1022096 requests in 10.00s, 212.49MB read
Requests/sec: 102164.16
Transfer/sec:     21.24MB

In this test, we have an endpoint that returns a simple JSON object.

// Compiled with `dart compile exe ./example/app.dart` and run with `./example/app.exe`.
Future<void> main() async {
  final server = Server();

  server.get('/', (req, res) {
    res.sendJson({'Hello': 'World!'});
  });

  await server.listen(port: 3000);
}

To compare, here is the same test using Express, the most popular web framework for Node.js:

const express = require("express");

const expressApp = express();

const expressPort = 3003;

expressApp.get("/", (req, res) => {
  res.json({ hello: "world!!!" });
});

expressApp.listen(expressPort, () => {
  console.log(`[server]: Server is running at http://localhost:${expressPort}`);
});
@MacBook-Pro ~ % wrk -t 6 -c 120 -d 10s --latency http://localhost:3003/
Running 10s test @ http://localhost:3003/
  6 threads and 120 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    10.23ms   30.90ms 542.87ms   98.17%
    Req/Sec     2.99k   358.79     3.72k    89.92%
  Latency Distribution
     50%    6.24ms
     75%    6.91ms
     90%    8.13ms
     99%  164.88ms
  180310 requests in 10.10s, 43.85MB read
Requests/sec:  17848.16
Transfer/sec:      4.34MB

Holy moly! server_nano handled 102,164.16 requests per second, while Express handled only 17,848.16. That's a huge difference!

Now let's compare the performance of server_nano with Fastify, a fast framework and the second most popular for Node.js:

const Fastify = require("fastify");

const fastifyPort = 3002;

const fastify = Fastify({
  logger: false,
});

fastify.get("/", (request, reply) => {
  return { hello: "world!!!" };
});

fastify.listen({ port: fastifyPort, host: "0.0.0.0" }, (err, address) => {
  if (err) throw err;
  console.log(`[server]: Server is running at http://localhost:${fastifyPort}`);
});
@MacBook-Pro ~ % wrk -t 6 -c 120 -d 10s --latency http://localhost:3002/
Running 10s test @ http://localhost:3002/
  6 threads and 120 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.32ms    8.68ms 228.04ms   99.00%
    Req/Sec     7.61k   707.93     8.25k    92.24%
  Latency Distribution
     50%    2.53ms
     75%    2.65ms
     90%    2.85ms
     99%   11.75ms
  458601 requests in 10.10s, 83.53MB read
Requests/sec:  45398.17
Transfer/sec:      8.27MB

Good job, Fastify! But server_nano is still faster 😎 (a lot faster).

Why use server_nano? 🤔

📘 API Reference:

Server:

HTTP:

server_nano supports a variety of HTTP methods like GET, POST, PUT, DELETE, PATCH, OPTIONS, HEAD, CONNECT and TRACE. The syntax for each method is straightforward:

server.get('/path', handler);
server.post('/path', handler);
server.put('/path', handler);
server.delete('/path', handler);
// ... and so on for other methods.

Where handler is a function that takes in a Request and Response object. Example:

server.get('/user/:id', (req, res) {
  final id = req.params['id'];
  res.send('Hello User $id!');
});

Request:

The ContextRequest class represents the incoming HTTP request and exposes methods and properties for extracting request information.

The MultipartUpload class represents a file or data segment from a 'multipart/form-data' request. It provides methods to convert the upload into a file or JSON representation.

Response:

The ContextResponse class provides a variety of methods to help you construct your response.

Each method is chainable, allowing for a fluent interface when constructing responses. For example:

res.status(200).setContentType('text/plain').send('Hello, World!');
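The same fluent style works with sendJson. Here is a minimal sketch; the route, status code, and payload are illustrative, not part of the library's documented examples:

```dart
import 'package:server_nano/server_nano.dart';

void main() {
  final server = Server();

  // Chain a status code with a JSON body.
  // The '/items' route and 201 status are illustrative.
  server.post('/items', (req, res) async {
    res.status(201).sendJson({'id': 42, 'created': true});
  });

  server.listen(port: 3000);
}
```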

WebSocket:

server_nano supports WebSockets out of the box, catering to a broad spectrum of real-time applications. The WebSocket module offers:

You can set up a WebSocket route by calling the ws method on your server instance:

server.ws('/socket', (socket) {
  // Your logic here.
});
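As a sketch of how the pieces fit together, here is a minimal chat relay built only from the calls shown elsewhere in this README (onMessage, join, emitToRoom, socket.id); the event name and room name are illustrative:

```dart
import 'package:server_nano/server_nano.dart';

void main() {
  final server = Server();

  server.ws('/chat', (socket) {
    // Every client that connects joins the same room.
    socket.join('lobby');

    // Relay each incoming message to everyone in the room.
    // The 'message' event and 'lobby' room are illustrative names.
    socket.onMessage((message) {
      socket.emitToRoom('message', 'lobby', '${socket.id}: $message');
    });
  });

  server.listen(port: 3000);
}
```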

Sending:

Broadcasting:

Room Management:

Retrieval:

Event Listeners:

Other:

Middlewares:

Middlewares allow you to manipulate request and response objects before they reach your route handlers. They are executed in the order they are added.
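For example, registering Helmet before Cors means Helmet's handler runs first on every request:

```dart
import 'package:server_nano/server_nano.dart';

void main() {
  final server = Server();

  // Middlewares execute in registration order: Helmet first, then Cors.
  server.use(Helmet());
  server.use(Cors());

  server.get('/', (req, res) {
    res.send('Hello World!');
  });

  server.listen(port: 3000);
}
```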

Helmet:

Helmet is a middleware that sets HTTP headers to protect against some well-known web vulnerabilities. Here's an example of how to use the Helmet middleware:

server.use(Helmet());

Headers set by Helmet:

Cors:

Cors is a middleware that allows cross-origin resource sharing. Here's an example of how to use the Cors middleware:

server.use(Cors());

Creating Custom Middlewares:

Creating a custom middleware is straightforward. Simply extend the Middleware class and override the handler method.

class CustomMiddleware extends Middleware {
  @override
  Future<bool> handler(ContextRequest req, ContextResponse res) async {
    // Your custom logic here.

    // Return true to continue to the next middleware.
    // Return false to stop the middleware chain.
    return true;
  }
}
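Registering a custom middleware works the same way as the built-in ones. A minimal sketch (the middleware name and logging side effect are illustrative):

```dart
import 'package:server_nano/server_nano.dart';

// A middleware that lets every request through after running custom logic.
class LoggingMiddleware extends Middleware {
  @override
  Future<bool> handler(ContextRequest req, ContextResponse res) async {
    // Illustrative side effect; replace with your own logic.
    print('request received');
    return true; // continue to the next middleware / route handler
  }
}

void main() {
  final server = Server();

  server.use(LoggingMiddleware());

  server.get('/', (req, res) {
    res.send('Hello World!');
  });

  server.listen(port: 3000);
}
```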

Listen:

To start your server, call the listen method on your server instance:

server.listen(port: 3000);

SSL/TLS:

You can make your server serve over HTTPS by providing SSL/TLS certificate details:

server.listen(
  host: '0.0.0.0',
  port: 8080,
  certificateChain: 'path_to_certificate_chain.pem',
  privateKey: 'path_to_private_key.pem',
  password: 'optional_password_for_key',
);
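For local testing, a self-signed certificate can be generated with OpenSSL; the file names below only mirror the placeholder paths in the snippet above:

```shell
# Generate a self-signed certificate and an unencrypted private key,
# valid for 365 days for localhost (local testing only).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout path_to_private_key.pem \
  -out path_to_certificate_chain.pem \
  -days 365 -subj "/CN=localhost"
```

Because `-nodes` leaves the key unencrypted, the `password` argument can be omitted in that case.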

Serving Static Files:

server_nano supports serving static files out of the box. Simply call the static method on your server instance:

server.static('/path/to/static/files');

Options:
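Putting it together, static files can be served alongside normal routes; a minimal sketch (the directory path and health route are illustrative):

```dart
import 'package:server_nano/server_nano.dart';

void main() {
  final server = Server();

  // Serve files from the given directory (path is illustrative).
  server.static('/path/to/static/files');

  server.get('/health', (req, res) {
    res.send('ok');
  });

  server.listen(port: 3000);
}
```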

🤝 Contributing

If you'd like to contribute to the development of server_nano, open a pull request.

📜 License

server_nano is distributed under the MIT License.