oven-sh / bun

Incredibly fast JavaScript runtime, bundler, test runner, and package manager – all in one
https://bun.sh

Implement fetch() full-duplex streams (state Bun's position on fetch #1254) #7206

Open guest271314 opened 11 months ago

guest271314 commented 11 months ago

What is the problem this feature would solve?

Bun does not implement full-duplex streaming for fetch(). Node.js and Deno do.

What is the feature you are proposing to solve the problem?

Fetch body streams are not full duplex #1254.

$ deno run -A full_duplex_fetch_test.js
495.081126
TEST
TEST, AGAIN
Stream closed
$ node --experimental-default-type=module full_duplex_fetch_test.js
1286.0589990019798
TEST
TEST, AGAIN
Stream closed
$ bun run full_duplex_fetch_test.js
24112.282304
Stream closed

Bun does not stream full duplex and hangs when attempting to. Test attached: full_duplex_fetch_test.js.zip
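The zipped test isn't inlined above, so as a hedged sketch only: a full-duplex fetch test can look like the following, assuming an echo server at http://localhost:8080 that streams the request body straight back as the response body (the server URL and echo behavior are assumptions, not the attached test).

// Sketch of a full-duplex fetch test; the actual full_duplex_fetch_test.js
// is attached as a zip and not reproduced here.
const { readable, writable } = new TransformStream();
const writer = writable.getWriter();
const encoder = new TextEncoder();
const decoder = new TextDecoder();

const response = await fetch("http://localhost:8080", {
  method: "POST",
  duplex: "half", // required by the Fetch spec for ReadableStream bodies
  body: readable,
});

// Write to the request while reading the response: on a full-duplex
// runtime each echo arrives before the request body is closed.
const reader = response.body.getReader();
await writer.write(encoder.encode("TEST"));
console.log(decoder.decode((await reader.read()).value)); // "TEST"
await writer.write(encoder.encode("TEST, AGAIN"));
console.log(decoder.decode((await reader.read()).value)); // "TEST, AGAIN"
await writer.close();
console.log("Stream closed");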

What alternatives have you considered?

No response

guest271314 commented 6 months ago

Node.js and Deno convert streaming requests to Transfer-Encoding: chunked requests when duplex: "half" is set in the fetch() or Request() options and the server does not respond with HTTP/2 headers.

Bun fails to upload the ReadableStream entirely.

// Identity TransformStream: fetch() uploads the readable side while we
// write to the writable side from the same script.
var { readable, writable } = new TransformStream({}, {}, { highWaterMark: Infinity });
var writer = writable.getWriter();
var encoder = new TextEncoder();
var decoder = new TextDecoder();

async function fn(body = "test") {
  return fetch("http://0.0.0.0:8080", {
    duplex: "half", // required for ReadableStream request bodies
    method: "post",
    body,
    headers: {
      "Access-Control-Request-Private-Network": "true",
    },
  })
    .then((r) => {
      console.log(...r.headers, r.body);
      // Echo the response stream to stdout.
      return r.body
        .pipeTo(
          new WritableStream({
            write(v) {
              console.log(decoder.decode(v));
            },
            close() {
              console.log("Stream closed");
            },
          }),
        )
        .catch(console.warn);
    })
    .catch(console.error);
}

fn(readable);
await writer.ready;
await writer.write(encoder.encode("test "));
await writer.write(encoder.encode("another test "));
await writer.close();
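The script.js that produced the dumps below isn't included; as a hypothetical stand-in, a raw TCP server that logs every byte as it arrives (so headers and chunked framing are visible verbatim) could look like this, assuming Node.js and node:net. The fixed 200 response is an assumption so the client's fetch() can resolve.

// Hypothetical stand-in for script.js: dump raw request bytes to stdout.
import net from "node:net";

net
  .createServer((socket) => {
    socket.on("data", (chunk) => {
      // Raw bytes exactly as sent: request line, headers, chunk sizes, payloads.
      console.log(chunk.toString());
    });
    // Minimal fixed response; the real server's behavior is unknown.
    socket.write("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n");
  })
  .listen(8080, "0.0.0.0");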

Bun (the request arrives with Content-Length: 0; the stream body is never uploaded):

POST / HTTP/1.1
access-control-request-private-network: true
Connection: keep-alive
User-Agent: Bun/1.1.5
Accept: */*
Host: 0.0.0.0:8080
Accept-Encoding: gzip, deflate, br
Content-Length: 0 
script.js:205 POST / HTTP/1.1
access-control-request-private-network: true
Connection: keep-alive
User-Agent: Bun/1.1.5
Accept: */*
Host: 0.0.0.0:8080
Accept-Encoding: gzip, deflate, br
Content-Length: 0

Node.js doesn't include headers in the Nth Transfer-Encoding: chunked write, so depending on the server logic the payload can be interpreted as a WebSocket message or other data. Notice we don't get the second part of the data we wrote ("another test "):

POST / HTTP/1.1
host: 0.0.0.0:8080
connection: keep-alive
Access-Control-Request-Private-Network: true
accept: */*
accept-language: *
sec-fetch-mode: cors
user-agent: node
accept-encoding: gzip, deflate
transfer-encoding: chunked

5
test 
script.js:321 Got a POST request
script.js:326 POST / HTTP/1.1
host: 0.0.0.0:8080
connection: keep-alive
Access-Control-Request-Private-Network: true
accept: */*
accept-language: *
sec-fetch-mode: cors
user-agent: node
accept-encoding: gzip, deflate
transfer-encoding: chunked 5
test 
script.js:205 POST / HTTP/1.1
host: 0.0.0.0:8080
connection: keep-alive
Access-Control-Request-Private-Network: true
accept: */*
accept-language: *
sec-fetch-mode: cors
user-agent: node
accept-encoding: gzip, deflate
transfer-encoding: chunked
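The 5 / test lines in the dump above are HTTP/1.1 chunked framing: each chunk is a hexadecimal byte count on its own line, then that many payload bytes, and a 0-size chunk terminates the body. A minimal decoder sketch (my own illustration, not part of the test; assumes an ASCII payload so characters equal bytes):

// Decode a Transfer-Encoding: chunked body captured as a string.
function decodeChunked(raw) {
  let out = "";
  let rest = raw;
  for (;;) {
    const lineEnd = rest.indexOf("\r\n");
    const size = parseInt(rest.slice(0, lineEnd), 16); // hex byte count
    if (size === 0) break; // 0-size chunk ends the body
    out += rest.slice(lineEnd + 2, lineEnd + 2 + size);
    rest = rest.slice(lineEnd + 2 + size + 2); // skip payload + trailing CRLF
  }
  return out;
}

console.log(decodeChunked("5\r\ntest \r\n0\r\n\r\n")); // "test "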

Deno sends the uploaded ReadableStream to the TCP server as multiple POST requests and manages to get all of the data to the client:

POST / HTTP/1.1
access-control-request-private-network: true
accept: */*
accept-language: *
user-agent: Deno/1.42.4
accept-encoding: gzip, br
host: 0.0.0.0:8080
transfer-encoding: chunked 12
test another test 
0
script.js:205 POST / HTTP/1.1
access-control-request-private-network: true
accept: */*
accept-language: *
user-agent: Deno/1.42.4
accept-encoding: gzip, br
host: 0.0.0.0:8080
transfer-encoding: chunked
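A way to probe at runtime whether a given engine will actually send a ReadableStream request body is the detection pattern popularized for browsers; hedged here, since server-side runtimes may report support differently than they behave on the wire:

// Sketch: feature-detect streaming request bodies. A supporting runtime
// must read the duplex option, and must not serialize the stream to text
// (which would add a Content-Type header).
const supportsRequestStreams = (() => {
  let duplexAccessed = false;
  const hasContentType = new Request("http://localhost", {
    method: "POST",
    body: new ReadableStream(),
    get duplex() {
      duplexAccessed = true;
      return "half";
    },
  }).headers.has("Content-Type");
  return duplexAccessed && !hasContentType;
})();

console.log("streaming request bodies:", supportsRequestStreams);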