Closed: vsly-ru closed this issue 1 month ago
Tags to make the ticket searchable: http2.createServer, http2.createSecureServer, createServer, createSecureServer
Possibly related https://github.com/oven-sh/bun/issues/7206
This has become the 22nd most upvoted ticket in just 1 day, which I find to be highly unusual.
I'm dying for this to land so I can finally move on from node once and for all
Same - can't use backend integrations for Auth0, Stripe, etc. without it, because they strictly require HTTP/2 with HTTPS even on localhost when using their dev mode APIs.
10th most requested feature in less than a week.
@Electroid, there is a picture forming that the http2 client release only satisfied a minority of the people reacting to the other ticket. I imagine Windows support is drawing all resources right now (and rightly so!), but any updates or plans for this would be appreciated.
Yes, it's clear that folks need this. I think most realistic would be between 1.1 and 1.2, but we don't have specific dates right now.
Another dev here who has to use JS/TS and refuses to build "yet another JSON REST API microservice" when everyone knows gRPC is the better option.
Bun is probably being avoided because HTTP/2 isn't in yet. After this is implemented there will probably be some issues related to it, so until then my pals and I are going to stay with Node.
An alternative could be Bun's TCP server?
Because of this I'm unable to use the Firebase SDK and even the GCP SDKs. The majority of our code is based on these SDKs.
If this is solved, we can adopt Bun completely for our internal usage.
Me and the gang are waiting for this to drop so we can finally ditch node
It's also required for running Vite with HTTPS locally when trying to use the "vite-plugin-mkcert" plugin.
Exactly. Missing this breaks every framework except Next.js through mkcert, and maybe Next.js too through a similar issue.
It's also required for running Vite with HTTPS locally when trying to use the "vite-plugin-mkcert" plugin.
I'm pretty sure this is related, but just FYI: when running Vite 4.5 with the basicSsl() plugin included in the config (from @vitejs/plugin-basic-ssl), the error message is very misleading:
% bun --bun vite
error when starting dev server:
TypeError: Invalid path string: path is too long (max: 1024)
at readFile (native)
at <anonymous> (/Volumes/Code/portal/node_modules/vite/dist/node/chunks/dep-52909643.js:54807:11)
at readFileIfExists (/Volumes/Code/portal/node_modules/vite/dist/node/chunks/dep-52909643.js:22374:33)
at <anonymous> (/Volumes/Code/portal/node_modules/vite/dist/node/chunks/dep-52909643.js:22368:21)
at resolveHttpsConfig (/Volumes/Code/portal/node_modules/vite/dist/node/chunks/dep-52909643.js:22361:35)
at <anonymous> (/Volumes/Code/portal/node_modules/vite/dist/node/chunks/dep-52909643.js:24928:142)
at processTicksAndRejections (:12:39)
For anyone stumbling into this issue while getting @httptoolkit/httpolyglot to run in bun (required e.g. for mockttp):
We created a fork which removes the http2.createServer calls. With this, you can run mockttp just fine. Once Bun supports this, the fork will of course be deprecated.
Others have mentioned it but this issue absolutely cripples anyone using GCP libraries, including firebase-admin. We need this so bad
@zachsents I agree, I had to use different firebase-functions to execute Firestore queries and maintain a 2nd API, which is not desirable @Jarred-Sumner
Hope this gets fixed, so that I can move it to the main Bun + Elysia.js API
The one feature my team needs to ditch node. I hope this gets implemented sooner than later!
Plz I need this feature for my game server
Looks like Bun v1.1 got support for this https://bun.sh/blog/bun-v1.1#http-2-client
@rhuanbarreto that seems to be about an HTTP2 client (making HTTP2 calls from Bun) not a server.
Yes, it's clear that folks need this. I think most realistic would be between 1.1 and 1.2, but we don't have specific dates right now.
I think we've reached that timeframe now. Is there any chance of getting an ETA on this? My team and I are basically on the fence about adopting Bun, but this is just a hard blocker for us. I think this is also one of the most awaited features by now.
Keep in mind deno (or node) can be used for this. Both support full-duplex streaming using fetch(). deno has a built-in WebSocket server. There's no reason why the modern JavaScript programmer can't use bun, deno, node, and other JavaScript runtimes at the same time.
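A minimal sketch of that built-in WebSocket server, assuming Deno's Deno.serve() and Deno.upgradeWebSocket() APIs:
// Echo each WebSocket message back uppercased; Deno-only APIs.
Deno.serve({ port: 8080 }, (request) => {
  if (request.headers.get("upgrade") !== "websocket") {
    return new Response("expected a WebSocket upgrade", { status: 400 });
  }
  const { socket, response } = Deno.upgradeWebSocket(request);
  socket.onmessage = (event) => socket.send(String(event.data).toUpperCase());
  return response;
});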
@TomasHubelbauer If you are confused by my last comment, see my technical reasoning here: Why I use node, deno, bun, qjs, tjs at the same time.
Very interesting, thanks! But that doesn't mean bun shouldn't implement an http2 server
@movva-gpu
But that doesn't mean bun shouldn't implement an http2 server
Correct. I just ain't waiting around for Bun to do this when I have other options I can use right now.
@movva-gpu Other than Node.js's Undici fetch() implementation not supporting the file: protocol, which we can fix ourselves (https://gist.github.com/guest271314/a4f005d9a6b5b433ae6d6e6c5c6d7595), Bun's fetch() implementation does not support half-duplex or full-duplex streaming as both node and deno do, see https://github.com/oven-sh/bun/issues/7206. So I can't do this https://github.com/guest271314/native-messaging-deno/tree/fetch-duplex, which I do using deno, or this https://github.com/guest271314/native-messaging-nodejs/tree/full-duplex, which I do with node, with bun. For starters Bun could do whatever Undici does for fetch(), save for the case of not allowing the file: protocol.
Same thing with the server. Here's a full-duplex deno server that you can test for yourself: https://gist.github.com/guest271314/d20b2a2924d0e2e0c333c01d8b8acace.
So one runtime not having this or that implemented doesn't stop my projects, because I don't entertain a preference for any single JavaScript runtime and regularly use at least 5 JavaScript runtimes to exploit the respective features they have for projects. Nothing is stopping anybody from using deno for a full-duplex server until bun gets their own gear working. Or, don't, and wait for Bun. I ain't doing that.
@guest271314 Not waiting for Bun to add stuff it currently doesn't have is great! Anyone can mix and match what's at their disposal at any one time. This ticket is about adding HTTP2 to Bun though, so presumably most people subscribed to it are actually waiting for Bun to add this and sharing relevant details (like the recent HTTP2 client support) in the meanwhile. There is value in not having to complicate one's deployment and infrastructure and this value may be worth the wait to many.
That's why I'll probably switch to deno or something while waiting
@TomasHubelbauer There's no reason to wait for anything. We have child process capabilities in Bun. Thus we can use any of the various HTTP/2, HTTP/3, and QUIC implementations at large to achieve the goal. In the case of Deno, since they figured out a way to reduce the result of compile to about half the size of the executable, we can write the server in JavaScript in deno, use compile, and have a working full-duplex server with child process, today.
Or, wait around for somebody else to do something. I am a programmer. I ain't waiting for anybody to do something. I'm gonna hack something together and make it so.
People say they want this or that, then when they get it they still want something else.
You asked how to handle arbitrary file extensions and specifiers for static and dynamic ECMAScript Modules. I created a working solution in Bun and Node.js. Not a word from you since then.
Think about it. Both Bun and Deno talk about Node.js compatibility. The only way to verify Node.js compatibility is to constantly run node and bun or deno at the same time, which means in this case node has to be on hand to use for an HTTP/2 server.
Personally I prefer using WHATWG Streams and WHATWG Fetch's Response in the server, so I use Deno's full-duplex server.
Given all of the people in this post, we can make this happen on our own, yesterday.
@movva-gpu
That's why I'll probably switch to deno or something while waiting
It's probably a good idea for JavaScript programmers to have at least bun, deno, node, qjs, and tjs on hand at all times, and also Cloudflare Workerd, Bytecode Alliance's Javy, and VMware Labs' Wasm Workers Server.
Alright, here's an HTTP/2 full-duplex server compiled from Deno source code and run from Bun.
deno_full_duplex_server.js, which is a live server on Deno Deploy at https://comfortable-deer-52.deno.dev. All it does right now is expect lowercase letters encoded as a Uint8Array and send back the same letters as uppercase. You'll have to supply your own certificate and key. Compiled to a standalone executable with deno compile -A --unsafely-ignore-certificate-errors=localhost --unstable deno_full_duplex_server.js
const responseInit = {
headers: {
'Cache-Control': 'no-cache',
'Content-Type': 'text/plain; charset=UTF-8',
'Cross-Origin-Opener-Policy': 'unsafe-none',
'Cross-Origin-Embedder-Policy': 'unsafe-none',
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Private-Network': 'true',
'Access-Control-Allow-Headers': 'Access-Control-Request-Private-Network',
'Access-Control-Allow-Methods': 'OPTIONS,POST,GET,HEAD,QUERY',
},
};
for await (
const conn of Deno.listenTls({
port: 8443,
certFile: 'certificate.pem',
keyFile: 'certificate.key',
alpnProtocols: ['h2', 'http/1.1'],
})
) {
for await (const {
request,
respondWith
}
of Deno.serveHttp(conn)) {
if (request.method === 'OPTIONS' || request.method === 'HEAD') {
respondWith(new Response(null, responseInit));
continue; // don't fall through to the streaming branch below
}
if (request.method === 'GET') {
respondWith(new Response(null, responseInit));
continue;
}
try {
const stream = request.body
.pipeThrough(new TextDecoderStream())
.pipeThrough(
new TransformStream({
transform(value, c) {
c.enqueue(value.toUpperCase());
},
async flush() {
},
})
).pipeThrough(new TextEncoderStream());
respondWith(new Response(
stream, responseInit));
} catch (e) {
}
}
}
Run in Bun with bun run bun_spawn_server.js
Bun.spawn(["deno_full_duplex_server"]);
Bun does not support duplex: "half" passed to Bun's fetch() implementation (see "Implement fetch() full-duplex streams (state Bun's position on fetch)" #1254), so this is tested with Deno, for now.
deno run -A --unsafely-ignore-certificate-errors=localhost deno_full_duplex_client.js
DANGER: TLS certificate validation is disabled for: localhost
LIVEANOTHER DUPLEX WRITE
deno_full_duplex_client.js
const {readable, writable} = new TransformStream();
let abortable = new AbortController();
let {
signal
} = abortable;
fetch("https://localhost:8443", {
method: "post",
duplex: "half",
body: readable.pipeThrough(new TextEncoderStream()),
signal
}).then((r) => {
r.body.pipeThrough(new TextDecoderStream()).pipeTo(new WritableStream({
write(value) {
console.log(value);
},
close() {
console.log("Closed");
}
}))
}).catch(console.error);
const writer = writable.getWriter();
await writer.write("live");
await writer.write("another duplex write");
Next I'll try to import Node.js' Undici fetch() implementation into a Bun script - which does support duplex: "half" for upload streaming and full-duplex streaming over HTTP/2, which is what is going on above.
I imported Node.js' Undici fetch() into Bun to run the same code in Bun that we run in Deno.
The first error is that Bun does not support TextEncoderStream() and TextDecoderStream().
I substituted a TextEncoder and TextDecoder in a TransformStream chained to the ReadableStream posted to fetch(), and in then() chained to fetch():
import { fetch as undiciFetch } from "undici";
const {readable, writable} = new TransformStream();
const encoder = new TextEncoder();
const decoder = new TextDecoder();
let abortable = new AbortController();
let {
signal
} = abortable;
undiciFetch("http://localhost:8000", {
method: "post",
duplex: "half",
body: readable.pipeThrough(new TransformStream({
transform(value, controller) {
controller.enqueue(encoder.encode(value));
}
})),
signal
}).then((r) => {
r.body.pipeThrough(new TransformStream({
transform(value, controller) {
controller.enqueue(decoder.decode(value));
}
})).pipeTo(new WritableStream({
write(value) {
console.log(value);
},
close() {
console.log("Closed");
}
}))
}).catch(console.error);
// Write to the request body, mirroring the Deno client above.
const writer = writable.getWriter();
await writer.write("live");
await writer.write("another duplex write");
The second error is:
bun run deno_full_duplex_client.js
CERT_HAS_EXPIRED: certificate has expired
path: "https://localhost:8443/"
There does not appear to be a way to ignore certificate errors in Bun, as we can in Deno (and the last time I checked, Chromium browser).
I re-compiled the Deno server script to a standalone executable using Deno.listen() instead of Deno.listenTls(). However, I still get the close() method of the WritableStream called, without any data being sent back.
bun run deno_full_duplex_client.js
Closed
Bun still does not support full-duplex streaming.
So supporting an HTTP/2 or HTTP/3 server needs to be coupled with a full-duplex client, or we are going to get one part that works and one part that doesn't. E.g., when a request is piped through another request using fetch(), it's simply not going to work.
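A sketch of that "request piped through another request" pattern; the URLs are placeholders:
// Stream the body of one response straight into a second request.
const upstream = await fetch("https://example.com/source");
await fetch("https://example.com/sink", {
  method: "POST",
  duplex: "half",      // required for ReadableStream request bodies
  body: upstream.body, // never buffered; streamed end to end
});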
The issue for the client code might be Node.js' Undici fetch():
deno run -A --unstable-byonm --unsafely-ignore-certificate-errors=localhost deno_full_duplex_client.js
DANGER: TLS certificate validation is disabled for: localhost
TypeError: fetch failed
at fetch (file:///home/user/bin/node_modules/.deno/undici@6.12.0/node_modules/undici/index.js:109:13)
at Object.runMicrotasks (ext:core/01_core.js:642:26)
at processTicksAndRejections (ext:deno_node/_next_tick.ts:53:10)
at runNextTicks (ext:deno_node/_next_tick.ts:71:3)
at eventLoopTick (ext:core/01_core.js:175:21)
Caused by Error: connect ECONNREFUSED 127.0.0.1:8000 - Local (null:undefined)
at __node_internal_captureLargerStackTrace (ext:deno_node/internal/errors.ts:91:9)
at __node_internal_exceptionWithHostPort (ext:deno_node/internal/errors.ts:215:10)
at TCPConnectWrap._afterConnect [as oncomplete] (node:net:170:16)
at TCP.afterConnect (ext:deno_node/internal_binding/connection_wrap.ts:43:11)
at ext:deno_node/internal_binding/tcp_wrap.ts:302:14
at eventLoopTick (ext:core/01_core.js:168:7)
There's just no way around Bun needing to implement full-duplex streaming for both server and client code, without relying on Node.js. The whole "compatible with Node.js" thing, as I see it, has problems for both Bun and Deno. Just stand on your own without constantly trying to be Node.js compatible, to avoid carrying Node.js bugs and implementation decisions, too.
@guest271314 I respect the crafty solutions but how can this help with libraries dependent on gRPC that are broken b/c of this missing support?
I tried the best way I could to help. The result is my contributions here marked as "off-topic". You'll have to roll your own or wait on somebody else to make this so, or use JavaScript runtimes that support full-duplex streaming.
@zachsents You could try using Bun.listen(), which last time I checked does support full-duplex streaming with TLS options: https://github.com/guest271314/telnet-client/blob/user-defined-tcpsocket-controller-web-api/direct-sockets/bun_echo_tcp.js.
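A minimal sketch of that approach, assuming Bun's Bun.listen() socket API (open/data/close handlers) and a self-supplied certificate.pem/certificate.key:
// TLS echo server over a raw TCP socket in Bun.
Bun.listen({
  hostname: "localhost",
  port: 8444,
  tls: {
    key: Bun.file("certificate.key"),
    cert: Bun.file("certificate.pem"),
  },
  socket: {
    open(socket) {
      console.log("client connected");
    },
    data(socket, data) {
      // echo the bytes back uppercased, like the Deno server above
      socket.write(Buffer.from(data).toString().toUpperCase());
    },
    close(socket) {
      console.log("client disconnected");
    },
  },
});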
@zachsents
I respect the crafty solutions but how can this help with libraries dependent on gRPC that are broken b/c of this missing support?
Honestly, more crafty solutions, from all stakeholders. Sort it out. It ain't gonna happen for both server and client by more comments asking here. The same folks asking need to be writing code at the same time, trying to figure it out themselves. After all, the same programmers are gonna be using the code - and filing more bugs. Might as well get ahead of the curve. There are multiple available HTTP/2 and HTTP/3 libraries that can be used. QuickJS doesn't come with a built-in HTTP(S) server. A few folks figured out how to make QuickJS the foundation of server code, from WasmEdge to VMware Labs' Wasm Workers Server, to this little script https://github.com/guest271314/webserver-c/tree/quickjs-webserver. Write some code. Keep notes. Get your hack on. Make it so No. 1. Good luck!
This https://github.com/molnarg/node-http2/blob/master/lib/http.js should help understand what is being requested. In theory we can cobble this together with a base of Bun.listen() in a day or so.
Here's what the API looks like in Node.js.
Notice that Node.js does not use WHATWG Streams directly; we have to use toWeb() from Duplex. Additionally, Node.js doesn't provide the capability to serve a WHATWG Fetch Response() object. Those are areas where the Bun implementation can improve to resemble the Deno implementation (see above at https://github.com/oven-sh/bun/issues/8823#issuecomment-2043977934) more than the Node.js version.
import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";
import { Duplex } from "node:stream";
class Http2Server {
responseInit = {
status: 200,
headers: {
"Cache-Control": "no-cache",
"Content-Type": "text/plain; charset=UTF-8",
"Cross-Origin-Opener-Policy": "unsafe-none",
"Cross-Origin-Embedder-Policy": "unsafe-none",
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Private-Network": "true",
"Access-Control-Allow-Headers": "Access-Control-Request-Private-Network",
"Access-Control-Allow-Methods": "OPTIONS,POST,GET,HEAD,QUERY",
},
};
constructor(key, cert, port) {
// Use the key and cert passed in instead of re-reading the files here.
this.server = createSecureServer({ key, cert });
this.server.on("error", (err) => console.error(err));
this.server.on("connection", (socket) => {
console.log("Socket");
});
this.server.on("request", (request) => {
console.log("Request", request.headers[":authority"], request.method);
});
this.server.listen(port);
}
async *[Symbol.asyncIterator]() {
const fn = async (stream, headers) => {
controller.enqueue({ stream, headers });
};
let controller;
const readable = new ReadableStream({
start(c) {
return (controller = c);
},
});
const reader = readable.getReader();
this.server.on("stream", fn);
while (true) {
const { value, done } = await reader.read();
yield value;
}
}
}
async function handleRequest({ stream, headers }) {
const method = headers[":method"];
console.log("handleRequest");
if (method === "OPTIONS") {
stream.respond({
...server.responseInit.headers,
":status": 204,
});
stream.end();
return;
}
if (method === "POST") {
const { readable, writable } = Duplex.toWeb(stream);
// stream.respond() takes a flat HTTP/2 headers object, not a Response init.
stream.respond({ ":status": 200, ...server.responseInit.headers });
console.log(readable);
return await readable.pipeThrough(new TextDecoderStream())
.pipeThrough(
new TransformStream({
transform(value, c) {
console.log(value);
c.enqueue(value.toUpperCase());
},
async flush() {
console.log("flush");
},
}),
).pipeThrough(new TextEncoderStream()).pipeTo(writable);
}
}
const key = readFileSync("certificate.key");
const cert = readFileSync("certificate.pem");
const server = new Http2Server(key, cert, "8443");
for await (const { stream, headers } of server) {
await handleRequest({ stream, headers });
}
Bun has to implement TextDecoderStream and TextEncoderStream, too, to catch up to Deno and Node.js. We've already covered that Bun doesn't support duplex: "half" for upload streaming with fetch(), so we still will not be able to upload ReadableStreams if HTTP/2 (h2) is not supported by Bun's fetch() implementation.
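In the meantime, minimal stand-ins can be built on TransformStream, which Bun does support; a sketch, not a spec-complete polyfill:
// Rough replacements for TextDecoderStream / TextEncoderStream.
class SimpleTextDecoderStream extends TransformStream {
  constructor(label = "utf-8") {
    const decoder = new TextDecoder(label);
    super({
      transform(chunk, controller) {
        controller.enqueue(decoder.decode(chunk, { stream: true }));
      },
      flush(controller) {
        const tail = decoder.decode();
        if (tail) controller.enqueue(tail);
      },
    });
  }
}
class SimpleTextEncoderStream extends TransformStream {
  constructor() {
    const encoder = new TextEncoder();
    super({
      transform(chunk, controller) {
        controller.enqueue(encoder.encode(chunk));
      },
    });
  }
}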
Here's an example of what can happen when one implementation tries to follow another implementation without rolling their own: implementation details.
Before you configure
Your Cloud Run service must handle requests in HTTP/2 cleartext (h2c) format.
Now, let's see if Node.js supports what Google Cloud says must be supported:
fetch support for HTTP/2 by default #2750
HTTP/2 support in undici is experimental and not enabled by default. Note that we do not support H2, but only HTTP/2 (over TLS), due the necessary protocol selection.
https://github.com/nodejs/undici/issues/2750#issuecomment-2041363510
I am trying to understand the nuance of not supporting H2, but only HTTP/2.
I meant H2C, which is the non-TLS variant.
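For comparison, h2c is what Node's http2.createServer() speaks when no TLS is involved; a minimal sketch of the kind of server Cloud Run's HTTP/2 setting expects:
import { createServer } from "node:http2";
// Cleartext HTTP/2 (h2c): no TLS, so no ALPN negotiation either.
const server = createServer((req, res) => {
  res.writeHead(200, { "content-type": "text/plain; charset=UTF-8" });
  res.end(`served over HTTP ${req.httpVersion}\n`); // "2.0" for h2c clients
});
server.listen(8080);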
Any ETA on this? Our team is waiting for this to be implemented, but given the age of this issue, it doesn't look like it's going to land for another year. We really want to adopt Bun, but the progress is just not encouraging.
If we had a gRPC server for Bun, many enterprise companies would already be using Bun.
Indeed, but for now, either use deno or any other HTTP/2 solution.
Actually, the client part of gRPC shipped in Deno 1.44, released only a few hours ago, so mileage might vary there since the server part is still missing. Looking forward to seeing this in Bun.
Ah, the client, yes. I don't know, but if it's been shipped, it's probably worth taking a look at, at least.
Wait, HTTP/2 is a 9-year-old spec. Is Bun really still stuck on HTTP/1.1 while I'm out here tallying the pros and cons of moving some of my projects to HTTP/3?
That's exactly that ^^'
Actually, the client part of gRPC shipped in Deno 1.44, released only a few hours ago, so mileage might vary there since the server part is still missing. Looking forward to seeing this in Bun.
I'm interested in HTTP/2 support in Bun without tying that support to gRPC, so we can have the capability to full-duplex stream with fetch() using duplex: "half" with bun like we can using node and deno.
bun run -b full_duplex_fetch_test.js
795.849447
Stream closed
deno run -A full_duplex_fetch_test.js
1883.904654
TEST
TEST, AGAIN
Stream closed
node --experimental-default-type=module full_duplex_fetch_test.js
1356.602903
TEST
TEST, AGAIN
Stream closed
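The full_duplex_fetch_test.js above isn't shown in the thread; a hypothetical reconstruction along the lines of the earlier clients (posting two strings to the uppercase echo server and logging what streams back) might look like this:
const { readable, writable } = new TransformStream();
const encoder = new TextEncoder();
const decoder = new TextDecoder();

fetch("https://comfortable-deer-52.deno.dev", {
  method: "POST",
  duplex: "half",
  body: readable.pipeThrough(new TransformStream({
    transform(value, controller) {
      controller.enqueue(encoder.encode(value));
    },
  })),
}).then((r) => {
  // log a timestamp once the response headers arrive, like the outputs above
  console.log(performance.now());
  return r.body.pipeTo(new WritableStream({
    write(value) {
      console.log(decoder.decode(value, { stream: true }));
    },
    close() {
      console.log("Stream closed");
    },
  }));
}).catch(console.error);

const writer = writable.getWriter();
await writer.write("test");
await writer.write("test, again");
await writer.close();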
What is the problem this feature would solve?
As described in #887, HTTP2 support is mandatory for gRPC to work. We appreciate the Bun team's successful implementation of HTTP2 client-side support, which enables connectivity to gRPC servers.
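For reference, a minimal sketch of the client half that already works, assuming the node:http2 client API and a local h2 endpoint at https://localhost:8443 with a trusted certificate:
import { connect } from "node:http2";

const client = connect("https://localhost:8443");
client.on("error", console.error);

const req = client.request({ ":method": "POST", ":path": "/" });
req.setEncoding("utf8");
req.on("data", (chunk) => console.log(chunk)); // echoed response body
req.on("end", () => client.close());
req.end("ping");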
However, on the server side, the issue persists, preventing the operation of gRPC servers. Unfortunately, there seems to be a lack of prioritization for server support, evident from the absence of an ETA and limited developer replies on the issue.
So I assumed that the team determines feature implementation priority based on the upvotes received for an issue. Given that #887 may be deemed partially resolved, it's possible that its upvotes no longer influence the priority of the ongoing server-side concern.
This new issue is raised to emphasize the continued importance of server-side HTTP2 support. We hope to bring attention back to it by collecting upvotes here. Thank you.
What is the feature you are proposing to solve the problem?
Implement HTTP2 server support in Bun.
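In its smallest form, the node:http2 server API being requested looks like this sketch (certificate.key/certificate.pem are placeholders); it runs on Node today:
import { createSecureServer } from "node:http2";
import { readFileSync } from "node:fs";

const server = createSecureServer({
  key: readFileSync("certificate.key"),
  cert: readFileSync("certificate.pem"),
});

server.on("stream", (stream, headers) => {
  stream.respond({ ":status": 200, "content-type": "text/plain; charset=UTF-8" });
  stream.end(`hello over HTTP/2 from ${headers[":path"]}\n`);
});

server.listen(8443);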
What alternatives have you considered?
No response