valyala / fasthttp

Fast HTTP package for Go. Tuned for high performance. Zero memory allocations in hot paths. Up to 10x faster than net/http.
MIT License
21.83k stars · 1.76k forks

HTTP/2 support #144

Open liclac opened 8 years ago

liclac commented 8 years ago

Is this planned at all? And if I was interested in implementing it myself, where would I start?

ernado commented 8 years ago

Hi, @uppfinnarn!

Is this planned at all?

HTTP/2 is in the TODO: https://github.com/valyala/fasthttp/blob/master/TODO

if I was interested in implementing it myself, where would I start?

Because fasthttp implements the HTTP stack from scratch, HTTP/2 support would also mean implementing RFC 7540 from scratch.

So, IMHO you can start by just implementing RFC 7540 with the motto "Tuned for high performance. Zero memory allocations in hot paths." in mind, and trying to reuse the existing low-level pieces (e.g. bytesconv_*.go). It will take a lot of time, effort and research.
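As a tiny illustration of that zero-allocation style, here is a sketch that reuses fasthttp.AppendUint (one of the exported bytesconv helpers) instead of strconv/fmt, which would allocate. The buildContentLength helper is made up for illustration, not part of fasthttp:

```go
package main

import (
	"fmt"

	"github.com/valyala/fasthttp"
)

// buildContentLength appends a "Content-Length: N\r\n" header line to dst
// without allocating, reusing fasthttp's bytesconv helpers instead of
// strconv.Itoa/fmt.Sprintf (which would allocate).
func buildContentLength(dst []byte, n int) []byte {
	dst = append(dst, "Content-Length: "...)
	dst = fasthttp.AppendUint(dst, n) // from bytesconv.go, alloc-free
	return append(dst, "\r\n"...)
}

func main() {
	buf := make([]byte, 0, 64) // reusable buffer, e.g. taken from a sync.Pool
	buf = buildContentLength(buf, 1234)
	fmt.Printf("%s", buf)
}
```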

newtack commented 8 years ago

Are there any estimates as to when this would be available?

akyoto commented 8 years ago

This is a very important feature: HTTP/2 support would make the package faster in real-world applications.

Right now fasthttp might be winning unrealistic benchmarks, but the official Go package supports HTTP/2, which is a big plus in real-world applications.

I'd also be interested in helping fasthttp implement HTTP/2. Maybe take a look at H2O's C implementation, the fastest implementation AFAIK.

DevotionGeo commented 7 years ago

@valyala when will HTTP/2 support come to fasthttp? It will make it the preferred http package for Go.

Thank You!

xgfone commented 7 years ago

@DevotionGeo HTTP/2 is in the TODO, but I get the feeling that there's no one to do it.

xxx7xxxx commented 6 years ago

HTTP/2 is worth implementing.

gnanakeethan commented 6 years ago

Is there any progress on HTTP/2?

dgrr commented 6 years ago

@gnanakeethan Hello. In the github.com/erikdubbelboer/fasthttp repository we are working on improving the fasthttp library. We are now working on implementing HTTP/2.

savsgio commented 6 years ago

@themester Are you still developing HTTP/2 for fasthttp?

savsgio commented 6 years ago

Or is there someone developing it?

dgrr commented 6 years ago

@savsgio I stopped because of exams. I want to continue after next week.

dgrr commented 6 years ago

I created a repo here to develop HTTP/2 for fasthttp, @kirillDanshin.

kirillDanshin commented 6 years ago

@dgrr Awesome! Thanks for your work. Is there any roadmap or current status? When you finish your work, we can embed it in fasthttp and finally support HTTP/2 officially. Also, feel free to ping me if you have any questions.

dgrr commented 6 years ago

@kirillDanshin we talked about embedding it in fasthttp, and in the end I decided not to do it. fasthttp has good support for HTTP/1.x. Embedding HTTP/2 would mean changing the API internally, forking methods to support both HTTP/1.x and HTTP/2, and it would make fasthttp heavier and probably slower in a few cases. This is because of the framing model that HTTP/2 follows: it is not just a header and a body. You can see my rough work here.
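For context, every HTTP/2 frame starts with the fixed 9-octet header defined in RFC 7540 §4.1. A rough Go sketch of parsing it, purely for illustration (frameHeader/parseFrameHeader are made-up names, not part of fasthttp or the http2 repo):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frameHeader mirrors the fixed 9-octet frame header from RFC 7540 §4.1.
type frameHeader struct {
	Length   uint32 // 24-bit payload length
	Type     byte   // e.g. 0x0 DATA, 0x1 HEADERS
	Flags    byte
	StreamID uint32 // 31 bits; 0 means the connection itself
}

func parseFrameHeader(b [9]byte) frameHeader {
	return frameHeader{
		Length:   uint32(b[0])<<16 | uint32(b[1])<<8 | uint32(b[2]),
		Type:     b[3],
		Flags:    b[4],
		StreamID: binary.BigEndian.Uint32(b[5:9]) & 0x7fffffff, // clear the reserved bit
	}
}

func main() {
	// A HEADERS frame header for stream 1 with a 16-byte payload and END_HEADERS set.
	raw := [9]byte{0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01}
	fmt.Printf("%+v\n", parseFrameHeader(raw))
}
```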

kirillDanshin commented 6 years ago

@dgrr I think we should at least try to benchmark an embedded version and see if we can find a better solution for this.

dgrr commented 6 years ago

Okay. Let me finish fasthttp2 and we will try to embed it in the original repo.

kirillDanshin commented 6 years ago

@dgrr huge shout-out to you for that work 👍

dgrr commented 5 years ago

Is anyone interested in participating in this project?

erikdubbelboer commented 5 years ago

I'm afraid I currently don't do anything with http2. I see you have continued working on your implementation. How is that going?

dgrr commented 5 years ago

I am blocked on it. I tried to implement it in my repo, but the main problem is handling the RequestCtx header fields.

HTTP/2 follows a frame-by-frame schema (like WebSocket). There are many types of frames. The two main types are HEADERS and DATA, which in HTTP/1.x terms are the headers and the body of an HTTP request/response. HTTP/2 is also a multiplexed protocol, which allows handling multiple streams (multiple requests/responses) over the same connection. I thought about implementing this with multiple RequestCtx's (one RequestCtx per stream), but I have trouble handling RequestCtx from another (third-party) package. So I am trying to implement HTTP/2 in fasthttp natively, but there are so many changes to make that this will be a huge change. I think I can create an http2 branch in this repo and make pull requests along the way, so you can review every commit I make.

If you agree, I can commit changes in my fasthttp fork and start making pull requests to another branch (not master; http2?).
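To make the "one RequestCtx per stream" idea above concrete, here is a rough sketch. The connState type and its bookkeeping are made up for illustration; it also glosses over the RequestCtx handling issues from a third-party package mentioned above:

```go
// Sketch only: fasthttp exposes no such API today.
package http2sketch

import (
	"sync"

	"github.com/valyala/fasthttp"
)

// connState tracks the per-stream contexts of a single multiplexed connection.
type connState struct {
	mu      sync.Mutex
	streams map[uint32]*fasthttp.RequestCtx // one RequestCtx per HTTP/2 stream
	pool    sync.Pool                       // recycles RequestCtx values between streams
}

func newConnState() *connState {
	c := &connState{streams: make(map[uint32]*fasthttp.RequestCtx)}
	c.pool.New = func() interface{} { return new(fasthttp.RequestCtx) }
	return c
}

// ctxFor returns (creating if needed) the RequestCtx associated with a stream,
// so HEADERS and DATA frames arriving for the same stream fill the same request.
func (c *connState) ctxFor(streamID uint32) *fasthttp.RequestCtx {
	c.mu.Lock()
	defer c.mu.Unlock()
	ctx, ok := c.streams[streamID]
	if !ok {
		ctx = c.pool.Get().(*fasthttp.RequestCtx)
		c.streams[streamID] = ctx
	}
	return ctx
}

// finish releases a stream's RequestCtx once the response has been written.
func (c *connState) finish(streamID uint32) {
	c.mu.Lock()
	ctx, ok := c.streams[streamID]
	delete(c.streams, streamID)
	c.mu.Unlock()
	if ok {
		ctx.Request.Reset()
		ctx.Response.Reset()
		c.pool.Put(ctx)
	}
}
```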

erikdubbelboer commented 5 years ago

I think if you implement HTTP/2 in the fasthttp repo instead of a separate one, it makes the most sense to have it be one big pull request/commit. So working on a separate http2 branch in this repo and making pull requests to that branch sounds like a good idea. When everything works, we can rebase on master and squash into one big commit.

skiloop commented 5 years ago

Is anyone interested in participating in this project?

@dgrr I would like to participate! How can I help? How is it going? Do you have a plan for the project?

dgrr commented 5 years ago

Hello @skiloop

Right now it's stopped because I am a little busy with work and such. But you can take a look here: https://github.com/dgrr/http2. I successfully developed an adapter which is able to read the request but not yet send a response. My plan is to adopt HTTP/2 natively in fasthttp, but for the moment I first want to develop a package that can be used for HTTP/2 independently.

To summarize:

  1. Finish the http2 package.
  2. Make the http2 package work with fasthttp via an adapter.
  3. Copy that package's functionality into fasthttp so it can handle HTTP/2 requests natively.

cipriancraciun commented 5 years ago

Lately I've been working on a small "pet project" regarding a high performance static server, and I thought that for some use-cases HTTP/2 would be a perfect match (i.e. lots of small requests).

However, given the complexity of HTTP/2, I would lean more towards keeping it out of the "main" fasthttp project, mainly because the current implementation is so HTTP/1-oriented that retrofitting it to support both HTTP/1 and HTTP/2 would have a large impact.

I wonder if it wouldn't be simpler to implement a fast HTTP/2-to-HTTP/1 gateway, which would also benefit other projects.

cipriancraciun commented 5 years ago

Today I wondered "what if":

Granted, it will be sub-optimal, but it would still allow one to have HTTP/2 and use fasthttp for HTTP/1.1 on TLS connections.


I've tried implementing this idea:
https://github.com/volution/kawipiko/blob/b6426f56552d9ec3583491d532df89788cb16f14/sources/cmd/server/server.go#L1271-L1327
https://github.com/volution/kawipiko/blob/b6426f56552d9ec3583491d532df89788cb16f14/sources/cmd/server/server.go#L1374
https://github.com/volution/kawipiko/blob/b6426f56552d9ec3583491d532df89788cb16f14/sources/cmd/server/server.go#L1387
(Ignore the rest of the code, as it's in quite a bad shape due to rapid prototyping...)

And apparently it works flawlessly, by using curl --http1.1 or curl --http2.


Any thoughts about this proposed "hack"?

Perhaps we could implement some of this code as part of fasthttp?
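For reference, here is a minimal, self-contained sketch of that dispatch: ALPN decides per connection whether golang.org/x/net/http2 or fasthttp handles it. The handler bodies, port, and certificate paths are placeholders; the real code is in the kawipiko links above.

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"

	"github.com/valyala/fasthttp"
	"golang.org/x/net/http2"
)

func main() {
	cert, err := tls.LoadX509KeyPair("server.crt", "server.key")
	if err != nil {
		log.Fatal(err)
	}
	cfg := &tls.Config{
		Certificates: []tls.Certificate{cert},
		NextProtos:   []string{"h2", "http/1.1"}, // advertise both protocols via ALPN
	}

	fastSrv := &fasthttp.Server{Handler: func(ctx *fasthttp.RequestCtx) {
		ctx.SetBodyString("served by fasthttp over HTTP/1.1\n")
	}}
	h2Srv := &http2.Server{}
	h2Handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("served by net/http over HTTP/2\n"))
	})

	ln, err := tls.Listen("tcp", ":8443", cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(conn *tls.Conn) {
			if err := conn.Handshake(); err != nil { // ALPN is settled during the handshake
				conn.Close()
				return
			}
			if conn.ConnectionState().NegotiatedProtocol == "h2" {
				h2Srv.ServeConn(conn, &http2.ServeConnOpts{Handler: h2Handler})
			} else {
				fastSrv.ServeConn(conn) // plain HTTP/1.1 inside the TLS tunnel
			}
		}(conn.(*tls.Conn))
	}
}
```

This can be exercised with curl --http1.1 and curl --http2 against https://localhost:8443/.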

dgrr commented 5 years ago

@cipriancraciun I will expand on this comment later, but I just want to let you know that I have already successfully developed something like a gateway from HTTP/2 to fasthttp. The only concern I have with that approach is the performance. You can check it here.

cipriancraciun commented 5 years ago

Out of curiosity does anyone have any "numbers" (i.e. benchmarks, experiments, etc.) regarding the performance impact of HTTP/2 vs HTTP/1.1 (both within TLS) on the server-side? (I know there are real benefits for HTTP/2 on the client side, especially in high latency, resource heavy sites; however I failed to find any server side experiments.)

I have the faint feeling that HTTP/2 is actually meant for large CDN providers (e.g. Google, AWS, CloudFlare, etc.) that can trade off some server-side performance (how much?) for some client-side benefits... I remember reading the RFC when it came out, and it was mind-boggling, and that was only the "encoding" part, as the semantics are the same as for HTTP/1.1...

@dgrr As a side note, "how much" of HTTP/2 should fasthttp support? Do we stop at server-side push? Do we include prioritization? Do we stop at only the semantics that map onto HTTP/1.1 (i.e. excluding "streams" and other HTTP/2-only features)?

dgrr commented 5 years ago

About the "how much", I think we should support the minimum requirements which reading the request using a header and a body frame and returning a response using the same frames. Server-side-push it's more difficult to be developed in a package like this, I think that task is a matter of another framework like gramework or something like this. And no, I don't think we should support priorities, but that point is something to think about because it depends on how HTTP/2 has been implemented in a package and how do you handle the responses in your package.

cipriancraciun commented 5 years ago

@dgrr My question about "how much should we support" is actually a consequence of the question "what is the purpose of HTTP/2 in fasthttp?"

Because if we state that "fasthttp should just 'speak' the basic HTTP/2 protocol as an alternative to HTTP/1.1", then I think it's almost pointless, simply because I don't think there will be a large advantage (for well-designed websites) over HTTP/1.1. (And I say "websites", because those are the main "clients" for which HTTP/2 would give an advantage.)

If, however, we state that "fasthttp should implement HTTP/2 in order to leverage the various HTTP/2-only features that would allow better website performance", then I would say that server push is a requirement (although one can implement an alternative using Link headers).

cipriancraciun commented 5 years ago

As a follow-up, last week I made a small experiment with a single page that loads ~1400 small images, and compared HTTP/2 vs HTTP/1.1 in both Firefox and Chromium; additionally I also experimented with 8 domain shards. The following is a small recording of that page loading in Chrome with sharding, for both HTTP/2 and HTTP/1.1 (on the same page there are other screen recordings for the other cases): https://notes.volution.ro/v1/2019/08/notes/e8700e9a/#side-by-side-with-8-shards

Indeed HTTP/2 does have advantages over HTTP/1.1 in such an extreme scenario; however using 8 domain shards (and the additional DNS pre-fetch and HTTP pre-connect tricks), even HTTP/1.1 comes close to HTTP/2, so that from a "user experience" point of view they "look and feel" the same.

Therefore, as said earlier, if we just want to implement HTTP/2 as a "framing alternative", then I personally would be reluctant to deploy it, given how much complexity it adds and how many risks it opens, especially for such a "young" implementation relative to the overall complexity that HTTP/2 brings. (Here I'm hinting at the HTTP/2 vulnerabilities disclosed last week, in which vulnerable servers did correctly implement the HTTP/2 semantics, but failed to take into account how those features might be abused by malicious clients...)

@dgrr As such my question, in follow-up to my previous comment, is "what do we offer in terms of HTTP/2, so that it is worth the added complexity and risks"?

ernado commented 5 years ago

Note that nginx does not support HTTP/2 as a protocol for reverse proxying:

There are no plans to implement HTTP/2 support in the proxy module in the foreseeable future

There is almost no sense to implement it, as the main HTTP/2 benefit is that it allows multiplexing many requests within a single connection, thus [almost] removing the limit on the number of simultaneous requests - and there is no such limit when talking to your own backends. Moreover, things may even become worse when using HTTP/2 to backends, due to a single TCP connection being used instead of multiple ones.

So, if you put your site behind services like Cloudflare, you will be getting HTTP/1.1. Otherwise you should do the TLS termination too, and I'm not sure that it is not a bottleneck. It seems like the whole crypto/tls would have to be re-implemented with a zero-allocation approach to gain any benefit from using fasthttp as a TLS termination point; please correct me if I'm wrong.

Also note that HTTP/3 (aka QUIC) is approaching, and it is probably reasonable to consider starting from QUIC.

cipriancraciun commented 5 years ago

@ernado Otherwise you should do the TLS termination too, and I'm not sure that it is not a bottleneck. It seems like the whole crypto/tls would have to be re-implemented with a zero-allocation approach to gain any benefit from using fasthttp as a TLS termination point; please correct me if I'm wrong.

Not quite; there is the option of H2C (i.e. HTTP/2 in plain text, without the TLS layer), which, although non-standard, is a good option for proxy-to-backend communication.
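(For illustration only: with the standard-library ecosystem, H2C looks roughly like the sketch below, using golang.org/x/net/http2/h2c. fasthttp has no equivalent of this today, so this is not fasthttp code; the port is a placeholder.)

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// r.Proto is "HTTP/2.0" for H2C requests and "HTTP/1.1" otherwise.
		w.Write([]byte("proto: " + r.Proto + "\n"))
	})

	// h2c.NewHandler accepts plaintext HTTP/2 (prior knowledge or via the
	// "Upgrade: h2c" header) while still serving HTTP/1.1 on the same port.
	srv := &http.Server{
		Addr:    ":8080",
		Handler: h2c.NewHandler(handler, &http2.Server{}),
	}
	log.Fatal(srv.ListenAndServe())
}
```

It can be exercised with curl --http2-prior-knowledge http://localhost:8080/.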


However, I have the feeling that each of us has different use-cases in mind when one speaks about HTTP/2.

Therefore let's first list which HTTP/2 use-cases would make sense for fasthttp, which HTTP/2 features each of them requires, and which ones are worth pursuing.

So, here it goes; yes means it is worth implementing HTTP/2:

Therefore I would say that only the use-case of a directly client-facing website server would actually make use of HTTP/2; moreover, even in that case it would make sense only in conjunction with some other HTTP/2 features like server push or prioritization.


Also note that HTTP/3 aka QUIC is approaching and probably it is reasonable to consider starting from QUIC.

(I have the feeling that QUIC is even more the purview of large CDN and service providers...)

Perhaps it makes sense to implement both HTTP/2 and HTTP/3 in tandem, because (although I haven't read the HTTP/3 draft) I think they have quite a lot in common...

ernado commented 5 years ago

@cipriancraciun great summary!

fasthttp is a client-facing server, serving modern websites, with many resources or heavy AJAX requests; HTTP/2 could be used to reduce latency, by leveraging TCP connection multiplexing, request pipelining, and other features like prioritization (established by client or server), server-side push, etc.;

I fully agree on this use-case.

But I'm currently not sure that TLS termination won't kill the performance benefits that we are getting from fasthttp. It seems like all benchmarks assume no TLS termination, e.g. the TechEmpower one. Browsers refused to implement H2C, so TLS will be mandatory, and I'm afraid that we are (and will be) bottlenecked by crypto/tls, especially with RSA keys. I'm quite familiar with the crypto/tls implementation (I was trying to implement DTLS) and it is pretty sub-optimal in terms of allocations.

Are there any TLS benchmarks for fasthttp to measure the current overhead?
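(I'm not aware of any published numbers. A minimal way to measure it: run the same handler over plain TCP and over TLS and compare with a load generator. A sketch, with placeholder certificate paths and ports:)

```go
package main

import (
	"log"

	"github.com/valyala/fasthttp"
)

func main() {
	handler := func(ctx *fasthttp.RequestCtx) {
		ctx.SetBodyString("hello")
	}

	// Plain HTTP baseline on :8080 ...
	go func() {
		log.Fatal(fasthttp.ListenAndServe(":8080", handler))
	}()

	// ... and the same handler behind TLS on :8443; the throughput difference
	// between the two ports is the TLS termination overhead.
	log.Fatal(fasthttp.ListenAndServeTLS(":8443", "server.crt", "server.key", handler))
}
```

Hitting both ports with e.g. wrk or bombardier and comparing requests/sec and CPU usage would show the crypto/tls overhead.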


Perhaps it makes sense to implement both HTTP/2 and HTTP/3 in tandem, because (although I haven't read the HTTP/3 draft) I think they have quite a lot in common...

HTTP/3 will be UDP-based, so I'm not sure that we will be able to share a lot of code.

cipriancraciun commented 5 years ago

@cipriancraciun great summary! But I'm currently not sure that TLS termination will not kill our performance benefits that we are getting from fasthttp.

@ernado I'm certain that TLS termination will add a lot of overhead in terms of CPU. However we can't help that; it's the same overhead for both HTTP/1.1 and HTTP/2.

Perhaps the "optimal" deployment scenario would be something like this:

Alternatively one could use another library, with Go bindings, that handles the TLS.

In the end I think that fasthttp must support H2C in addition to the actual HTTP/2 over TLS, to allow "efficient" deployments.


Perhaps it makes sense to implement both HTTP/2 and HTTP/3 in tandem, because (although I haven't read the HTTP/3 draft) I think they have quite a lot in common...

HTTP/3 will be UDP-based, so I'm not sure that we will be able to share a lot of code.

I was referring mainly to the structures and algorithms required by the HTTP/2 and HTTP/3 request/response model and payloads, not the actual transport.

inductor commented 4 years ago

There has been no progress on this project for more than half a year now :c

dgrr commented 3 years ago

Hello. I know it's been a long time, but I just wanted to tell you all that I have been working on HTTP/2 a little. Example here.

Why did I pick up that old work after 2 years? Because the HTTP/2 library in Go's standard library sucks quite a lot. fasthttp (using HTTP/1.1) is faster than Go's HTTP/2. I tested that against a server made with net/http. That's ridiculous.

I want to create a full implementation (so client and server). If someone wants to help, it would be appreciated!

renanbastos93 commented 3 years ago

Do we have a planned date for HTTP/2 on fasthttp?

dgrr commented 3 years ago

For now I am a bit busy. I'll continue working in a few weeks. The client is my priority number one. I'll do the server later.

renanbastos93 commented 3 years ago

For now I am a bit busy. I'll continue working in a few weeks. The client is my priority number one. I'll do the server later.

Alright. Do you need help? You can call on me for anything.

dgrr commented 3 years ago

@renanbastos93 you can pick whichever issue you want and start a discussion or start solving it.

efectn commented 3 years ago

Do you have any plan to release a stable HTTP/2 adapter?

wxpjimmy commented 2 years ago

Is there any ETA on this? It seems there's been no progress on this for a while...

ZQun commented 2 years ago

@dgrr Do you have any plan to release a stable HTTP/2 adapter?

G2G2G2G commented 2 years ago

Just give up and add QUIC / HTTP/3... HTTP/2 wasn't that great anyway.

https://github.com/lucas-clemente/quic-go

https://interop.seemann.io/

gaby commented 2 years ago

This issue has been open for over 6 years now; is there a concrete plan for how to address this?

erikdubbelboer commented 2 years ago

I'm afraid not. Most features of http2 aren't relevant for the use cases fasthttp is meant for.

cipriancraciun commented 2 years ago

Having experimented in my kawipiko static server (based on fasthttp) with both HTTP/2 (based on Go's implementation) and HTTP/3 (based on an experimental available library), I continue to believe that HTTP/2 and HTTP/3 are perhaps a job for some other component of the infrastructure, be it a CDN or even a local HTTP router / load balancer such as HAProxy.

In my experiments the major issue with Go-based HTTP servers and performance is mainly memory allocation overhead (or, as it's called in Go, "heap escape"); with fasthttp you can manage to implement a server that has almost zero allocations (although you'll have to profile the heck out of it); with other HTTP implementations (like Go's) it's harder. Now I imagine that with the complexities of HTTP/2 and HTTP/3, any implementation that also tries to eliminate most allocations would be extremely difficult, so any potential gains from HTTP/2 and HTTP/3 would be diminished by the performance lost to memory allocation overhead.
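To illustrate the "almost zero allocations" point, here is a minimal fasthttp handler sketch (not taken from kawipiko) that stays allocation-free on the hot path; escape analysis or heap profiling shows whether anything per-request escapes:

```go
package main

import (
	"log"

	"github.com/valyala/fasthttp"
)

var body = []byte("hello world") // allocated once, outside the hot path

func handler(ctx *fasthttp.RequestCtx) {
	// SetBodyRaw keeps a reference to the byte slice instead of copying it,
	// and SetContentType takes a constant string, so nothing new escapes to
	// the heap per request.
	ctx.SetContentType("text/plain")
	ctx.Response.SetBodyRaw(body)
}

func main() {
	// `go build -gcflags='-m'` (or profiling with `go tool pprof -alloc_objects`)
	// shows whether anything inside the handler escapes to the heap.
	log.Fatal(fasthttp.ListenAndServe(":8080", handler))
}
```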

kolinfluence commented 1 year ago

HTTP/2 is supported by hertz, but a lot of things are not compatible. I tried porting from fasthttp to hertz for a week and am still working on some areas.

They have ideas on HTTP/2, which is great, but it means changing a few sections of fasthttp. However, fasthttp has certain mechanisms which make things more mature. Look into hertz.

bryanvaz commented 1 year ago

I'm afraid not. Most features of http2 aren't relevant for the use cases fasthttp is meant for.

@erikdubbelboer, when you get a second, can you quickly clarify just three things for any downstream project that might be waiting on HTTP2 support from fasthttp (I know you've mentioned it sporadically in this and other issues, but I want to centralize and confirm the answers, feel free to just provide a quick Yes/No if I've captured your reasoning correctly):

1) To confirm: there are currently no plans to include HTTP/2 support in fasthttp, and most likely never will be, based on the use case for fasthttp and the new features of HTTP/2 (and even HTTP/3)?
2) The reason the use case does not align is that fasthttp is meant to be a "fast implementation of HTTP/1.1", and not a general "fast HTTP implementation" supporting all future revisions of the protocol.
3) Anyone who wants to leverage the framework of fasthttp but offer HTTP/2 or HTTP/3 should either fork and extend the fasthttp code, or look at some related packages - e.g. dgrr/http2, which is linked in the fasthttp README.

Motivation: HTTP/2 support is brought up about once every 2 months in downstream webserver projects, e.g. gofiber/fiber, and in a few other projects. The spike in HTTP/2 interest is mainly due to the increased popularity and adoption of gRPC, and now HTTP/3 & QUIC being enabled by default in all major browsers as of March 2023 (Safari was the last holdout). The standing position within these projects is to wait for fasthttp to implement HTTP/2, at which point those downstream projects would have native support for HTTP/2. In particular, this specific issue (#144) is cited as the "work" that is being done to implement HTTP/2 (which is obviously not true).

If the answer to all 3 questions above is "YES", should we update the README to clarify that no work is being done on HTTP/2 within the fasthttp project? The README is currently ambiguous and implies that work is being done to integrate HTTP/2 into fasthttp, when in actuality that work, if ever completed, would exist as a separate project (referring to fasthttp/http2) - I only realized this after digging through the http2 project and all the issues around HTTP/2.

Cheers, Bryan

dgrr commented 1 year ago

Hello @bryanvaz. Thanks for your comment. Supporting HTTP/2 is possible, but it requires a LOT of work to develop an HTTP/2 framework. I did start implementing the protocol myself and later got some help developing tests and fixing a few things. But later I realized that the project is bigger than myself. It requires support for browser engines other than Chromium's (Firefox doesn't work with the http2 library, for example). I did get offered payment to do it, but I ultimately decided not to take it. The main reason was that I was out of time; I have projects of my own that require attention. Nowadays I don't see an incentive for myself because I no longer use Go on a daily basis. And if I ever do, I might not use fasthttp, and when I do, I use fasthttp for very straightforward services like handling bid requests from SSPs, etc.

Work could be done to fork net/http2 and adapt it to fasthttp. But then more work would need to be added to the pile to support gRPC, i.e. forking the compiler so that it generates fasthttp code instead of net/http code. And what is the actual incentive? A few microseconds saved per request? Memory-wise it's going to be mostly the same: you need a goroutine per connection and maybe another goroutine to handle the HTTP/2 streams. If you are using Go, generally you don't care about microseconds. And, on top of that, who is going to use HTTP/2 without gRPC support? Is it worth the effort? To be honest, it took me less time to learn Rust and how frameworks like tokio work than to develop the http2 library (and it is not fully finished!). I'd say that if the community (go-fiber mainly) is willing to develop an HTTP/2 library, PRs are welcome in fasthttp/http2, and also in my repo, but I might not be able to review as fast as other people could. They could also create their own repo. It comes down to this: if there's interest, someone will do it. If nobody has done it, then there's probably no real advantage over net/http2.

Maybe the explanation is too long, but I think it helps.