einthusan opened this issue 10 years ago
You're right, `ServeFiles` doesn't give you access to the `http.ResponseWriter` for setting its headers. It would be nice to have a third parameter (like `maxAge uint32`) for `ServeFiles` to set the `Cache-Control` header.
But you can use the same short logic that the `ServeFiles` method uses:
```go
package main

import (
	"github.com/julienschmidt/httprouter"
	"net/http"
)

func main() {
	router := httprouter.New()
	fileServer := http.FileServer(http.Dir("public"))

	router.GET("/static/*filepath", func(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
		w.Header().Set("Vary", "Accept-Encoding")
		w.Header().Set("Cache-Control", "public, max-age=7776000")
		r.URL.Path = p.ByName("filepath")
		fileServer.ServeHTTP(w, r)
	})

	http.ListenAndServe(":8080", router)
}
```
@WHIT3 Yes, I'm already doing something like this for now, but just wanted to raise the issue. It seems like the developer does not have time for this package anymore; not sure if he would even get time to look at a pull request.
Hi @einthusan,
I must admit that I currently don't have much time for coding, as I am studying abroad in the Philippines until next April. This package is still going to be maintained, though.
@WHIT3's solution is probably the best for now. `ServeFiles` is only meant as a shortcut. But it might make sense to add another function which allows passing a Header struct or a handler function as a parameter.
@julienschmidt Thanks for the response. Yes, currently I am using a workaround. Great work thus far.
I need something like that in order to use gzipped assets. So maybe it would be nice to have a more generic way to set it in the API.
My current idea is to provide a second function which allows passing a filter / handler / callback function. Something like the following:
```go
func serveFilesFilter(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
	w.Header().Set("Vary", "Accept-Encoding")
	w.Header().Set("Cache-Control", "public, max-age=7776000")
}

router.ServeFiles_New("/src/*filepath", http.Dir("/var/www"), serveFilesFilter)
```
Comments? Naming suggestions?
A `ServeFiles`-like function is only used for serving files directly from the provided `http.Dir`, so:

- `http.ResponseWriter`'s body: full access to it can be misleading and error-prone. I would just provide the `http.Header` (given by the `w.Header()` function) instead of the whole `http.ResponseWriter`.
- `*http.Request` could be useful: `r.Header.Get("Accept-Encoding")` would be interesting if the body were writable, but it's not.
- `httprouter.Params` could be useful: only the `filepath` parameter could be used for logging or something like that.

Here is my proposition:
```go
func fileCacheSetter(h http.Header, fp string) {
	log.Printf("Requested file path: %s\n", fp)
	h.Set("Vary", "Accept-Encoding")
	h.Set("Cache-Control", "public, max-age=7776000")
}

router.ServeHeadedFiles("/src/*filepath", http.Dir("/var/www"), fileCacheSetter)
```
Or, even better (for me), you could just provide a "one-line magic" function with built-in gzipping and cache header setting for a given max-age:

```go
router.ServeStaticFiles("/src/*filepath", http.Dir("/var/www"), 7776000)
```
Please, no "one-line magic" functions. We use the cgo-based gzip package (youtube/vitess), not the standard Go gzip package. What package would httprouter use? Better to leave that out of the equation; it would be too restrictive to force magic, or it would result in a large API footprint.
Things to consider:

- `"/src/:buster/*filepath"`, where `buster` might be an app version (see the sketch after this list).
- It would be better to use `func serveFilesFilter(w http.ResponseWriter, r *http.Request, ps httprouter.Params)`, since it will take care of unforeseeable use cases.
- As for naming: `router.ServeFilesFiltered`.
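For illustration, here is a minimal sketch of how the cache-busting idea from the first point can already be handled with the current workaround; the route, directory and max-age value are placeholders, not a proposed API:

```go
package main

import (
	"net/http"

	"github.com/julienschmidt/httprouter"
)

func main() {
	router := httprouter.New()
	fileServer := http.FileServer(http.Dir("/var/www"))

	// The :buster segment (e.g. an app version) only makes the URL unique per
	// release; it is ignored when resolving the file, which makes a long
	// max-age safe because every deploy produces new URLs.
	router.GET("/src/:buster/*filepath", func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		w.Header().Set("Vary", "Accept-Encoding")
		w.Header().Set("Cache-Control", "public, max-age=7776000")
		r.URL.Path = ps.ByName("filepath")
		fileServer.ServeHTTP(w, r)
	})

	http.ListenAndServe(":8080", router)
}
```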
We all know there are a lot of different file-serving situations. We can't provide an omniscient solution, and because of that, any function like `ServeFiles` (without full control over the response head and body) will always be somewhat "magical" and opinionated.
It's exactly like Go's built-in `http.ServeFile`: that function copies the requested file content into the `http.ResponseWriter` and redirects `/index.html` to `/`, with zero configuration.
It's opinionated (and almost useless in production because it has too few features).
So we have 2 options:

A `ServeFile`-like func, but it will require making some choices and solving the most common needs for classic static content:

- `Vary: Accept-Encoding`
- `Cache-Control` with a time setting (and maybe adapted to `Content-Type`)
- `ETag` and/or `Last-Modified` (see the sketch after this list)
- GZIP (with `compress/gzip`, no external dependencies) if `Accept-Encoding: gzip`
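As an illustration of the `ETag`/`Last-Modified` point, the standard library's `http.ServeContent` already handles `Last-Modified`/`If-Modified-Since` (and `Range` requests) when it is given the file's modification time, so a handler could lean on it. A rough sketch with made-up names and a hard-coded max-age, not a proposed API:

```go
package main

import (
	"net/http"

	"github.com/julienschmidt/httprouter"
)

// serveWithLastModified serves files from root and lets http.ServeContent
// answer conditional requests (If-Modified-Since) based on the file's mtime.
func serveWithLastModified(root http.FileSystem) httprouter.Handle {
	return func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		f, err := root.Open(ps.ByName("filepath"))
		if err != nil {
			http.NotFound(w, r)
			return
		}
		defer f.Close()

		fi, err := f.Stat()
		if err != nil || fi.IsDir() {
			http.NotFound(w, r)
			return
		}

		w.Header().Set("Cache-Control", "public, max-age=7776000")
		http.ServeContent(w, r, fi.Name(), fi.ModTime(), f)
	}
}

func main() {
	router := httprouter.New()
	router.GET("/static/*filepath", serveWithLastModified(http.Dir("/var/www")))
	http.ListenAndServe(":8080", router)
}
```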
In the end, I think the questions are "What background features does this new function need to provide, and what needs to be configurable?" or… "Do we really need a new `ServeFiles` function?".
I would say the question is "Do we really need a new `ServeFiles` function?", and my answer is yes.
The current one is just too limited, and the workaround is so much more code. Mind you, the workaround shown above is even missing some lines of code:
```go
if len(path) < 10 || path[len(path)-10:] != "/*filepath" {
	panic("path must end with /*filepath")
}
```
The workaround requires too much heavy lifting and is complicated for beginners to Go. They just want a simple file server to begin with, and later want to be able to expand their feature set as their application grows. (A full version of the workaround is sketched below.)
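For completeness, here is roughly what the full workaround looks like once the sanity check is included; `serveCachedFiles` is a hypothetical helper name, not part of httprouter:

```go
package static

import (
	"net/http"

	"github.com/julienschmidt/httprouter"
)

// serveCachedFiles is the full workaround in one place: the same sanity check
// httprouter's ServeFiles performs, plus the extra caching headers.
// Usage: serveCachedFiles(router, "/static/*filepath", http.Dir("/var/www"))
func serveCachedFiles(router *httprouter.Router, path string, root http.FileSystem) {
	if len(path) < 10 || path[len(path)-10:] != "/*filepath" {
		panic("path must end with /*filepath")
	}

	fileServer := http.FileServer(root)

	router.GET(path, func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		w.Header().Set("Vary", "Accept-Encoding")
		w.Header().Set("Cache-Control", "public, max-age=7776000")
		r.URL.Path = ps.ByName("filepath")
		fileServer.ServeHTTP(w, r)
	})
}
```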
If we make the last param optional, using the trick below, I think it would be better, as it wouldn't need another name and it wouldn't break backwards compatibility:

```go
func ServeFiles(path string, root http.FileSystem, filter ...Handle)
```

Why can't we just do something like this? It's so much simpler, especially if an example like the following is provided; it's more easily understandable.
```go
package main

import (
	"net/http"

	"github.com/julienschmidt/httprouter"
)

func main() {
	router := httprouter.New()

	// simple file server
	router.ServeFiles("/src/*filepath", http.Dir("/var/www"))

	// custom file server
	router.ServeFiles("/src/*filepath", http.Dir("/var/www"), func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		w.Header().Set("Vary", "Accept-Encoding")
		w.Header().Set("Cache-Control", "public, max-age=7776000")
	})

	http.ListenAndServe(":8080", router)
}
```
Now, if you want more features, the package can provide some defaults:

```go
// custom file server
router.ServeFiles("/src/*filepath", http.Dir("/var/www"), httprouter.BasicFilter(3*time.Hour), httprouter.GzipFilter(9), MyLessFilter())
```

or, more neatly written:

```go
// custom file server
router.ServeFiles("/src/*filepath", http.Dir("/var/www"),
	httprouter.BasicFilter(3*time.Hour),
	httprouter.GzipFilter(9),
	MyLessFilter(),
)
```
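Under that proposal, a header-only filter constructor like the hypothetical `BasicFilter` could be as small as this (a sketch of the idea, not an existing httprouter API; it assumes the variadic `filter ...Handle` signature suggested above):

```go
// BasicFilter returns a filter that sets conservative caching headers for the
// given max-age. It only touches headers, so it composes with other filters.
func BasicFilter(maxAge time.Duration) httprouter.Handle {
	return func(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
		w.Header().Set("Vary", "Accept-Encoding")
		w.Header().Set("Cache-Control", fmt.Sprintf("public, max-age=%d", int(maxAge.Seconds())))
	}
}
```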
A patched `ServeFiles` seems like a good idea:

```go
func ServeFiles(path string, root http.FileSystem, filter ...Handle)
```

But could you explain where the response body would be written?

'Cause with…

```go
// simple file server
router.ServeFiles("/src/*filepath", http.Dir("/var/www"))
```

…the body would be written in `httprouter.ServeFiles`.

But with…

```go
// custom file server
router.ServeFiles("/src/*filepath", http.Dir("/var/www"), httprouter.BasicFilter(3*time.Hour), httprouter.GzipFilter(9), MyLessFilter())
```

…the body would be written in the `MyLessFilter()` and `httprouter.GzipFilter(9)` filters (and the order is significant), NOT in `httprouter.ServeFiles` like previously.

With multiple writing situations, the `ServeFiles` func will always need to check whether the response writer has already been used or not.

And if these filters only have the classic 3 arguments (`w http.ResponseWriter, r *http.Request, ps httprouter.Params`), you will need to manually open the file from the `filepath` param to process it.

In the end, you don't win much over the current workaround.

Also, with multiple filters that pass data from one to another (CoffeeScript -> UglifyJS -> GZIP, for example), we can't directly use the `http.ResponseWriter`; we need a middleware system, and "httprouter" becomes "httpmicroframework". :)
Let me know if you need an example or not, but let me try to be quick about it.
You would pass in a "fake" `w` which is just a `bytes.Buffer`. Each filter would be responsible for draining the fake `w` using `io.Copy` into a temp buffer, doing its part of the work on it, and then performing another `io.Copy` to re-fill the fake `w`.
This passing around of the fake `w` should allow the package to retain control and ensure that only it has the ability to write to the actual client using the "real" `w`.
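A minimal sketch of that buffer-backed "fake" writer, just to make the idea concrete. The names (`bufferWriter`, `gzipFilter`, `run`) are invented for illustration, the gzip filter uses the standard `compress/gzip` purely as an example, and none of this is a proposed httprouter API:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"io"
	"net/http"
)

// bufferWriter satisfies http.ResponseWriter but collects the body in a
// bytes.Buffer instead of writing to the client.
type bufferWriter struct {
	header http.Header
	status int
	body   bytes.Buffer
}

func newBufferWriter() *bufferWriter {
	return &bufferWriter{header: make(http.Header), status: http.StatusOK}
}

func (bw *bufferWriter) Header() http.Header         { return bw.header }
func (bw *bufferWriter) WriteHeader(code int)        { bw.status = code }
func (bw *bufferWriter) Write(p []byte) (int, error) { return bw.body.Write(p) }

// A filter drains the fake writer, transforms the content, and refills it.
type filter func(fake *bufferWriter)

// gzipFilter replaces the buffered body with its gzipped form.
func gzipFilter(fake *bufferWriter) {
	var tmp bytes.Buffer
	io.Copy(&tmp, &fake.body) // drain the fake w into a temp buffer
	fake.body.Reset()

	gz := gzip.NewWriter(&fake.body) // re-fill the fake w
	io.Copy(gz, &tmp)
	gz.Close()

	fake.Header().Set("Content-Encoding", "gzip")
}

// run applies the filters in order and only then copies headers, status and
// body to the real writer, so only this function touches the client.
func run(w http.ResponseWriter, fake *bufferWriter, filters ...filter) {
	for _, f := range filters {
		f(fake)
	}
	for k, v := range fake.header {
		w.Header()[k] = v
	}
	w.WriteHeader(fake.status)
	io.Copy(w, &fake.body)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fake := newBufferWriter()
		fake.Write([]byte("hello, filtered world")) // stand-in for file content
		run(w, fake, gzipFilter)
	})
	http.ListenAndServe(":8080", nil)
}
```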
Yes, you're correct that this means we can't use `http.ResponseWriter` directly, but it was you who suggested that we have features like gzip, so eventually even your proposal would lead to the same pattern.
About this becoming a micro framework: I didn't ask for built-in features. I just need something so that I can write my own micro framework, and what other way is there to write my own micro framework than to use the technique I suggested?
Oh, and you're right about the order being important; my example is wrong, just for the record: it gzips and caches before the Less filter, lol.
OK, I see.
But my proposal was to just embed gzipping (and cache headers) inside the httprouter package. And for that, there is no need to expose any writer. This pattern would simply lead to:

```go
// 3 simple arguments and all the work in julienschmidt/httprouter:
//
//                      URL pattern          Source directory    Max-age
router.ServeStaticFiles("/static/*filepath", http.Dir("/public"), 3*time.Hour)
```

It's opinionated (but like I said, `http.ServeFile` and `httprouter.ServeFiles` themselves are already opinionated: they assume that `index.html` is the root and that you don't want caching, GZIP or anything else).
In summary:

- a new `ServeFiles` func?
- a `ServeStaticFiles` func.
- httprouter is described as "a lightweight high performance HTTP request router"; a middleware capability would break that concept.
> I just need something so that I can write my own micro framework, and what other way is there to write my own micro framework than to use the technique I suggested?

Just write it on top of httprouter, like Gin has done. No?
@whit3 lol... Okay, well, to be honest, I have been working on a closed-source, full-featured Go web framework for the past 1.5 years, and we did some benchmarks on the gzip packages available, and the cgo-based package by the youtube/vitess project was amazingly better. Hence, I got convinced into making sure you didn't get to put Go's standard gzip package in here. But of course, a simple web application wouldn't care and probably just needs something that works out of the box without complications.
I actually don't have any problems at all using the workaround, as that is what my web framework is doing, and there wouldn't be much of an impact on me even if this issue wasn't closed. However, it would affect others that need a simple solution, so I'd admit defeat only on that point.
Aside from that, I've been thinking about whether it would make sense to open-source the framework that I have been working on as a team, but I'm not sure the current Go crowd would understand why there has to be another web framework among many others. But that's beyond the scope of this issue. :)
You win!
By the way, you can probably do something like this.
```go
type StaticFileOption struct {
	MimeType    string
	CacheExpire time.Duration
	GzipLevel   int
}
```
And here are some examples:

```go
// defaults the cache expiry to what Google PageSpeed thinks is acceptable (I think 7 days)
// defaults the gzip level to 1
router.ServeStaticFiles("/static/*filepath", http.Dir("/public"))

// for more control, you can do this
// (MIME types that are not listed fall back to the defaults)
router.ServeStaticFiles("/static/*filepath", http.Dir("/public"),
	StaticFileOption{MimeType: "text/css", CacheExpire: 24 * time.Hour},
	StaticFileOption{MimeType: "text/javascript", CacheExpire: 1 * time.Hour},
)
```
Just another possible way of doing it.
> […] the cgo-based package by the youtube/vitess project was amazingly better. Hence, I got convinced into making sure you didn't get to put Go's standard gzip package in here.

Best GZIP from a third-party package vs. basic GZIP from the standard library… For the actual usage of `ServeFiles`, it doesn't matter; it can't go wrong.
> By the way, you can probably do something like this.
>
> `type StaticFileOption struct { MimeType string; CacheExpire time.Duration; GzipLevel int }`
Why not, but this structure (which is a good idea) should be named `StaticFilesOptions` and be part of httprouter:

```go
router.ServeStaticFiles("/static/*filepath", http.Dir("/public"),
	&httprouter.StaticFilesOptions{
		MIMEType:    "text/css",
		CacheExpire: 24 * time.Hour,
	},
	&httprouter.StaticFilesOptions{
		MIMEType:    "text/javascript",
		CacheExpire: 1 * time.Hour,
	},
)
```
> Aside from that, I've been thinking about whether it would make sense to open-source the framework that I have been working on as a team, but I'm not sure the current Go crowd would understand why there has to be another web framework among many others.

I'm also developing 2 frameworks (one for REST APIs, another for full websites) with very modular components.
Let things happen! Nobody refuses new ideas and visions. :)
But I think many of us are developing routers, custom frameworks and other stuff around `net/http`, and I feel like open-source frameworks in Go are more like sources of inspiration or temporary tools.
Not necessarily because of the quality of these packages, but because the need for light and custom solutions quickly arises with a language like Go and its stdlib.
@whit3 I was using the Go Playground to make sure the code was correct, but forgot to put `httprouter` back. I actually did mean it the way you rewrote the code (the struct inside httprouter).

> Let things happen! Nobody refuses new ideas and visions. :)

The documentation part is being a killer... and it's still not at 1.0 status. You can see it in action at http://wojka.com (a mobile-only website, but it still works on desktop, of course).

> I'm also developing 2 frameworks (one for REST APIs, another for full websites) with very modular components.

Is your framework that is geared towards full websites open-sourced? If so, any links to it?
> The documentation part is being a killer…

Always! :)

> Is your framework that is geared towards full websites open-sourced?

I'm only at the beginning (I've been developing full-time in Go only since October 2014).
I'm currently using httprouter and not very DRY code. Too much logic is duplicated between my web projects, and none of the existing Go (micro)frameworks satisfy me (for various reasons, like non-idiomatic code, bad var/func/struct/interface naming, lack of modularity, the wrong balance between performance and abstraction, and many others).
That's why I will take the time to create the base packages (router, middleware handler, authentication, validations, i18n, CDN-compatible assets management, templates, cookies & sessions, simple real-time with WebSockets, and some others). With these independent packages, I'll compose my 2 homogeneous frameworks.
It's a lot of work, but when it's all done (tested, used in production and documented) it will be open-sourced (this year normally, under another GitHub account).
Anyway, what do you think about the current propositions for this issue, @julienschmidt?
> none of the existing Go (micro)frameworks satisfy me (for various reasons, like non-idiomatic code, bad var/func/struct/interface naming, lack of modularity, the wrong balance between performance and abstraction, and many others).

Seriously, I see the EXACT same problem. I also put readability (idiomatic Go & var/func/struct/interface naming) at the top of the list. It's an event-based framework; it uses WebSockets by default and falls back to Ajax. Both the client-side (JavaScript) and back-end code are extremely easy to read.
But why rewrite code when some great code already exists?

- router (using the httprouter package)
- templates (the standard Go template package)
- cookies & sessions (using only the securecookie package from gorilla)
- websockets (the websockets package by gorilla)

@whit3 can you send me an email? I'd like to discuss, if you're down for that. "einthusan @ paperboardinc.com"
I also agree 100% and would be interested in having an open discussion about this. Keep me in the loop.
Just want to weigh in that I like @einthusan's solution, if only because having proper composition is much more future-proof than any magic code. The minimalistic argument also does not hold, as I don't see any difference in minimality between the two approaches; in fact, the magic approach is less minimal, since it would mean the `ServeFiles` function was doing more than just serving files. It would become a black box that you cannot change, and if you really needed to change it, you'd be required to fork this project or re-implement everything instead of just swapping out the buggy part.
I don't think it's a good idea to idiot-proof an API but lose composition in the process. Sensible defaults are not that hard to use, even for a newbie. I'd say that if anyone can't figure out the how of einthusan's approach, he/she shouldn't be touching any routing code at all.
Also, the opinionated solution is very un-Go-like, but that might be just me.
Related question: I just implemented gzip compression in my project, but since I use both `httprouter.Handle` and `http.Handler` (static files, `r.NotFound`), I had to duplicate the code a bit:
```go
func C(h httprouter.Handle) httprouter.Handle {
	return func(w http.ResponseWriter, r *http.Request, p httprouter.Params) {
		w.Header().Add("Vary", "Accept-Encoding")
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			h(w, r, p)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzip.NewWriter(w)
		defer gz.Close()
		h(&gzipResponseWriter{Writer: gz, ResponseWriter: w}, r, p)
	}
}

func Compress(h http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Add("Vary", "Accept-Encoding")
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			h.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzip.NewWriter(w)
		defer gz.Close()
		h.ServeHTTP(&gzipResponseWriter{Writer: gz, ResponseWriter: w}, r)
	})
}
```
Is there any way I could make this look prettier given the current circumstances?
Why do you need to use two different ones? Why not just `httprouter.Handle`?
That's what I thought at first, too, but I couldn't seem to pass an `http.FileServer` to it, as in:

```go
r.NotFound = C(http.FileServer(http.Dir("../static"))).ServeHTTP
```
Use an anonymous func wrapper inline.
Or... can you tell me what your scenario is? What are you trying to accomplish in the bigger picture?
I'm fairly new to Go, so I'm sorry if it's an easy problem that I'm over-complicating. Both the input and output types of `C` are `httprouter.Handle`, so I would need both an inline func wrapper in the call to `C` (`http.Handler` -> `httprouter.Handle`) and then a wrapper around the call to `C` itself (`httprouter.Handle` -> `http.HandlerFunc`)? If I return `httprouter.Handle`, I can't use `ServeHTTP`, right?
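For what it's worth, the double wrap described above would look roughly like this. This is only a sketch: it assumes the `C` wrapper from the snippet further up is in scope, and whether `NotFound` accepts an `http.Handler` or an `http.HandlerFunc` depends on the httprouter version in use:

```go
func registerNotFound(router *httprouter.Router) {
	static := http.FileServer(http.Dir("../static"))

	// http.Handler -> httprouter.Handle, so the file server can be passed to C.
	wrapped := C(func(w http.ResponseWriter, r *http.Request, _ httprouter.Params) {
		static.ServeHTTP(w, r)
	})

	// httprouter.Handle -> http.Handler, so the result can be assigned to NotFound.
	router.NotFound = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		wrapped(w, r, nil)
	})
}
```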
@Puffton I tried all sorts of ways to make the code simpler... it just doesn't work... your original solution is the best it can be. I really don't like this new change where `NotFound` was changed from `httprouter.Handle` to `http.Handler`... @julienschmidt can you explain what this is about? It makes for really bad code from my standpoint.
Still an issue in 2017. For all practical use cases, I cannot simply use `http.FileServer` as-is, because it is woefully inadequate; I always have to write my own three-liner "magic function" (as they are called here) because I want to do something "extreme" like set a custom header on every response.
This is not good software design, and I question whether, as a primarily offline-only native software developer, I have the ability to write code in my own custom HTTP response handlers that is even nearly as secure or robust as the original implementation written by the library authors themselves.
Wow, just ran into this too; the lack of appropriate cache-control headers and gzip is a bit disheartening :/ Now I have to figure out what magic everyone else has done to get around this missing built-in support!
Bump
I am not sure how to do this... Is there support for it? It seems the API does not allow for such a thing.