vibe-d / vibe.d


Zero-copy file serving #143

Open vizanto opened 11 years ago

vizanto commented 11 years ago

After browsing through the source: fileserver.d -> server.d -> core/net.d -> drivers/libev.d -> stream.d

I noticed files are written using writeDefault(), which always buffers in 64K chunks. That's so 1990's ;-)

When gzip isn't required (for example, when the data is already gzipped on disk), the OS kernel should just do the network transfer. Basically I'd like fileserver.d to do something like this: http://wiki.nginx.org/HttpGzipStaticModule without userspace buffering.
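
For context, the kernel-side mechanism for this on Linux is sendfile(2). The sketch below is not vibe.d code; the `sendFileZeroCopy` helper and the hand-declared `sendfile` prototype are illustrative assumptions for a Linux/POSIX system, just to show how a file can go from the page cache straight to the socket without a userspace buffer:

```d
// NOTE: illustrative sketch only, not vibe.d's actual file serving code.
// Sends a whole file over an already connected socket using sendfile(2),
// so the kernel moves data from the page cache to the socket without
// copying it through a userspace buffer first.
import core.sys.posix.fcntl : open, O_RDONLY;
import core.sys.posix.unistd : close;
import core.sys.posix.sys.stat : fstat, stat_t;
import core.sys.posix.sys.types : off_t, ssize_t;

// Declared by hand here; on Linux the prototype lives in <sys/sendfile.h>.
extern (C) nothrow @nogc ssize_t sendfile(int out_fd, int in_fd, off_t* offset, size_t count);

/// Hypothetical helper: send the file at `path` to the connected socket `sockfd`.
bool sendFileZeroCopy(int sockfd, const(char)* path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return false;
    scope (exit) close(fd);

    stat_t st;
    if (fstat(fd, &st) != 0) return false;

    off_t offset = 0;
    while (offset < st.st_size)
    {
        // sendfile() may transfer less than requested, so loop until the file is done.
        auto sent = sendfile(sockfd, fd, &offset, cast(size_t)(st.st_size - offset));
        if (sent <= 0) return false; // a real implementation would retry on EINTR/EAGAIN
    }
    return true;
}
```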

s-ludwig commented 11 years ago

Left to do:

jkm commented 11 years ago

I'm looking into the zero-copy issue. The comment says it doesn't work on Windows. Currently I'm trying to reproduce the problem, but I'm on Linux only.

s-ludwig commented 11 years ago

If I remember correctly, Linux had different symptoms, but ultimately it didn't work either (when I made that comment, I had only tested on Windows).

jkm commented 11 years ago

Should I just run httperf and see whether it reports any errors? I just need a starting point. httperf also reports errors for an unchanged HEAD.

jkm commented 11 years ago

It seems the errors are from httperf. I need to raise the number of open file descriptors. I'm looking into it.

s-ludwig commented 11 years ago

Sorry, I don't remember exactly, but I think I didn't even get a single file delivered using that code. On Windows it simply crashed; on Linux I'm not sure, but it didn't work either. If it does work now, maybe something has changed in the latest libevent version?

I can try again on Windows in the coming days but won't have a Linux box available for the next two weeks.

jkm commented 11 years ago

No problem. It looks like the code works here. I'm running libevent 2.0.21 on Debian. I tested with httperf and ab; both give errors when I execute more than 1500 requests per second, but this happens independently of zero copying, i.e. it is unrelated. Is this to be expected? At 1000 requests I get no errors and some speedup.

s-ludwig commented 11 years ago

1500 requests per second or 1500 concurrent requests (i.e. "ab -c 1500 ...")? In the latter case, getting errors would be normal, at least on Ubuntu, as the default "ulimit" for open file descriptors is somewhere around 1000.

I'll retry on Windows with 2.0.21.

jkm commented 11 years ago

Sorry, I meant concurrent requests (-c 1000). But I have configured my system to allow more file descriptors per process:

$ ulimit -n
200000
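
For reference, a process can also raise this soft limit itself, up to the hard limit, via setrlimit(2). A minimal sketch, not part of vibe.d and assuming a POSIX system:

```d
// NOTE: illustrative sketch only, not part of vibe.d. Raises the soft
// RLIMIT_NOFILE limit (what `ulimit -n` shows) from inside the process,
// capped at the hard limit; assumes a POSIX system.
import core.stdc.stdio : printf;
import core.sys.posix.sys.resource : getrlimit, setrlimit, rlimit, rlim_t, RLIMIT_NOFILE;

void raiseOpenFileLimit(rlim_t wanted)
{
    rlimit lim;
    if (getrlimit(RLIMIT_NOFILE, &lim) != 0) return;
    printf("soft limit: %llu, hard limit: %llu\n",
           cast(ulong) lim.rlim_cur, cast(ulong) lim.rlim_max);

    // The soft limit may not exceed the hard limit for an unprivileged process.
    lim.rlim_cur = wanted < lim.rlim_max ? wanted : lim.rlim_max;
    setrlimit(RLIMIT_NOFILE, &lim);
}

void main() { raiseOpenFileLimit(200_000); }
```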