Closed edrex closed 14 years ago
Hello edrex,
I'm not able to reproduce the problem.
Look below: I made 10,000 requests with 500 in parallel.
I'm using epoll as the backend, Python 2.6, the latest release of Fapws3 on master, and libev 3.8.
    Server Software:        fapws3/0.6
    Server Hostname:        127.0.0.1
    Server Port:            8080

    Document Path:          /short
    Document Length:        12 bytes

    Concurrency Level:      500
    Time taken for tests:   3.017 seconds
    Complete requests:      10000
    Failed requests:        0
    Write errors:           0
Hmm, I'm not able to reproduce the issue now either.
This limit is closely tied to your kernel's parameters (in particular the open file descriptor limits), so feel free to tune them accordingly.
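For reference, a quick sketch of how to inspect and raise those limits on Linux (the exact values are assumptions; pick numbers appropriate for your machine):

```shell
# Check the current per-process open-file limit (soft limit)
ulimit -n

# Raise it for the current shell session
# (works only up to the hard limit; root can raise the hard limit too)
ulimit -n 4096

# System-wide maximum number of open files (Linux)
cat /proc/sys/fs/file-max
```

A permanent change usually goes through /etc/security/limits.conf or sysctl rather than the shell.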
I was running benchmarks of various async web frameworks, à la http://brizzled.clapper.org/id/88.html.
I was using examples/hello and hitting http://localhost:8080/short (which serves a small file from disk via a standard blocking open() call) with 50 concurrent connections via ab. What happens, of course, is that the process quickly runs out of file descriptors.
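To make the failure mode concrete, here is a minimal, hypothetical WSGI-style handler in the spirit of the /short example (the handler name and file path are my own, not from the fapws3 examples). Each in-flight request holds an open descriptor; under enough concurrency the per-process fd limit is exhausted, and forgetting to close the file makes it worse:

```python
# Hypothetical WSGI-style handler illustrating the blocking open() pattern.
# Under high concurrency every in-flight request holds a file descriptor,
# so the process can hit its fd limit; the blocking read also stalls a
# single-threaded event loop while the disk I/O completes.

def short_handler(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    # 'with' guarantees the descriptor is released even if read() fails
    with open("short.txt", "rb") as f:  # blocking call
        return [f.read()]
```

Using `with` at least caps fd usage at one per in-flight request; the blocking itself is the harder problem.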
I wonder if the authors of this software have ideas about how to do file I/O in a non-blocking way from within a fapws3 server. Node.js (which I was also benchmarking) uses libeio with a thread pool to keep I/O from blocking the main thread, and it handles large concurrent loads easily. I haven't found anything similar for Python, but I suspect it's out there somewhere.
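The libeio idea translates to Python fairly directly: hand blocking reads to a small thread pool and let the event loop pick up the result when it's ready. A minimal sketch using the standard-library concurrent.futures module (stdlib since Python 3.2; a "futures" backport existed for 2.x, so this would not have worked on stock Python 2.6 without it). The function names here are my own:

```python
# Sketch of the libeio-style approach: blocking file reads run on a
# small worker pool instead of the event-loop thread. The loop would
# poll or register a callback on the returned Future and write the
# response once the read completes.

from concurrent.futures import ThreadPoolExecutor

_io_pool = ThreadPoolExecutor(max_workers=4)

def read_file_async(path):
    """Return a Future whose result is the file's contents as bytes."""
    def _read():
        with open(path, "rb") as f:
            return f.read()
    return _io_pool.submit(_read)
```

Usage: `fut = read_file_async("short.txt")`, then `fut.result()` (or `fut.add_done_callback(...)` to stay non-blocking). The pool also bounds the number of simultaneously open files at `max_workers`, which sidesteps the fd-exhaustion problem above.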