xplicit / HyperFastCgi

Performant nginx to mono fastcgi server
MIT License
129 stars 49 forks

"Sent unsupported FastCGI protocol version" error with keepalive #55

Closed Ustimov closed 8 years ago

Ustimov commented 8 years ago

I was testing my ServiceStack web service with the httperf tool and was periodically getting a "sent unsupported FastCGI protocol version" error. Here is my error log:

2016/05/04 20:27:12 [error] 23191#0: *18 upstream sent unsupported FastCGI protocol version: 0 while reading upstream, client: 195.19.44.143, server: localhost:8000, request: "GET /db/api/thread/list?forum=hgezdt6ufw&limit=52&order=desc&thread=374 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1"
2016/05/04 20:27:18 [error] 23191#0: *25 upstream sent unsupported FastCGI protocol version: 0 while reading upstream, client: 195.19.44.143, server: localhost:8000, request: "GET /db/api/post/list?post=976184&limit=75&order=asc&forum=qz72 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1"
2016/05/04 20:27:19 [error] 23191#0: *26 upstream sent unsupported FastCGI protocol version: 0 while reading upstream, client: 195.19.44.143, server: localhost:8000, request: "GET /db/api/thread/listPosts?limit=66&order=desc&thread=5651 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1"
2016/05/04 20:27:26 [error] 23191#0: *39 upstream sent unsupported FastCGI protocol version: 0 while reading upstream, client: 195.19.44.143, server: localhost:8000, request: "GET /db/api/forum/listThreads?related=user&related=forum&limit=39&order=asc&forum=mvsf6q HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1"
2016/05/04 20:27:32 [error] 23191#0: *49 upstream sent unsupported FastCGI protocol version: 0 while reading upstream, client: 195.19.44.143, server: localhost:8000, request: "GET /db/api/thread/listPosts?limit=75&order=desc&thread=1556 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "127.0.0.1"

After some research I found that the occurrence of the error depends on whether keepalive is enabled or disabled.

Here is my nginx config:

upstream fastcgi_backend {
  server 127.0.0.1:9000;
  #keepalive 32;
}
server {
  listen 8000;
  server_name localhost:8000;
  error_log /var/log/nginx/Forum.Error.log;

  location / {
    root /root/forum/Forum/;
    index index.html index.htm default.aspx Default.aspx;
    fastcgi_index Default.aspx;
    #fastcgi_keep_conn on;
    fastcgi_pass fastcgi_backend;
    include /etc/nginx/fastcgi_params;
  }
}
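
For context, nginx pools upstream FastCGI connections only when both of the commented-out directives are enabled together: per the nginx docs, keepalive in the upstream block caches idle connections to the backend, while fastcgi_keep_conn on tells nginx not to close the FastCGI connection after each request. A minimal sketch of the keepalive-enabled variant:

```nginx
upstream fastcgi_backend {
  server 127.0.0.1:9000;
  keepalive 32;              # cache up to 32 idle connections per worker process
}

server {
  listen 8000;

  location / {
    fastcgi_keep_conn on;    # required for the upstream keepalive cache to take effect
    fastcgi_pass fastcgi_backend;
    include /etc/nginx/fastcgi_params;
  }
}
```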

So, if I remove the # before the commented lines, I get these errors. Otherwise, running HyperFastCgi with the --keepalive=false switch, everything is fine.

If you need any additional information, I'll be glad to provide it.

xplicit commented 8 years ago

Can you post a reproducible test case?

Ustimov commented 8 years ago
  1. All actions were tested on a clean Ubuntu 14.04.4.
  2. Download a database dump: wget https://www.dropbox.com/s/6tkw43o97ptoblk/forum.tar.gz?dl=0.
  3. Extract the dump: tar -xzf forum.tar.gz.
  4. Install MySQL: sudo apt-get install mysql-server mysql-client.
  5. Create a database and a user:
mysql -u root -p
create database forum;
create user 'admin'@'localhost' identified by 'admin';
grant all privileges on forum.* to 'admin'@'localhost';
exit
  6. Restore the database from the dump (a long-running operation, about 20-30 mins): mysql -u root -p forum < forum.sql
  7. Install mono, git and HyperFastCgi:
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-xamarin.list
sudo apt-get update
sudo apt-get install mono-complete
sudo apt-get install git
git clone https://github.com/xplicit/HyperFastCgi.git
sudo apt-get install autoconf automake libtool make libglib2.0-dev libevent-dev
cd HyperFastCgi/
./autogen.sh --prefix=/usr
make
sudo make install
  8. Clone my project and build:
git clone https://github.com/Ustimov/Forums.git
cd Forums/
sudo apt-get install nuget
nuget restore
xbuild /p:Configuration=Release
  9. Install nginx: sudo apt-get install nginx
  10. Configure nginx and HyperFastCgi: create the file /etc/nginx/sites-available/forum (for example with sudo nano /etc/nginx/sites-available/forum) and paste the following (set paths according to your environment):
upstream fastcgi_backend {
  server 127.0.0.1:9000;
  keepalive 32;
}
server {
  listen 8000;
  server_name localhost:8000;
  error_log /var/log/nginx/Forum.Error.log;

  location / {
    root /root/Forums/Forum/;
    index index.html index.htm default.aspx Default.aspx;
    fastcgi_index Default.aspx;
    fastcgi_keep_conn on;
    fastcgi_pass fastcgi_backend;
    include /etc/nginx/fastcgi_params;
  }
}

Create a symlink to the forum file:

rm /etc/nginx/sites-enabled/default
ln -s /etc/nginx/sites-available/forum /etc/nginx/sites-enabled/forum
sudo service nginx restart

Create server.config with this content (set paths according to your environment):

<configuration>
        <server type="HyperFastCgi.ApplicationServers.SimpleApplicationServer">
                <!-- Host factory defines how host will be created. SystemWebHostFactory creates host in AppDomain in standard ASP.NET way --> 
                <host-factory>HyperFastCgi.HostFactories.SystemWebHostFactory</host-factory>
                <!-- <threads> creates threads at startup. Value "0" means default value --> 
                <threads min-worker="40" max-worker="0" min-io="4" max-io="0" />
                <!--- Sets the application host root directory -->
                <!-- <root-dir>/path/to/your/dir</root-dir> -->
        </server>
        <listener type="HyperFastCgi.Listeners.NativeListener">
                <apphost-transport type="HyperFastCgi.Transports.NativeTransport">
                        <multithreading>Single</multithreading>
                </apphost-transport>
            <protocol>InterNetwork</protocol>
            <address>127.0.0.1</address>
            <port>9000</port>
        </listener>
    <apphost type="HyperFastCgi.AppHosts.AspNet.AspNetApplicationHost">
                <log level="Debug" write-to-console="true" />
                <add-trailing-slash>false</add-trailing-slash>
    </apphost>
    <web-applications>
        <web-application>
                <name>Forum</name>
                <vhost>0.0.0.0</vhost>
                <vport>8000</vport>
                <vpath>/</vpath>
                <path>/root/Forums/Forum/</path>
        </web-application>
    </web-applications>
</configuration>
  11. Run the application: hyperfastcgi4 --config=server.config
  12. Clone the tests:
git clone https://github.com/s-stupnikov/technopark-db-api.git
cd technopark-db-api/tests/
nano perf_test.py 
change CONFIG_PATH = '/usr/local/etc/test.conf' to CONFIG_PATH = '../conf/test.conf'
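
As a scriptable alternative to editing perf_test.py by hand, the same change can be made with sed (a sketch; it assumes the CONFIG_PATH line matches the quoted text exactly, and GNU sed as shipped with Ubuntu):

```shell
# Demo setup (hypothetical stand-in for the real perf_test.py): a file
# containing the original CONFIG_PATH line
printf "CONFIG_PATH = '/usr/local/etc/test.conf'\n" > perf_test.py

# Rewrite the config path in place; using | as the s/// delimiter avoids
# having to escape the slashes in the paths
sed -i "s|CONFIG_PATH = '/usr/local/etc/test.conf'|CONFIG_PATH = '../conf/test.conf'|" perf_test.py

# Show the resulting line to confirm the edit
grep "CONFIG_PATH" perf_test.py   # prints: CONFIG_PATH = '../conf/test.conf'
```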
  13. Download a test scenario: wget https://www.dropbox.com/s/eg5o1edz8y216j7/ustimov_httperf_scenario?dl=0.
  14. Install httperf and run the tests:
sudo apt-get install httperf
httperf --hog --client=0/1 --server=127.0.0.1 --port=8000 --uri=/ --send-buffer=4096 --recv-buffer=16384 --add-header='Content-Type:application/json\n' --wsesslog=100,0.000,ustimov_httperf_scenario

P.S. With the clean installation I don't get the same error (I used Debian before), but now I get:

2016/06/04 14:00:25 [error] 1633#0: *77 upstream sent too big header while reading response header from upstream, client: 127.0.0.1, server: localhost:8000, request: "GET /db/api/user/listPosts?limit=29&user=amizy%40ua.ru&order=asc HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost"
2016/06/04 14:00:25 [error] 1633#0: *79 upstream sent too big header while reading response header from upstream, client: 127.0.0.1, server: localhost:8000, request: "GET /db/api/forum/listPosts?related=forum&related=thread&limit=25&order=asc&forum=1lnro HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "localhost"
xplicit commented 8 years ago

The "too big header" error message is unrelated to the error described in the title of this issue and is expected nginx behaviour.

Nginx limits the size of FastCGI response headers, so if your application sends big headers in its responses, you should increase these limits. The relevant settings are fastcgi_buffer_size and fastcgi_buffers, for example:

    fastcgi_buffer_size 512k;
    fastcgi_buffers 512 4k;

The numbers in this sample are pretty large, so for a real application you should tweak them to your needs. Here is a simple guide on tuning the buffers: https://easyengine.io/tutorials/nginx/tweaking-fastcgi-buffers/
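
To get a sense of scale for those sample values (a back-of-the-envelope sketch, not a recommendation): fastcgi_buffers 512 4k allows up to 512 × 4k of response buffering per connection, on top of the 512k header buffer:

```shell
# Per-connection buffer memory implied by the sample settings above
header_k=512        # fastcgi_buffer_size 512k
bufs=512            # fastcgi_buffers 512 4k -> number of buffers
buf_k=4             #                        -> size of each buffer
echo "$(( header_k + bufs * buf_k ))k"   # prints: 2560k, i.e. 2.5 MB per connection
```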

The second point is that in your case you are trying to perform two mutually exclusive operations at once: profiling the service and measuring its performance. If you want to measure the performance of the service, you should disable MiniProfiler by commenting out these two lines:
https://github.com/Ustimov/Forums/blob/master/Forum/Global.asax.cs#L54 https://github.com/Ustimov/Forums/blob/master/Forum/Global.asax.cs#L59

because on every request MiniProfiler adds some info to the headers, and after a large number of requests the headers grow to several megabytes. Measuring the performance of a few-byte response body with megabyte-sized headers does not make sense.

If you want to profile the service to find the bottlenecks, run a limited number of requests and analyze the data returned by ServiceStack.

If you hit "Sent unsupported FastCGI protocol version" again, please feel free to open a new issue.