silvioprog / brookframework

Microframework which helps to develop Pascal web applications.
https://github.com/risoflora/brookframework
GNU Lesser General Public License v3.0

Brookframework with sagui (legacy) #164

Closed Al-Muhandis closed 5 years ago

Al-Muhandis commented 5 years ago

These errors do not appear in the demo project, only under a real workload. The legacy apps are quite greedy (memory and CPU) compared to the good old Brook 3. There is also a fairly significant memory leak. Of course, it may be in my code, but I have not yet found where the leak occurs.

silvioprog commented 5 years ago

Hello @Al-Muhandis , I took a look at the following relevant log line you posted:

...
Hit process or system resource limit at 1 connections, temporarily suspending accept(). Consider setting a lower MHD_OPTION_CONNECTION_LIMIT.
...

It is strongly recommended to change the following properties when deploying an application to production:

TBrookHTTPServer.ConnectionLimit -> Limit of concurrent connections.
TBrookHTTPServer.ConnectionTimeout -> Inactivity time before a client connection times out.
TBrookHTTPServer.ThreadPoolSize -> Size of the thread pool (on Linux, you can get it via "$ getconf _NPROCESSORS_ONLN").
TBrookHTTPServer.Threaded -> If True, the server creates one thread per connection.

The properties above influence the performance and capacity of your server. For example, I've used the following configuration for our customers:

BrookHTTPServer1.ConnectionLimit := 1000; // Change to 10000 for C10K problem - http://www.kegel.com/c10k.html
// BrookHTTPServer1.ConnectionTimeout // Not required for the test
BrookHTTPServer1.ThreadPoolSize := GetCPUCount; // Get the available online CPUs
BrookHTTPServer1.Threaded := False; // Disabled, since we are using the thread pool

I've used the same configuration to run benchmarks. GetCPUCount is a small function that returns the number of available online CPUs, something like this.
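The GetCPUCount helper itself isn't reproduced in the thread; as a rough sketch of what it computes, the online CPU count can be read straight from the shell, using the same getconf variable mentioned for ThreadPoolSize above:

```shell
# Print the number of CPUs currently online; a GetCPUCount helper
# would typically return this same value (via sysconf on Linux).
getconf _NPROCESSORS_ONLN
```

A Pascal GetCPUCount would usually wrap the equivalent sysconf(_SC_NPROCESSORS_ONLN) call, but treat that as an assumption, since the snippet the comment links to isn't shown here.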

I have another suggestion for you: it is strongly recommended to do some profiling of your server in production. This link contains the tools and command lines to profile while simulating a congestion situation with many simultaneous incoming requests. I've used ab, wrk and JMeter.
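The linked page isn't reproduced here, but typical invocations of the mentioned tools look roughly like this. Note these are placeholders, not values from the thread, and they need a server already listening on the given address:

```shell
# ApacheBench: 10000 requests total, 100 concurrent clients
ab -n 10000 -c 100 http://127.0.0.1:8085/

# wrk: 4 threads, 100 open connections, run for 30 seconds
wrk -t4 -c100 -d30s http://127.0.0.1:8085/
```

Watch the requests-per-second and error counts while varying the concurrency; that is where a too-low ConnectionLimit or ThreadPoolSize shows up first.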

Let me know how your tests go and whether the tools helped. :-)

silvioprog commented 5 years ago

Hello @Al-Muhandis , did you solve this problem? :-)

Al-Muhandis commented 5 years ago

Hello, @silvioprog ! Sorry, I was on a trip. Just got home today. I'll try your advice later. Thank you for your valuable comments!

Al-Muhandis commented 5 years ago

Yes, it works. Some notes:

  1. Application.Server.ThreadPoolSize=2; is not working for me. Although nproc gives 2 on my Debian. Maybe it is because it is a cloud VPS?
  2. Application.Server.ConnectionTimeout: where can I find the default value?

silvioprog commented 5 years ago
  1. Application.Server.ThreadPoolSize=2; is not working for me. ...

Some error?

... Although nproc gives 2 on my Debian.

Theoretically, any ThreadPoolSize value greater than 1 should work properly. :-)

... Maybe it is because it is a cloud VPS?

Usually, the company that provides the VPS service lists the number of CPUs in a "service features" panel. (An example here)

  2. Application.Server.ConnectionTimeout: where can I find the default value?

Hm... indeed it should be documented (I'll fix this soon). The default value is ConnectionTimeout := 0, i.e., an infinite timeout.

Al-Muhandis commented 5 years ago

Some error?

HTTP 502

Usually, the company that provides the VPS service lists the number of CPUs in a "service features" panel. (An example here)

Yes, Hetzner also states 2 cores for the CX40. But my server is behind nginx, maybe that's it.

  2. Application.Server.ConnectionTimeout: where can I find the default value? Anyway, it's not that important to me.

Hm... indeed it should be documented (I'll fix this soon). The default value is ConnectionTimeout := 0, i.e., an infinite timeout.

Thank you for the answer!

silvioprog commented 5 years ago

HTTP 502

Maybe the Nginx logs reveal more details. :-)

But my server is behind nginx, maybe that's it.

Hm... is your Brook application running behind a reverse proxy?

You can improve Nginx by making some changes in nginx.conf; for example, take a look at this link. These options are essential for Nginx to be able to process many requests.

Thank you for the answer!

You're welcome dude! :-)

Al-Muhandis commented 5 years ago

Maybe the Nginx logs reveal more details. :-)

Like this?

2018/12/26 22:48:04 [error] 15125#15125: *3359847 connect() failed (111: Connection refused) while connecting to upstream, client: x.x.x.x, server: sample.com, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8085/", host: "ww.sample.com"

Hm... is your Brook application running behind a reverse proxy?

My app listens on 127.0.0.1:8085 and serves users through nginx:

    location / {                
        location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpe?g|avi|zip|gz|bz2?|rar|swf|txt)$ {
            try_files $uri $uri/ @fallback;
            expires 1d;
        }
        proxy_pass http://127.0.0.1:8085;
    }

You can improve Nginx by making some changes in nginx.conf; for example, take a look at this link. These options are essential for Nginx to be able to process many requests.

My nginx already has the same settings:

user www-data;
worker_processes  2;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

... ... ... ... ..

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;
... ... ... ...

Al-Muhandis commented 5 years ago

Anyway, thanks for the advice! I have applied it. The topic can be closed if necessary.

silvioprog commented 5 years ago

Interesting. I used a reverse proxy some years ago with FastCGI. Doing it over plain HTTP seems awesome too.

Some tips: if your Nginx is 1.9.10 or higher, change:

#worker_processes  2;
worker_processes auto;

add:

worker_cpu_affinity auto;

and change:

events {
#    worker_connections  1024;
    worker_connections 10000;
}

If you don't need an access log, it can be disabled via access_log off; in the http section.
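Put together, the suggested nginx.conf changes would look roughly like this (a sketch; the connection numbers should be adjusted to your workload):

```nginx
worker_processes auto;        # one worker per online CPU (nginx >= 1.9.10)
worker_cpu_affinity auto;     # pin each worker to a CPU

events {
    worker_connections 10000; # per-worker connection cap
}

http {
    access_log off;           # drop access logging if it is not needed
    # ... the rest of the existing http section ...
}
```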

The worker_processes auto facility in Nginx is very handy. It would be nice to have something like this in the Sagui library:

sg_httpsrv_set_thr_pool_size(srv, 0); // Use zero to choose the value automatically.

I'll check this possibility and open an issue tagged as "feature request". :-)

Anyway, thanks for the advice! I have applied it. The topic can be closed if necessary.

You're welcome. :-) Did you solve the problem? If so, please close the issue.

Al-Muhandis commented 5 years ago

add:

worker_cpu_affinity auto;

and change:

events {
#    worker_connections  1024;
    worker_connections 10000;
}

Thanks, I will try them.

You're welcome. :-) Did you solve the problem?

I haven't localized the memory leak, but I think the framework has nothing to do with it.

Interesting. I used a reverse proxy some years ago with FastCGI. Doing it over plain HTTP seems awesome too.

Serving static files is better delegated to nginx; I think it copes with them better. Besides, I don't know how to configure the standard HTTP port alongside the internal custom port of my application.