Open · DrLynch opened 2 months ago
Per comments from ETS we may table this split and focus on other methods for proxy separation. Need to review tools.
For load balancing I found that Nginx itself can do this job, so I decided to use it. To make it work, I added several lines to the nginx.conf file:
1. `upstream backend_processes { server localhost:8888; server localhost:8890; }` in the `http{}` section. 8888 and 8890 are the ports that HTTP requests will be balanced across, and more `server` lines can be added for additional backend processes.
2. `proxy_pass http://backend_processes;` in the `location` block inside the `server` block of the `http` section.
3. `proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade";` in the same `location` block. This is needed to support the WebSocket-based web API, so streams from the user will also be balanced (see the consolidated sketch below).
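Putting the three items together, a minimal nginx.conf sketch might look like the following. The upstream ports 8888/8890 and the proxy directives come from the list above; the `listen 80` port and the `location /` path are assumptions for illustration and should be adjusted to the real deployment.

```nginx
# Minimal sketch only; listen port and location path are assumed.
events {}                               # required by nginx even if empty

http {
    # Pool of backend processes requests are balanced across;
    # add one "server" line per additional process.
    upstream backend_processes {
        server localhost:8888;
        server localhost:8890;
    }

    server {
        listen 80;                      # assumed front-end port

        location / {                    # assumed path
            proxy_pass http://backend_processes;

            # Needed so WebSocket upgrade requests are also proxied/balanced
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
        }
    }
}
```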
In the current version of the system, all data access and browsing go through a single process that manages event trapping and logging, data fetch, and dashboard work. Because it uses a single-threaded model, it is bound to a single core, which presents a bottleneck for us. To address this, we need a way to benchmark the costly data and dashboard operations and then separate them: first to decouple event recording from the dashboard, and second to distribute access across multiple processes. This will require separate system processes and likely the authoring of a distribution proxy and pool that routes incoming events by type.
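As a rough illustration of what such a distribution proxy and pool could look like, here is a hypothetical Python sketch (not the system's actual code; the worker names, event types, and queue layout are all invented for the example). It routes incoming events by type to separate processes so that event recording and dashboard work no longer share one core.

```python
# Hypothetical sketch: names and event types are illustrative only.
import multiprocessing as mp

def logging_worker(queue):
    """Consume event-trapping/logging events in a dedicated process."""
    for event in iter(queue.get, None):          # None acts as a shutdown sentinel
        print("log:", event)

def dashboard_worker(queue):
    """Consume data-fetch/dashboard events in a dedicated process."""
    for event in iter(queue.get, None):
        print("dashboard:", event)

def main():
    log_q, dash_q = mp.Queue(), mp.Queue()
    workers = [
        mp.Process(target=logging_worker, args=(log_q,)),
        mp.Process(target=dashboard_worker, args=(dash_q,)),
    ]
    for w in workers:
        w.start()

    # The proxy inspects each incoming event's type and routes it to the
    # matching pool; more queues/workers can be added per event type.
    routes = {"log_event": log_q, "dashboard_request": dash_q}
    for event in [{"type": "log_event", "data": 1},
                  {"type": "dashboard_request", "data": 2}]:
        routes[event["type"]].put(event)

    for q in (log_q, dash_q):
        q.put(None)                              # signal shutdown
    for w in workers:
        w.join()

if __name__ == "__main__":
    main()
```

The same routing idea would extend naturally to multiple processes per type, which is where the Nginx balancing described above would sit in front of the pool.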