hackoregon / civic-devops

Master collection point for issues, procedures, and code to manage the HackOregon Civic platform
MIT License

Switch web servers in the Django containers #189

Closed · MikeTheCanuck closed this 4 years ago

MikeTheCanuck commented 6 years ago

As Nathan succinctly puts it in #183,

Do we have a NGINX server? I've been wondering about this.

Currently we serve the Django service straight from the built-in uWSGI server, but this is not recommended for production use. The recommendation is to use uWSGI only for serving dynamic content, with an NGINX server in front as a reverse proxy, handing connections to the uWSGI server for responses and serving all the static assets itself. As far as I know this reduces the load on the application server and performs better under load.

Both Django and NGINX have functionality to enable this recommended production setup.
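
For concreteness, here's a minimal sketch of what the NGINX side of that setup might look like. The socket path, static-file directory, and upstream name are all illustrative assumptions, not values from our containers:

```nginx
# Hypothetical reverse-proxy config: NGINX in front, uWSGI behind.
upstream django {
    server unix:///tmp/uwsgi.sock;    # assumed uWSGI socket path
}

server {
    listen 80;

    # NGINX serves static assets itself, never touching uWSGI
    location /static/ {
        alias /app/static/;           # assumed collectstatic target
    }

    # All dynamic requests are handed off to uWSGI
    location / {
        include     uwsgi_params;     # standard params file shipped with NGINX
        uwsgi_pass  django;
    }
}
```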

I also wonder whether the lack of an NGINX server has anything to do with our persistent idle Django-DB connection issues.

At any rate, ours is not a recommended setup for production use, as Django's own documentation describes.

I know the answer will probably be that we don't have enough resources and bandwidth (the human kind, I mean), but is there any desire to move our nonstandard production setup to a recommended one? I've done it before on my own projects.

Maybe something we could tackle before or during next season?

nam20485 commented 6 years ago

I am wondering if we could achieve this from inside the backend container...

Sounds like I have a project to investigate!

MikeTheCanuck commented 6 years ago

Thankfully we have since discovered that the source of the persistent, idle, zombied database connections was a faulty "pooling" driver that didn't reuse connections on subsequent requests to our workers.

But the point definitely remains: now that we've stabilized our Django apps two years in a row, we could re-examine the use of gunicorn as our web server and see what the right choice would be for us.
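
(For reference, switching a container to a gunicorn entrypoint is mostly a one-file change; a minimal sketch, where `config.wsgi` and the tuning numbers are placeholders rather than our actual values:

```python
# gunicorn.conf.py -- illustrative only; launched with something like
#   gunicorn -c gunicorn.conf.py config.wsgi:application
bind = "0.0.0.0:8000"  # port the ALB/ECS target group would route to
workers = 3            # common rule of thumb: (2 * CPU cores) + 1
timeout = 60           # seconds before a silent worker is killed and restarted
```
)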

Acceptance Criteria

Having experienced first-hand the pain of troubleshooting the difference between "works on my dev box!" and how things operate in an AWS environment (ALB, ECS, NAT Gateway, internet/backbone-routed database traffic), I expect that any wholesale upgrade of our containers would be predicated on sustained testing in production.

bhgrant8 commented 6 years ago

So, to add a bit of detail about the current configuration: we are using the whitenoise package to handle the hosting of our relatively small Swagger frontends/static assets.

Whitenoise is specifically designed to be relatively lightweight and enforces some best practices, such as caching headers, compression, and CORS headers (many of which we would otherwise need to configure/maintain through NGINX ourselves), and it is fairly well established as an alternative for small/medium-size production applications:

http://whitenoise.evans.io/en/stable/index.html#what-s-the-point-in-whitenoise-when-i-can-do-the-same-thing-in-a-few-lines-of-apache-nginx-config
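
(For anyone unfamiliar, the whitenoise wiring on the Django side is just a couple of settings; a minimal sketch following the whitenoise docs, with the rest of the middleware stack elided:

```python
# settings.py (excerpt) -- whitenoise hookup per its documentation
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # directly after SecurityMiddleware
    # ... remaining middleware ...
]

# Compressed, cache-busting storage backend for collected static files
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
```
)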

Let's be clear about the benefits we are looking to gain if we were to move to a different server setup, as well as what we would lose/need to set up manually.

If our main question is server load/performance, might continuing to use whitenoise but putting a CDN in front of it be an option?

+1 on defined performance testing as well