Current deployment link: https://nsf-prototype.netlify.app
This project, in its current state, serves as a proof of concept investigating how city planners could identify wrongful evictions at CARES Act properties in the metro Atlanta area.
Navigate to the project board by clicking on the projects tab at the top of this page. You can also follow this link.
You must be invited in order to view the prototypes.
See `process.md` for additional details on important decisions made throughout the development and design process, as well as the justifications for those decisions.
> [!IMPORTANT]
> Additional documentation about each service can be found in the corresponding subdirectory's README files (`/db`, `/client`, `/server`).
The client side of the prototype is written in React and built with Vite. Frontend code can be found in the `client` directory.
The API for the prototype is served by FastAPI, an API framework for Python. All code related to the backend is located in the `server` directory.
At the core of this project is the geospatial database that powers the user's operations on eviction data. PostGIS, an extension on top of PostgreSQL, is responsible for this layer.
Database-related files are located in the `db` directory.
Before continuing, clone this repository. It is strongly recommended that you go through the following sections in the order they are written, as the frontend is dependent on the backend, and the backend is dependent on the database.
If you intend to use Docker for any local setup steps, install Docker Desktop on your machine.
There are two options for setting up the database: (1) use Docker with the prewritten Docker Compose service, or (2) install PostgreSQL and PostGIS locally. If you already have PostgreSQL installed, option 2 is recommended; otherwise, follow option 1.
From the root of the repository, run:
```shell
docker compose -f docker-compose-dev.yml up --build
```
This should build and run your database service.
Test your connection using a tool like DataGrip or pgAdmin. Ensure that your database is named `nsf-prototype` and that you have tables named `cares` and `counties`.
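If you prefer the command line over a GUI client, you can run the same check through the running container. The Compose service name `db` below is an assumption — check `docker-compose-dev.yml` for the actual service name.

```shell
# List the tables in the nsf-prototype database inside the container.
# NOTE: the service name "db" is a guess -- adjust it to match docker-compose-dev.yml.
docker compose -f docker-compose-dev.yml exec db \
  psql -U postgres -d nsf-prototype -c '\dt'
```

You should see `cares` and `counties` in the output.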
Install PostgreSQL if you haven't already.
Create a database with the name `nsf-prototype`.
Follow the PostGIS installation guide to extend the `nsf-prototype` database with geospatial capabilities.
Navigate to `db/seed` and run the following to seed your new database:

```shell
psql -d nsf-prototype -f dump.sql
```
Test your connection using a tool like DataGrip or pgAdmin. Ensure that your database is named `nsf-prototype` and that you have tables named `cares` and `counties`.
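To confirm from the command line that the extension and seed data are in place, you can query the PostGIS version and list the tables (this assumes your local `psql` connects as a user with access to the database):

```shell
# Should print a PostGIS version string if the extension is enabled.
psql -d nsf-prototype -c 'SELECT PostGIS_Version();'

# List tables to confirm the seed ran -- expect cares and counties.
psql -d nsf-prototype -c '\dt'
```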
For the backend, it is recommended that you set up the environment manually instead of using Docker.
Install Python 3.12.
Navigate to the `server` directory.
Run `pip install -r requirements.txt` to install all backend dependencies. If you'd like, create a virtual environment first and run the command inside it.
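For example, a minimal virtual environment setup might look like this (run from the `server` directory; the `.venv` name is just a convention):

```shell
# Create an isolated environment using Python 3.12 and install into it.
python3.12 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```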
Create a file named `.env`, and paste in the following contents:

```
DB_URL=postgresql://postgres@localhost/nsf-prototype
```
Replace the credential fields as necessary, following the URI schema:

```
postgresql://[username[:password]@][host[:port]]/database
```
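For instance, a hypothetical setup with a `postgres` user, a password, and a non-default port would look like this (values are illustrative, not real credentials):

```
DB_URL=postgresql://postgres:mypassword@localhost:5433/nsf-prototype
```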
Run `uvicorn src.main:app --reload --port 8443`. Your backend API should now be listening for requests on `localhost:8443`.
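As a quick smoke test, FastAPI serves interactive documentation at `/docs` by default, so you can verify the server is up with:

```shell
# Expect an HTTP 200 and an HTML page if the API is running.
curl -i http://localhost:8443/docs
```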
It is recommended that you manually set up the client-side environment instead of using Docker. Generally, performance and hot module replacement seem to be much better when Docker isn't used to serve the frontend on your local machine.
Install Node.js 18.
Navigate to the `client` directory.
Run `npm install`.
Create a file named `.env.local` within the `client` directory. Copy the snippet below and paste it into the file.

```
VITE_BACKEND_URL=http://localhost:8443
```
Run `npm run dev`. Vite should spin up a development server for you. Follow the localhost URL that Vite provides to view the website.
There are three major steps to properly deploying all aspects of this service to the cloud.
As this prototype is currently a proof of concept, there are no robust pipelines for maintaining cloud "production" environments. As such, existing deployments were set up using personal accounts on various cloud platforms. During the prototyping stage, deployments require manual work such as pulling the newest changes from the remote repository on the VM and re-launching the service. Similarly, there is currently no mechanism to log issues as they occur in real-time.
Because the previous deployments of this prototype were performed ad hoc, it is necessary to go through all the deployment steps again on a separate account. The accounts across multiple cloud providers (Cloudflare, AWS, Netlify) responsible for hosting the existing deployments will be inaccessible to you.
To properly query data on the frontend from your deployed backend environment, a custom domain is required to proxy and encrypt traffic between your frontend and backend endpoints.
Currently, a personal domain has been set up for this purpose using Cloudflare DNS, with an `A` record pointing the `nsf` subdomain to the public IP address of the backend VM deployment. Ensure that traffic is proxied and encrypted using SSL. Without this step, requests made to your backend server will not use HTTPS, which can prevent the website from working as intended.
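Once the record is in place, you can sanity-check DNS resolution from your terminal. The domain below is a placeholder — substitute your own:

```shell
# With Cloudflare proxying enabled, this resolves to Cloudflare edge IPs
# rather than your VM's public address -- that is expected.
dig +short nsf.example.com A
```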
Currently, the database and backend in the cloud environment live together on the same virtual machine, meaning that they share the same compute resources. Follow the steps below to deploy the database and the backend:
Set up a virtual machine on a cloud provider with at least 2 CPU cores and 4GB of memory. You can consider using AWS EC2, GCP Compute Engine, DigitalOcean Droplets, etc. To ensure compatibility, ensure that the operating system is Ubuntu running on x86.
Ensure that port 8443 is open to external traffic. This is the port that your frontend will use to hit your API endpoints.
SSH into your cloud environment and install the Docker Engine using `apt`.
Clone this repository on your virtual machine.
Navigate to the `server` directory.
Run the following command:

```shell
openssl req -x509 -newkey rsa:4096 -nodes -out cert.pem -keyout key.pem -days 365
```
You should now see the files `cert.pem` and `key.pem` in your `server` directory. These are important in ensuring an SSL-encrypted connection (HTTPS) to your API.
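You can sanity-check the generated certificate before moving on, for example by printing its validity window:

```shell
# Prints notBefore/notAfter dates; with -days 365 the cert expires in a year.
openssl x509 -in cert.pem -noout -dates
```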
Navigate back to the root directory of this project and run the following command:

```shell
docker compose -f docker-compose.yml up -d --build
```
You should now be serving the backend API and database on your virtual machine.
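To confirm the API is reachable from outside the VM, you can hit it over HTTPS from your own machine. Because the certificate is self-signed, `curl` needs `-k` to skip verification; the IP below is a placeholder — replace it with your VM's public address:

```shell
# Expect an HTTP 200 from FastAPI's default /docs page.
curl -ik https://203.0.113.10:8443/docs
```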
At the time of writing, the frontend is hosted on Netlify's free tier. Many other services offer similar guarantees, such as Vercel and Render. When making your own frontend deployments, ensure that your environment variables reference the production deployment of your backend.