stuartlynn opened 5 years ago
This is a great idea. The sheer size of MapPLUTO has made this database hard to work with; even with shp2pgsql it takes an hour to import. For that reason I figured maybe just the frontend and API code should be launched together, and the database should live separately, but I don't know if that makes sense either. Totally open to your suggestion!
We could load the database once using shp2pgsql, then use pg_dump to make a dump of the DB that folks can pull down locally to work on. That should restore pretty quickly, and I can even set up the docker-compose file to load it in automatically on boot.
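A rough sketch of that workflow, assuming a database named `pluto_db` and the usual MapPLUTO SRID of 2263 (the actual names, paths, and connection details in the project may differ):

```shell
# One-time load: import the shapefile into PostGIS
# (-s sets the SRID, -I builds a spatial index)
shp2pgsql -s 2263 -I MapPLUTO.shp public.mappluto | psql -d pluto_db

# Dump the loaded database in compressed custom format
pg_dump -Fc -d pluto_db -f pluto_db.dump

# Developers restore locally -- much faster than re-running the shapefile import
pg_restore -d pluto_db --clean --if-exists pluto_db.dump
```

The custom-format (`-Fc`) dump compresses well and lets `pg_restore` restore tables selectively later.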
I worry about developing against a live API; I've seen databases get pretty messed up that way. I'll reach out to you on Twitter to get the live connection string and try doing the dump + load locally.
In theory we only need to back up the much smaller `colors` table; the `mappluto` table is read-only, so we can load it from the published source: https://www1.nyc.gov/assets/planning/download/zip/data-maps/open-data/nyc_mappluto_18v2_1_shp.zip
If a new developer can load `mappluto` using shp2pgsql and then restore an older copy of `colors`, that should work.
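A new-developer bootstrap along those lines might look like this (the database name, the shapefile name inside the zip, and the `colors.dump` filename are assumptions for illustration):

```shell
# Fetch and unpack the published MapPLUTO shapefile
curl -LO https://www1.nyc.gov/assets/planning/download/zip/data-maps/open-data/nyc_mappluto_18v2_1_shp.zip
unzip nyc_mappluto_18v2_1_shp.zip

# Load the read-only mappluto table from the shapefile
shp2pgsql -s 2263 -I MapPLUTO.shp public.mappluto | psql -d pluto_db

# Restore just the small, writable colors table from a shared dump
pg_restore -d pluto_db --table=colors colors.dump
```

Since `mappluto` never changes between releases, only the `colors` dump needs to be refreshed and shared.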
Currently, the PostGIS database, frontend, and backend all have to be started independently. I would like to propose we wrap them all up in a docker-compose.yml so development is as easy as `docker-compose up`.
I have an almost-working setup, but I'm curious whether you think this is a good way to go?
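For concreteness, a docker-compose.yml in that spirit might look roughly like the following sketch. The service names, image tag, ports, and the init-directory trick are all assumptions, not the actual setup:

```yaml
version: "3"
services:
  db:
    image: postgis/postgis:11-2.5   # image tag is an assumption
    environment:
      POSTGRES_DB: pluto_db
      POSTGRES_PASSWORD: example
    volumes:
      # Dumps/SQL placed here run automatically on first boot,
      # which is one way to load the database automatically
      - ./db-init:/docker-entrypoint-initdb.d
  api:
    build: ./api                     # hypothetical build context
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:example@db:5432/pluto_db
    ports:
      - "3000:3000"
  frontend:
    build: ./frontend                # hypothetical build context
    depends_on:
      - api
    ports:
      - "8080:8080"
```

With something like this in place, `docker-compose up` brings up all three pieces together, and the db service's init directory handles the dump restore on first run.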