Closed: vincerubinetti closed this 1 month ago
Name | Link
---|---
Latest commit | 22085612d96670b2608caec5ac14efebd224dc8a
Latest deploy log | https://app.netlify.com/sites/exploring-cancer-in-colorado/deploys/669e829f27f01e00088faad2
Deploy Preview | https://deploy-preview-82--exploring-cancer-in-colorado.netlify.app
To edit notification comments on pull requests, go to your Netlify site configuration.
Hey @vincerubinetti, about mapshaper: do you remember the preferences you selected when you simplified and compressed the geometry? Alternatively, could you upload the simplified shapefiles rather than the GeoJSON files?
I ask because I was intending to bring in the shapefiles and district data in the data processing pipeline rather than as an ad-hoc step, so I'd like to be able to at least merge the district representative data when it changes. (I doubt the geometry will change often, so I could just rely on your simplified shapefiles versus simplifying the source shapefiles in the pipeline.) I could use your GeoJSON as the source of the geometry and just rewrite the district reps data in it, too, if you prefer that.
I'll upload the shapefiles in my next comment, but for posterity, here's what I did:
I believe I used the default "Visvalingam / weighted area" option. Also, I just noticed the "prevent shape removal" option, which I probably also should've checked (I assume it avoids removing shapes when simplifying them would make their area 0). Then I think I did 10% on the simplification amount slider, which looked visually okay to me while still being a decent size (1 MB as geojson I think?).
Critically, I also had to change the projection to world coordinates (go to the "console" tab and run `-proj wgs84`), because otherwise it would output in some kind of local coordinates (e.g. `3196966, 1954575`) instead of normal long/lat.
Also FWIW if you did want to include this in the pipeline, it looks like "mapshaper" is a powerful geo-data manipulation library with a CLI.
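If mapshaper does end up in the pipeline, the GUI steps above collapse into a single CLI invocation. Here's a minimal sketch of how the pipeline might wrap it (file names are placeholders; this assumes the `mapshaper` CLI is on PATH, and that `-simplify 10% keep-shapes` corresponds to the GUI's default Visvalingam weighted-area method at 10% with "prevent shape removal" checked):

```python
import os
import shutil
import subprocess

def simplify_districts(src_shp: str, dest_geojson: str, amount: str = "10%") -> list[str]:
    """Reproduce the mapshaper GUI steps as one CLI call:
    simplify at `amount` (default method: Visvalingam weighted area),
    keep-shapes to prevent small polygons from vanishing,
    and reproject to WGS84 lon/lat before exporting GeoJSON."""
    cmd = [
        "mapshaper", src_shp,
        "-simplify", amount, "keep-shapes",
        "-proj", "wgs84",
        "-o", "format=geojson", dest_geojson,
    ]
    # Only run when the tool and input actually exist; either way,
    # return the command for logging/inspection.
    if shutil.which("mapshaper") and os.path.exists(src_shp):
        subprocess.run(cmd, check=True)
    return cmd
```

The function returns the argument list either way, which makes the command construction easy to log and unit-test without mapshaper installed.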
Simplified shapefiles:
Cool, thanks for the data files. I apologize for this, but after I asked I realized it would just be easier to load your frontend JSON files into the database, so I just did that. Once I implement the pipeline I'll use those shapefiles as well as fresh district info, so it's good to have them. I'll also check out `mapshaper`; I've been using `ogr2ogr`, which has some support for reducing the complexity of geometry, but perhaps `mapshaper` does a better job.
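For comparison, the `ogr2ogr` route would look something like the sketch below (hypothetical file names and tolerance; note that unlike mapshaper's percentage slider, `ogr2ogr -simplify` takes an absolute distance tolerance, so the right value depends on the coordinate system in play):

```python
import os
import shutil
import subprocess

def ogr_simplify(src_shp: str, dest_geojson: str, tolerance: str = "100") -> list[str]:
    """Reproject to WGS84 (EPSG:4326) and simplify geometry with
    ogr2ogr's distance-tolerance -simplify option, writing GeoJSON."""
    cmd = [
        "ogr2ogr",
        "-f", "GeoJSON",
        "-t_srs", "EPSG:4326",
        "-simplify", tolerance,
        dest_geojson, src_shp,
    ]
    # Guarded the same way as the mapshaper sketch: build the command
    # unconditionally, only execute when the tool and input exist.
    if shutil.which("ogr2ogr") and os.path.exists(src_shp):
        subprocess.run(cmd, check=True)
    return cmd
```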
Regarding the latest commits:

- I've implemented `LocationCategory` and `Location` models, which capture the contents of your `location.json` and `location-data.json` files.
- To pick up the new tables, you'll need to recreate the database: `./run_stack.sh down -v ; ./run_stack.sh`. (FWIW, this is addressed in PR #85, so you won't have to manually do that for too long.)
- `/locations/`: returns a dict of the form `{<category_name>: {<location_name>: <location_id>, ...}, ...}`, which should match the format of your `locations.json` file.
- `/locations/<location_id>`: returns a GeoJSON `FeatureCollection` for the requested `location_id`; each element matches the structure of the elements in `location-data.json`, with the top-level keys being replaced by the `<location_id>` parameter in the path.
- `/locations/by-category/<category_id>`: this isn't needed in the current implementation, but it returns a list of locations under the given category; e.g. the category ID `cancer-care` would return results for "mammography", "lung-cancer-screening", etc. The `category_id` parameter is a slugified version of the category name, e.g. "Cancer Care" becomes `cancer-care`.

I anticipate it should be straightforward to integrate into the frontend since it mostly follows the existing structure, but let me know if you run into issues.
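To make the payload shapes concrete, here's a hedged client-side sketch (the example category, names, and IDs are made up; the dict shape and the slugification scheme come from the endpoint descriptions above):

```python
import re

def slugify(name: str) -> str:
    """Mirror the described category_id scheme: lowercase the name
    and collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

# Hypothetical /locations/ response in the documented
# {<category_name>: {<location_name>: <location_id>, ...}, ...} shape.
locations = {
    "Cancer Care": {"Mammography": 12, "Lung Cancer Screening": 13},
}

# Invert it into a flat (category_slug, location_name) -> id lookup,
# the sort of index a frontend might build once on page load before
# fetching /locations/<location_id> for individual features.
index = {
    (slugify(category), name): loc_id
    for category, names in locations.items()
    for name, loc_id in names.items()
}
```

For instance, `slugify("Cancer Care")` yields `"cancer-care"`, matching the `category_id` example in the list above.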
> I've implemented the `LocationCategory` and `Location` models, which capture the contents of your `location.json` and `location-data.json` files.
I was wondering if we were going to put the locations.json "metadata" into the backend as well. I thought it'd be nice to be able to change the human-readable label of categories on the frontend (in a map of location label/name to backend location id) without having to update the backend. Then again, we have the human-readable labels for all the other data in the backend as well, so this is probably best for consistency.
I'll load the locations "metadata" once on page load. This all seems good, should be easy.
> I was wondering if we were going to put locations.json "metadata" into the backend as well. I thought it'd be nice to be able to change the human-readable label of categories on the frontend (in a map of location label/name to backend location id) without having to update the backend. Then again, we have the human readable labels for all the other data in the backend as well, so this is probably best for consistency.
Right, I did end up importing the metadata you referenced as tables into the database. I use your `locations.json` and `locations-data.json` files as input to the loading command, so if you want to update the database you could modify those files and send them to me, and I'll import them. We could use that as the means to update locations going forward, if that works for you. Also, once we have a proper data import pipeline you won't have to ask me; instead, you'll just edit the inputs and the pipeline will pick them up.
FWIW, the frontend changes look great! I demo'd them locally and everything worked nicely AFAICT.
> Also, once we have a proper data import pipeline you won't have to ask me; instead, you'll just edit the inputs and the pipeline will pick them up.
Yeah, probably not an urgent matter. I'm just envisioning it'd be nice to be able to, for example, tweak a label name quickly when we get a request from Jan and you happen to be out of office.
> Yeah, not an urgent matter probably. Just envisioning it'd be nice to be able to, for example, tweak a label name quickly when we get a request from Jan and you happen to be out of office.
If you want to retain control of that sort of thing in the frontend, you could supply your own file as you've done before and just ignore what I'm returning from the backend.
Looking forward, I'm hoping once I've got the pipeline in place that we can get into a rhythm where we make (versioned) changes in the input files, then have an automated process for creating a new database and deploying it. Ideally any dev on the team will be able to kick off that process, so if I'm out you'll be able to do it. Having the database as a single source of truth is IMHO a good thing; we'll know all the data and metadata is in agreement at a given point in time, versus it being spread across the stack and versioned separately.
@falquaddoomi I added some last commits to resolve (then fix) a merge conflict, and then just upgraded some packages in package.json (you can't see it because the Bun lockfile is binary, but all the installed packages except `@headlessui-float/vue` were already at their latest versions due to `^` pinning, so this really was just inconsequentially updating the package.json manifest itself).
I also updated the first comment to bullet-list the backend changes. Please update as needed, and that comment will automatically be the merge commit message.
Also, we'll need to be ready to deploy the new backend right away, since this is a breaking change for the frontend (it expects the `/locations` listing endpoint to be there, or it won't show the map).
Perhaps I should change this so that a failure to load locations is not a "critical" error (one that hides the map) while a failure to load the facets still is... In reality (after this PR is fully deployed), I'd imagine almost certainly either both will fail or both will succeed, so it doesn't really matter.