## Overview

This is a sub-PR for #807, replacing the analysis-side parts with a different approach.
My concerns with the upstream approach are:

- It repeats itself: it writes an extra copy of the speed limit files that are already on disk, and it runs the logic for looking up the right speed limits from those files multiple times.
- It adds analysis logic to the Django app. We already sort of do that, with the status updates and loading overall scores, but this was doing more, including writing out a results file from the Django app, which won't work for a local analysis run that isn't connected to a Django app.
- It's missing the step that reads the speed limit and saves it on the job instance during import.
I wasn't sure how to do a good job of describing what I was picturing instead, nor was I sure it was actually coherent and complete, so I did it in code.
This version of things:
- makes a new table to hold only the speed limit value(s) we looked up for the neighborhood, so we only do the lookup/filter logic once
- exports a CSV directly in the analysis script, so it won't depend on the app being present
- when running an analysis with an actual job ID, runs the management command to load the speed limit onto the instance (like the upstream version does, but based on the new exported CSV rather than the full tables); see the sketch after this list
- makes the `upload_local_analysis` task look for the file and load the speed limit if it's there (failing quietly if not, so it won't cause older exported analysis runs to crash)
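
As a rough illustration of the job-ID loading step, here's a minimal sketch of the equivalent Python call. The command name `load_speed_limit`, the argument order, and the CSV file name `neighborhood_speed_limit.csv` are my assumptions for illustration, not the actual interface:

```python
# Minimal sketch: load the exported speed limit onto a job instance by
# invoking a (hypothetical) `load_speed_limit` management command.
from django.core.management import call_command


def load_speed_limit_for_job(job_uuid):
    call_command(
        'load_speed_limit',
        job_uuid,                                # the analysis job's ID
        'results/neighborhood_speed_limit.csv',  # CSV exported by the analysis
    )
```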
## Notes
- The model changes included a `speed_limit_src` field that I don't think was being filled in. I made the `load_speed_limit` function set that value as well. It doesn't show up in the front-end, but it seems interesting enough to keep.
- Related to the above, I decided to export both the state and local limits rather than setting the table up as just one number and its source. I'm not sure that's the right way to do it (it does mean there's logic in the app that repeats part of what the analysis is doing), but it seemed simpler to implement (by which I guess I mean "less conditional logic in `bash`"). There's a rough sketch of that logic after this list.
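
To show what that repeated logic looks like, here's a minimal sketch of a `load_speed_limit` that prefers a local limit over the statewide one and fills in `speed_limit_src`. The CSV column names (`state_speed`, `city_speed`) and the job field names are assumptions, not the actual schema:

```python
import csv


def load_speed_limit(job, csv_path):
    """Set speed_limit and speed_limit_src on the job from the exported CSV.

    Assumes a single-row CSV with 'state_speed' and 'city_speed' columns;
    a local (city) limit, if present, wins over the statewide default.
    """
    with open(csv_path) as f:
        row = next(csv.DictReader(f), None)
    if not row:
        return  # empty export; leave the job untouched
    if row.get('city_speed'):
        job.speed_limit = int(row['city_speed'])
        job.speed_limit_src = 'city'
    elif row.get('state_speed'):
        job.speed_limit = int(row['state_speed'])
        job.speed_limit_src = 'state'
    job.save()
```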
## Testing Instructions
There are a few different permutations of this thing:
- Running an analysis with a job ID
- Running one without, then zipping up the contents of the results directory and importing it (there's a snippet for the zipping step after this list)
  - For the import part, I have an example file I put on S3, for Appleton, Wisconsin: https://s3.amazonaws.com/khoekema-pfb-storage-us-east-1/uploads/appleton_wi_results.zip
- Importing an old results file (e.g. https://s3.amazonaws.com/khoekema-pfb-storage-us-east-1/uploads/green_bay_results.zip), which should work but not show the speed limit box
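
For the zipping step, something like this should produce an importable archive; the `results/` path and archive name are placeholders (a sketch, assuming the results files sit directly under that directory):

```python
# Zip up the *contents* of a local results directory, so the files sit at
# the root of the archive (assuming that's what the importer expects).
import shutil

shutil.make_archive('appleton_wi_results', 'zip', root_dir='results/')
```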
Connects to #804