This repository hosts the beta implementation of the Urban Online ES Workflow. The project is intended to give urban planners the ability to create and assess scenarios using InVEST Urban models.
Iteration for running InVEST models and establishing endpoints #83
This PR is a stepping stone toward the backend structure that will request, run, and digest InVEST models and results. There will likely be many iterations of this, but here are the big beats this PR introduces:

- `worker.py` now runs each model as a separate TaskGraph task. Once a model completes, the worker generates a random number between 1 and 10 to return to the server API.
- `server/sql_app/main.py` handles the `PriorityQueue` comparison issue by adding a third element to the tuple that acts as a tie-breaker. The first element is one of three priority levels, the second (new) element is the `job_id`, which should be unique, and the third element is the task dict.
- New `InvestResult` table in `server/sql_app/models.py`, with matching updates to `schema.py` and `crud.py`. The only result stored is the random number returned by `worker.py`. We already discussed how this might change by storing results in a CSV instead.
- Models currently use InVEST sample data with an altered datastack JSON file. `worker.py` expects the data to live in `appdata/invest-sample-data/[model-name]/`. I've uploaded a zip to Google Drive that should make setup easy. We'll handle this via the container / cloud in the future.
- The frontend endpoint to run InVEST models is `/invest/{scenario_id}`, and the endpoint to get the result (currently of just one InVEST model run, which again will likely change) is `/invest/result/{job_id}/{scenario_id}`.
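To make the worker's flow concrete, here is a minimal sketch of the run-then-report pattern. The function names (`execute_model`, `run_job`) are hypothetical stand-ins, not the actual names in `worker.py`, and the InVEST call is stubbed out; in the real worker each run is wrapped in its own TaskGraph task rather than called directly.

```python
import random


def execute_model(model_name, datastack_path):
    """Placeholder for invoking an InVEST model via natcap.invest.

    In worker.py this work is submitted as a separate TaskGraph task
    so runs can be cached and parallelized.
    """
    pass


def run_job(model_name, datastack_path):
    """Run one model and report a placeholder result to the server API."""
    execute_model(model_name, datastack_path)
    # The worker currently returns a random integer in [1, 10]
    # instead of real model output.
    return random.randint(1, 10)


result = run_job(
    "urban_cooling",
    "appdata/invest-sample-data/urban_cooling/datastack.json")
```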
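The tie-breaker fix matters because `PriorityQueue` compares whole tuples: when two jobs share a priority, Python falls through to comparing the dict tasks, which raises `TypeError`. A unique `job_id` in the middle position settles ties before the dict is ever compared. A small sketch (job payloads here are made up for illustration):

```python
import queue

q = queue.PriorityQueue()

# Three priority levels; this numeric encoding is an assumption.
HIGH, MEDIUM, LOW = 0, 1, 2

# (priority, job_id, task): job_id is unique, so comparison
# never reaches the un-orderable dict in the third slot.
q.put((MEDIUM, 2, {"model": "urban_cooling"}))
q.put((HIGH, 3, {"model": "carbon"}))
q.put((MEDIUM, 1, {"model": "urban_nature_access"}))

order = [q.get() for _ in range(q.qsize())]
# HIGH job first, then the tied MEDIUM jobs in job_id order
```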
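For reviewers unfamiliar with the SQLAlchemy side, a table like `InvestResult` might look roughly like the following. The column names are assumptions for illustration, not the actual schema in `server/sql_app/models.py`, and the in-memory SQLite engine is just for the sketch.

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class InvestResult(Base):
    """Hypothetical shape of the new results table."""
    __tablename__ = "invest_results"

    id = Column(Integer, primary_key=True, index=True)
    job_id = Column(Integer, index=True)       # ties the row to a queued job
    scenario_id = Column(Integer, index=True)  # scenario the model ran against
    result = Column(Integer)                   # the placeholder random number


engine = create_engine("sqlite://")  # in-memory DB for this sketch
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add(InvestResult(job_id=1, scenario_id=7, result=4))
    session.commit()
    stored = session.query(InvestResult).filter_by(job_id=1).one()
```

Storing one integer per row is trivially replaced later if results move to a CSV, since only `crud.py` would need to change its read/write path.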
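The two frontend-facing routes could be wired up along these lines. The handler bodies and helper names (`enqueue_invest_job`, the returned dict shapes) are hypothetical placeholders, not the actual code in `server/sql_app/main.py`; only the URL patterns come from this PR.

```python
from fastapi import FastAPI

app = FastAPI()


def enqueue_invest_job(scenario_id):
    # Placeholder: the real server pushes (priority, job_id, task)
    # onto the PriorityQueue and hands the job to worker.py.
    return {"job_id": 1, "scenario_id": scenario_id}


@app.post("/invest/{scenario_id}")
def run_invest(scenario_id: int):
    """Kick off InVEST model runs for a scenario."""
    return enqueue_invest_job(scenario_id)


@app.get("/invest/result/{job_id}/{scenario_id}")
def get_invest_result(job_id: int, scenario_id: int):
    """Fetch the stored result for one model run (shape will change)."""
    # Placeholder: the real handler would read the InvestResult row
    # via crud.py.
    return {"job_id": job_id, "scenario_id": scenario_id, "result": None}
```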