natcap / urban-online-workflow

This repository hosts the beta implementation of the Urban Online ES Workflow. The project is intended to give urban planners the ability to create and assess scenarios using InVEST Urban models.

Run Carbon & Urban Cooling with user-data #86

Closed · davemfish closed this 1 year ago

davemfish commented 1 year ago

This PR sets up runs of Carbon & Urban Cooling models using user-created LULC scenarios and bioregion-dependent parameter values.

I added sets of biophysical tables to this repo. These have parameters that vary by bioregion. Other parameters will need to vary by specific location (e.g. urban heat index, reference temperature); these values are hardcoded for now, but can be populated at runtime. The args dictionaries for model runs are created dynamically to handle these cases and others, like creating an AOI vector.
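
To give a sense of the shape of that args setup, here is a rough sketch (the directory layout, table naming scheme, and hardcoded values below are placeholders for illustration, not the exact ones in this PR):

```python
import os

# Per-bioregion biophysical tables tracked in this repo (hypothetical layout).
BIOPHYSICAL_TABLE_DIR = 'appdata/biophysical_tables'

def build_urban_cooling_args(scenario_lulc_path, bioregion_id, workspace_dir):
    """Assemble an args dict for an Urban Cooling run on a user scenario."""
    return {
        'workspace_dir': workspace_dir,
        'lulc_raster_path': scenario_lulc_path,
        # Table selected by the scenario's bioregion.
        'biophysical_table_path': os.path.join(
            BIOPHYSICAL_TABLE_DIR, f'urban_cooling_{bioregion_id}.csv'),
        # Location-specific values, hardcoded for now; to be looked up at runtime later.
        't_ref': 21.5,
        'uhi_max': 3.5,
        # AOI vector created on the fly from the scenario's study area.
        'aoi_vector_path': os.path.join(workspace_dir, 'aoi.gpkg'),
    }
```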

@dcdenu4 let me know if you think the tables should live somewhere else besides this repo. It seems nice to have them under version control, but we also have other input data to deal with that are not text files. For now I've listed those datasets in the README and put copies of them in our project bucket.

dcdenu4 commented 1 year ago

Hey @davemfish, this feels like a good step in the right direction. I think having the biophysical tables in the repo, broken out up front into their respective bioregions, is a clean approach that lets us track changes to those tables and avoids any dynamic computation on startup of the server / worker. Having said that, something is bugging me about having so many CSVs tracked like this, but I can't put my finger on it yet. My initial instinct was to essentially run the script you added for generating the CSVs as a startup step for the worker, and cache those CSVs. That way we'd just have the single biophysical tables tracked and wouldn't have to worry about running that script manually; it'd just be done dynamically on app start. But that just shifts responsibility slightly and is maybe more complicated... ramble over.
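
Roughly what I'm picturing for that startup step, just to illustrate (column and file names here are made up):

```python
import os
import pandas as pd

def cache_bioregion_tables(composite_csv, cache_dir):
    """Split one tracked composite table into per-bioregion CSVs on worker start."""
    os.makedirs(cache_dir, exist_ok=True)
    table = pd.read_csv(composite_csv)
    for bioregion_id, group in table.groupby('bioregion'):
        out_path = os.path.join(cache_dir, f'biophysical_{bioregion_id}.csv')
        if not os.path.exists(out_path):
            # Drop the grouping column so each cached CSV matches the model's expected schema.
            group.drop(columns=['bioregion']).to_csv(out_path, index=False)
```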

So I think this is a good approach and like breaking out a separate module from the worker to handle args setup.

davemfish commented 1 year ago

I generally share your instincts about having so many files in the repo. I'm open to tracking the larger, composite tables instead. But like you said, then we would have to run code to create the filtered table(s), either on startup or just prior to the invest run. As-is, we won't really have to run that script or think about the hundreds of files again. It's possible more choices will arise as we set up & parameterize more models, so let's stay open-minded about changing things, but maybe just leave it as-is for now.