pyswmm / Stormwater-Management-Model


Input API #2

Open · bemcdonnell opened this issue 8 years ago

bemcdonnell commented 8 years ago

I think it's time to dream up an Input API that will find its way into the formal distribution of SWMM. :)

@rkertesz @fmyers @michaeltryby

bemcdonnell commented 8 years ago

EPANET has a toolkit API containing some good starting point ideas.

https://github.com/OpenWaterAnalytics/EPANET/blob/master/src/toolkit.h Lines 243 and 244 reference the functions that "set" values for a node and a link. This pattern could be expanded in SWMM to adjust hydrology and groundwater parameters.
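For a concrete picture, here is a minimal header-style sketch of what analogous SWMM setters might look like, assuming the EPANET naming pattern carries over; none of these function names or parameter codes exist in any released SWMM API:

```c
/* Hypothetical SWMM setters modeled on EPANET's ENsetnodevalue /
 * ENsetlinkvalue pattern. Names and parameter codes are illustrative. */

/* Set a scalar property on a node, identified by array index. */
int swmm_setNodeValue(int nodeIndex, int paramCode, double value);

/* Set a scalar property on a link. */
int swmm_setLinkValue(int linkIndex, int paramCode, double value);

/* Hydrology could follow the same pattern, e.g. for subcatchments. */
int swmm_setSubcatchValue(int subcatchIndex, int paramCode, double value);
```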

samhatchett commented 8 years ago

You and I must be drinking from the same well. I've been thinking about this recently too. Would be nice to brainstorm current/future needs for both pieces of software, maybe nail down a common set of conventions?

bemcdonnell commented 8 years ago

@samhatchett Good timing!

So the main SWMM Input API use case I foresee is calibration - mostly oriented toward planning. However, there has been talk up here regarding parameter manipulation during simulation time (@rkertesz and @lmontest have some thoughts on this).

samhatchett commented 8 years ago

the ultimate goal on the EPANET side is to have its code split up into three different modules:

- the hydraulic analysis engine
- project definition I/O (the network data model)
- storage/retrieval of simulation output data

of course there's a fair amount of work to do. I'm working on scraping together some thoughts on structuring the API for creating/altering network topology - I'd be interested in your thoughts once I have a draft.

rkertesz commented 8 years ago

I'll let @lmontest chime in, but while we can currently manipulate anything that is controlled in the control section (like orifices), what would be nice is intra-simulation manipulation of things that aren't exposed right now, such as a property of a pipe. This could help not only with calibration and tuning but also with controlling for variables like sediment during a long-term simulation.

bemcdonnell commented 8 years ago

@samhatchett, could you elaborate on your goals for the "Project Definition I/O" point?

samhatchett commented 8 years ago

sure, what I mean is pretty simple. for maintainability and extensibility, we need to start pulling the code apart into logical units. the core of that outcome is going to be a streamlined hydraulic engine and associated structs (what I referred to as the "project definition"). you could also call it the "business logic / data model" unit.

we will need to provide an API into that data model, one capable of programmatically constructing/altering a hydraulic model and its associated parameters. Everything that is now done through the specification in the inp file format should eventually be handled by the Project API, and we can remove the dependence on the text file. It should not matter where or how the network is defined - text, json, sql, binary, protobuf...

Similarly for storage/retrieval of simulation output data - there's really no reason a hydraulic analysis engine ever needs to use fopen. this is just a basic level of the separation of concerns.
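As a rough illustration of that separation, here is a sketch of a Project API that builds a model entirely in memory, with no inp file involved; every name below is invented for illustration, not an existing interface:

```c
/* Hypothetical Project API: programmatic construction of the data model. */
typedef struct Project Project;   /* opaque handle to the data model */

Project *project_create(void);
int      project_addNode(Project *p, const char *id, double invertElev);
int      project_addLink(Project *p, const char *id,
                         const char *fromNode, const char *toNode);
int      project_validate(Project *p);   /* consistency check before a run */
void     project_delete(Project *p);
```

A caller could then assemble a network in code - project_create(), a series of project_addNode()/project_addLink() calls, then project_validate() - regardless of whether the definition originated as text, json, or sql.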

lmontest commented 8 years ago

As it relates to the work we engage in regularly, we find the need to manipulate an asset based on the status of the system - ideally, designing RTC algorithms that can "measure" the state of the system (flows, depths, capacities, even water quality) and adjust setpoints for pumps, orifices, weirs, etc. This means that the control rules must be able to interact with the model during simulation time. The ideal situation would allow an external program to receive the hydraulic status of select assets periodically and adjust setpoints at such points in time.

To that end, and in our own effort to enable such applications, we have modified SWMM to halt simulations periodically, populate an output database table with hydraulic states, allow time for a program to consume that data and generate appropriate setpoints, put them in an input database table, let SWMM consume the new setpoints, and resume the simulation. The choice of a database for the interaction with SWMM was driven by the ease of modifying the code and of enabling a variety of programs to interact with the data through ODBC connectors. We have demonstrated this by building a controller in Matlab. An even nicer application would be to interface SWMM directly with a PLC.

I think that porting similar functionality to a DLL API should be straightforward. The main idea remains to break SWMM out of being only a modeling tool and make it available to applications such as operator training modules, real-time scenario analysis, and real-time control. Cheers!
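The halt-and-adjust loop described here maps naturally onto SWMM5's existing stepwise entry points (swmm_open, swmm_start, swmm_step, swmm_end, swmm_close from swmm5.h). A minimal sketch, with hypothetical read_depth/set_pump_setting helpers standing in for the database exchange or a future getter/setter API:

```c
#include "swmm5.h"

/* Hypothetical stand-ins for the state exchange; not part of SWMM5. */
double read_depth(const char *nodeId);
void   set_pump_setting(const char *pumpId, double setting);

void run_with_control(void)
{
    double elapsed = 0.0;

    swmm_open("model.inp", "model.rpt", "model.out");
    swmm_start(1);                       /* 1 = save results */

    do {
        swmm_step(&elapsed);             /* advance one routing step;
                                            elapsed drops to 0 at the end */

        /* After each step, read system state and adjust a setpoint. */
        if (read_depth("STORAGE1") > 2.5)        /* hypothetical IDs */
            set_pump_setting("PUMP1", 1.0);
        else
            set_pump_setting("PUMP1", 0.0);
    } while (elapsed > 0.0);

    swmm_end();
    swmm_close();
}
```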

bemcdonnell commented 6 years ago

@lrntct, I must have removed that at some point. But truth be told, I doubt any of the ideas I had then were worthwhile :-)

Here’s some stuff on the api. https://github.com/OpenWaterAnalytics/Stormwater-Management-Model/wiki/Enhanced-API

I think we should have a roadmapping meeting to talk about the future of the data model and API.

lrntct commented 6 years ago

One great addition would be the ability to add or remove objects from the simulation. As of now, the objects are stored in arrays of structs. My basic C knowledge tells me that resizing an array is not a very efficient operation, so it might not be ideal to add objects one by one. One possibility would be to change the data structure and maybe use one from C++ (std::vector?), but that would be a significant change. Any thoughts on what could be possible in pure C?

bemcdonnell commented 6 years ago

> My basic C knowledge tells me that resizing an array is not a very efficient operation, so it might not be ideal to add objects one by one.

@lrntct, I'm in agreement. I've read up on a few strategies to accomplish what we're trying to do. I just read one thread that essentially does a realloc, but instead of adding a single item at a time, it adds a buffer of x items at once.

https://stackoverflow.com/questions/12917727/resizing-an-array-in-c

> realloc is a relatively expensive call, so you (generally) don't want to extend your buffer one element at a time like I did here. A more common strategy is to pick an initial starting size that covers most cases, and if you need to extend the buffer, double its size.

If we can pre-process an entire network before starting a simulation (on which I imagine we are all in agreement), we could add and remove items, then do a final sweep before running validation.
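A minimal sketch of that doubling strategy in C (TNode here is a placeholder, not SWMM's actual node struct):

```c
#include <stdlib.h>

typedef struct { int id; double invertElev; } TNode;   /* placeholder */

typedef struct {
    TNode  *items;
    size_t  count;      /* elements in use */
    size_t  capacity;   /* elements allocated */
} NodeArray;

/* Append one node, doubling the buffer only when it is full, so the
 * cost of realloc is amortized across many insertions. */
int nodearray_push(NodeArray *a, TNode n)
{
    if (a->count == a->capacity) {
        size_t newCap = a->capacity ? a->capacity * 2 : 16;
        TNode *p = realloc(a->items, newCap * sizeof *p);
        if (!p) return -1;              /* old buffer is still intact */
        a->items = p;
        a->capacity = newCap;
    }
    a->items[a->count++] = n;
    return 0;
}
```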

> One possibility would be to change the data structure and maybe use one from C++ (std::vector?), but that would be a significant change

I also agree that this would be an important change.

The catch to this whole thing is going to be managing the connections. Links currently store the array indexes of their upstream and downstream nodes, so we are going to need a strategy to make sure the indices still work out as things change.

@michaeltryby has been recommending the project incorporate a transactional data model. He can explain his ideas better than I can.
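One possible shape for the index-stability piece - purely illustrative, not how SWMM stores its data today - is for links to hold permanent handles while a lookup table maps each handle to the node's current array position:

```c
#include <stddef.h>

typedef int NodeHandle;     /* issued once, never reused */

typedef struct {
    int    *pos;    /* pos[handle] = current array index, or -1 if deleted */
    size_t  size;
} HandleTable;

/* Links reference handles instead of raw array indexes, so compacting
 * the node array after a deletion only requires updating the table. */
typedef struct {
    NodeHandle fromNode;
    NodeHandle toNode;
} TLink;

int node_index(const HandleTable *t, NodeHandle h)
{
    return (h >= 0 && (size_t)h < t->size) ? t->pos[h] : -1;
}
```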

samhatchett commented 6 years ago

we've done some similar work on EPANET -> https://github.com/OpenWaterAnalytics/EPANET/issues/43 and https://github.com/OpenWaterAnalytics/EPANET/pull/88

In basic form, we do use a realloc - which may end up being inefficient - but the primary purpose of the addition was to expand what's possible with the toolkit. Once we build and profile that capability, we can optimize. And since it was done outside the "customary" procedure of opening an inp file, it doesn't affect established use cases performance-wise.

A transactional model may make plenty of sense, but smells more like an extension than something to build into the engine API.

abhiramm7 commented 2 years ago

@bemcdonnell do we want to keep this ticket open?