ibpsa / project1-boptest

Building Optimization Performance Tests

Track points utilized by test controllers #514

Open dhblum opened 1 year ago

dhblum commented 1 year ago

This issue is to implement a mechanism in BOPTEST to keep track of the input, measurement, and forecast points utilized by a test controller.

One open question is whether the list of variables for each of the inputs, measurements, and forecasts should be reported back from /kpis, or just the lengths of the lists, or something else entirely.

To Do:

javiarrobas commented 1 year ago

I suggest having the list of accessed input, measurement, and forecast variables returned by /kpis. I think it is important to return the list of variables, and not just the lengths of the lists, since some variables are less likely to be available in practice than others, e.g. it is clear that accessing a heat flow measurement is less likely than a temperature measurement. That notion is something that we want to make explicit when evaluating the controller.

dhblum commented 1 year ago

I agree, @JavierArroyoBastida, with returning the full list of points utilized, as it would be more informative.

dhblum commented 1 year ago

This issue should also remove the y return from /advance, which currently gives current state data for all variables. Users will instead need to specifically request data points using /results, so that this new feature can accurately track which points are being used.
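The server-side effect of that change could be sketched as follows: a /results handler that both returns the requested history and records which points were asked for. This is an illustrative sketch only; the names `tracked_outputs` and `handle_results` are hypothetical and not from the BOPTEST codebase.

```python
# Hypothetical sketch: a /results handler that records requested point names.
# Neither tracked_outputs nor handle_results is actual BOPTEST code.

tracked_outputs = set()

def handle_results(point_names, history):
    """Return stored history for the requested points, recording each name."""
    tracked_outputs.update(point_names)
    return {name: history.get(name, []) for name in point_names}

# Demo with fabricated history data for two points of the bestest_air case
history = {"zon_reaTRooAir_y": [293.15, 293.4], "fcu_reaPHea_y": [0.0, 120.5]}
payload = handle_results(["zon_reaTRooAir_y"], history)
print(sorted(tracked_outputs))  # ['zon_reaTRooAir_y']
```

Because tracking happens inside the query handler itself, any point a controller never requests simply never appears in the tracked set.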

SenHuang19 commented 1 year ago

I am a little concerned that changing this API moves BOPTEST more toward a control development environment rather than a control testing environment, as it begins to create differences.

dhblum commented 1 year ago

@SenHuang19 Can you elaborate on what you mean by "create differences?" The intent is to keep track of which sensors, control points, and forecast variables are used by test controllers to help evaluate their relative need for data and actuation authority.

SenHuang19 commented 1 year ago

Sorry, I didn't make myself clear. I understand the necessity of having this feature, and this feature itself doesn't change much. What I was commenting on is whether we should allow customized settings via the API in general. Strictly speaking, the testing period and even the sampling interval should be predefined, and control developers need to include the necessary changes inside their control sequence to accommodate those predefined parameters. This way, BOPTEST provides a solid foundation for control benchmarking.

dhblum commented 1 year ago

The testing period can be chosen using the /scenario API. The sampling interval of historic data is fixed within BOPTEST at 30 seconds. Generally, I think I agree with you, but I'm not clear on which settings you're referring to with "customized settings via API," if not those in this issue.

dhblum commented 1 year ago

An initial implementation that tracks the input, measurement, and forecast points used in a controller test and reports them in /kpis is here: https://github.com/ibpsa/project1-boptest/tree/issue514_variableTrack. An example call to /kpis for the bestest_air test case, after various API requests have already been made, looks like:

{
    "message": "Queried KPIs successfully.",
    "payload": {
        "cost_tot": 6.735242989734677e-05,
        "data_forecasts": [
            "TWetBul",
            "TDryBul",
            "relHum"
        ],
        "data_inputs": [
            "fcu_oveFan_u"
        ],
        "data_outputs": [
            "fcu_oveFan_u",
            "fcu_reaPHea_y",
            "zon_reaCO2RooAir_y"
        ],
        "emis_tot": 0.0029928990947886945,
        "ener_tot": 0.015336821141249136,
        "idis_tot": 0.0,
        "pdih_tot": null,
        "pele_tot": 0.00045063598356009064,
        "pgas_tot": 0.016124044385338487,
        "tdis_tot": 0.0,
        "time_rat": 0.001940555837419298
    },
    "status": 200
}

A couple things to note:

  1. Notice the use of "outputs" instead of "measurements". This is because if a test controller queries /results for data on an "input", that should be tracked differently than if the controller overwrites that "input" using /advance. Hence, in the proposed scheme, "outputs" refers to all points queried using /results, whether BOPTEST classifies them as "inputs" or "measurements", while "inputs" refers to all points overwritten when using /advance.
  2. A point is added to the "inputs" list only if the point is activated (e.g. calling $ curl -X POST localhost:5000/advance -d fcu_oveFan_u=0.5 -d fcu_oveFan_activate=0 will NOT add fcu_oveFan_u to the "inputs" list).
  3. A call to /initialize or /scenario with an argument for "time_period" will reset all point lists back to empty.
  4. The naming scheme "data_" above is not set in stone.
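The tracking rules above can be sketched in a few lines. This is an illustrative sketch only, not the implementation on the linked branch; the class and method names are hypothetical, and the `*_activate` pairing convention is inferred from the curl example above.

```python
# Minimal sketch of the tracking rules described above (illustrative names;
# not the actual BOPTEST implementation on the issue514_variableTrack branch).

class PointTracker:
    def __init__(self):
        self.data_inputs = set()
        self.data_outputs = set()

    def on_advance(self, overwrites):
        # Rule 2: count an overwritten input only if its *_activate flag
        # is set; an inactive overwrite is ignored.
        for name in overwrites:
            if name.endswith("_activate"):
                continue
            base = name[:-2] if name.endswith("_u") else name
            if overwrites.get(base + "_activate", 0):
                self.data_inputs.add(name)

    def on_results(self, point_names):
        # Rule 1: any point queried via /results is tracked as an "output".
        self.data_outputs.update(point_names)

    def reset(self):
        # Rule 3: /initialize, or /scenario with time_period, clears the lists.
        self.data_inputs.clear()
        self.data_outputs.clear()

t = PointTracker()
t.on_advance({"fcu_oveFan_u": 0.5, "fcu_oveFan_activate": 0})
print(sorted(t.data_inputs))  # [] -- fan overwrite was not activated
t.on_advance({"fcu_oveFan_u": 0.5, "fcu_oveFan_activate": 1})
t.on_results(["fcu_reaPHea_y"])
print(sorted(t.data_inputs), sorted(t.data_outputs))
```
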
dhblum commented 1 year ago

One thought on removing current state data from the /advance return so that point usage can be tracked accurately: removing it entirely will require a user to call the /results API to retrieve current state information. This adds an extra API call inside the control loop, which may slow down the test. One proposal is to add an optional argument to /advance called point_names, where a user can specify a list of points for which they would like the current values returned. This would be similar to the current behavior, but with only the point values specifically requested by the user returned instead of always all of them. It would also reduce the data transfer in the return, which might speed up tests a bit for large emulators.
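The proposed point_names filtering on the server side amounts to a dictionary subset. A minimal sketch, assuming a hypothetical helper name (`filter_advance_return`) and fabricated point values, since point_names is only a proposal at this stage:

```python
# Sketch of the proposed optional point_names argument to /advance:
# return only the requested current values instead of the full y dict.
# filter_advance_return is a hypothetical name, not BOPTEST code.

def filter_advance_return(y_full, point_names=None):
    """Subset the full current-state dict to the requested points."""
    if not point_names:
        return {}
    return {k: v for k, v in y_full.items() if k in point_names}

# Fabricated current-state data for illustration
y_full = {"time": 3600.0, "zon_reaTRooAir_y": 293.4, "fcu_reaPHea_y": 120.5}
print(filter_advance_return(y_full, ["zon_reaTRooAir_y"]))
```

A controller that needs only a zone temperature each step would then receive a one-entry payload per /advance call, and only that point would be added to the tracked "outputs" list.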

HWalnum commented 4 months ago

I think the point_names argument is a good solution.