ladybug-tools / honeybee-viewer

Web-based viewer to visualize the honeybee schema
https://www.ladybug.tools/honeybee-viewer/
MIT License

Recommended queries for data visualization #18

Open chriswmackey opened 4 years ago

chriswmackey commented 4 years ago

To give some guidance on the most useful things in the EnergyPlus SQL file, I should preface by saying that most tables in the file do not contain particularly useful information. In fact, all of the data that I currently import with the Grasshopper plugin comes from just ~4 tables:

I would recommend starting with the first one (ReportData), which contains all of the timeseries simulation results for each Room in the model. The SQL result file here has 7 Rooms/Zones, so you will find 7 lists of hourly data for each simulation output that was requested. In that particular sqlite file you are using for testing, I think I requested the following outputs:

You can see the SQL queries that I used to extract these lists of timeseries data in this Python code here. The output_name is one of the 4 strings above.
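If it helps to confirm for yourself which tables the file contains, here is a minimal sketch that lists every table, assuming Python's built-in sqlite3 module (the eplusout.sql path is just a placeholder for your copy of the test file):

```python
import sqlite3

# Path is a placeholder; point it at your copy of the test result file.
conn = sqlite3.connect('eplusout.sql')
cursor = conn.cursor()

# sqlite_master is SQLite's built-in catalog of every table in the file.
cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
print([row[0] for row in cursor.fetchall()])
conn.close()
```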

Steps to Get Timeseries Data

To get a matrix with 7 rows (one for each zone) and 168 columns (one for each hour of the simulation), follow these steps (a full worked sketch putting them together appears after the list):

  1. Run this query on the SQL file:

'SELECT * FROM ReportDataDictionary WHERE Name="Zone Ideal Loads Supply Air Total Heating Energy"' (note that any of the 4 outputs I listed above can be used instead of ideal loads heating)

  2. This will return a list of rows that include the indices of the data to be retrieved from the ReportData table. These indices are the first item in each of the returned rows, so you can get them with:

rel_indices = [row[0] for row in returned_rows]

  3. Then, you want to get the values of the data from the ReportData table. Run the following query:

'SELECT Value FROM ReportData WHERE ReportDataDictionaryIndex IN rel_indices' (note that rel_indices should be substituted with the list of indices you obtained in the previous step, written as a parenthesized, comma-separated list)

  4. This will return a single list with all of the timeseries data in it for all 7 zones. You need to partition it based on the number of rel_indices that you have, so you will need to run some code like this:

    n_lists = len(rel_indices)
    all_values = []
    for i in range(0, len(data), n_lists):
        all_values.append([val[0] for val in data[i:i + n_lists]])
    final_matrix = list(zip(*all_values))

    Note that the data variable is the list of rows returned from step 3, and that the last line is just transposing the matrix.
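
Putting the 4 steps together, here is a minimal end-to-end sketch using Python's built-in sqlite3 module. The eplusout.sql path is a placeholder, and the IN clause is built from ? placeholders rather than pasting the index list into the query string; everything else follows the steps above:

```python
import sqlite3

# Any of the 4 requested outputs can be used here.
output_name = 'Zone Ideal Loads Supply Air Total Heating Energy'

conn = sqlite3.connect('eplusout.sql')  # placeholder path
cursor = conn.cursor()

# Step 1: find the data dictionary rows for the requested output.
cursor.execute('SELECT * FROM ReportDataDictionary WHERE Name=?',
               (output_name,))
returned_rows = cursor.fetchall()

# Step 2: the first column of each row is the index into ReportData.
rel_indices = [row[0] for row in returned_rows]

# Step 3: pull the values for all of those indices in one query,
# with one "?" placeholder per index in the IN clause.
placeholders = ', '.join('?' for _ in rel_indices)
cursor.execute('SELECT Value FROM ReportData WHERE '
               'ReportDataDictionaryIndex IN ({})'.format(placeholders),
               rel_indices)
data = cursor.fetchall()
conn.close()

# Step 4: partition the flat stream into one row per timestep and
# transpose, so each zone ends up with its own row of hourly values.
n_lists = len(rel_indices)
all_values = []
for i in range(0, len(data), n_lists):
    all_values.append([val[0] for val in data[i:i + n_lists]])
final_matrix = list(zip(*all_values))
```

With the test file above, final_matrix should come out as 7 rows (one per zone) of 168 hourly values each.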

mostaphaRoudsari commented 4 years ago

Thank you @chriswmackey!

@theo-armour, I know you are having so much fun with the IDF viewer. Whenever you are done with that, this result-visualization step could be a very interesting one! 😀

theo-armour commented 4 years ago

@mostaphaRoudsari

I very much appreciate that the Spider SQL Viewer could be a very nice thing, and I look forward to updating the viewer in many ways. There are a few circumstances, though, that would greatly help.

I have googled for other examples, but currently the one and only test case I know about and have access to is this one:

https://github.com/ladybug-tools/honeybee-energy/tree/master/tests/result

Furthermore, I have no idea what the raw model should actually look like, let alone what any of the test result simulations should look like. In contrast, I have access to over 700 IDF models and dozens of gbXML models.

But I do understand that the Pollination tools will be able to generate hundreds of simulations. Can't wait!! Fingers crossed each one includes a representation of the model in a PNG.

mostaphaRoudsari commented 4 years ago

Hi @theo-armour, that's a catch-22 situation! Without a viewer, Pollination will not be able to generate the models with the results, but you need those results first to develop the viewer! 😀

An alternative solution is to use the Honeybee plugin for Grasshopper to automate the runs and results visualization, and to generate the images to help you with the process. That is possible, but it will take some time before Chris or I get the chance to generate those.