NREL / ComStock

National-scale modeling of the U.S. commercial building stock, supported by U.S. DOE, LADWP, and others, and maintained by NREL

Optimize the memory footprint during postprocessing #149

Open wenyikuang opened 8 months ago

wenyikuang commented 8 months ago

Why?

Right now it takes more than 200 GB of memory to run the sightglass postprocessing in generate_metadata, which makes it fragile and painful to run. In the long term the data size will grow in O(n), so loading the whole dataset into memory and then editing it will not scale — and it is not necessary.

How? Probably by using the lazy loading offered by polars. The likely steps: prune the logic down to an MVP, rewrite the indexing/loading logic, then add the features back. A rough sketch of the lazy-loading idea is included below.

Restrictions:

When: before the next release

Target: hopefully by the next release we can finish the postprocessing on a normal PC (~100 GB of RAM)
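A minimal sketch of what the polars lazy-loading approach could look like, assuming parquet result files; the file paths, column names, and aggregation are hypothetical placeholders, not the actual generate_metadata logic:

```python
import polars as pl

# scan_parquet builds a lazy query plan instead of reading the files into RAM
lazy_results = pl.scan_parquet("results_up*.parquet")  # hypothetical path pattern

metadata = (
    lazy_results
    # column selection and filters are pushed into the scan,
    # so only the needed data is ever read from disk
    .select(["building_id", "upgrade", "out.electricity.total.energy_consumption"])  # hypothetical columns
    .filter(pl.col("upgrade") == 1)
    .group_by("building_id")
    .agg(pl.col("out.electricity.total.energy_consumption").sum())
)

# streaming execution processes the plan in chunks rather than materializing
# the whole dataset, keeping peak memory well below the total file size
df = metadata.collect(streaming=True)

# alternatively, write straight to disk without collecting the result in memory:
# metadata.sink_parquet("metadata_up01.parquet")  # hypothetical output path
```

The key change versus the current approach is that filters, column pruning, and aggregations are declared up front and executed lazily, so memory use tracks the working set of the query rather than the full size of the results.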

asparke2 commented 7 months ago

Ideally, if each upgrade can be processed sequentially, the job should use < 32 GB of RAM per upgrade and therefore be runnable on any team member's machine.
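A minimal sketch of that sequential per-upgrade pattern, assuming one result file per upgrade; the upgrade count, file layout, and per-upgrade transformations are hypothetical placeholders:

```python
import polars as pl

UPGRADE_IDS = range(0, 11)  # hypothetical number of upgrades

for upgrade_id in UPGRADE_IDS:
    # scan only this upgrade's results; earlier upgrades are already written out
    lazy = pl.scan_parquet(f"results_up{upgrade_id:02d}.parquet")  # hypothetical layout

    (
        lazy
        # ... per-upgrade metadata transformations would go here ...
        .sink_parquet(f"metadata_up{upgrade_id:02d}.parquet")  # hypothetical output path
    )
    # the LazyFrame and its buffers go out of scope before the next iteration,
    # so peak memory is roughly one upgrade's working set (the < 32 GB target)
```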