Open dt-woods opened 11 months ago
I think the bulk data file was only about 1 GB when we first developed this.
Originally my concerns were with the time it took to process the file, and I didn't see many ways to optimize that because of the way the data is stored. Breaking the file up seems like a reasonable interim plan. Is it possible to search for changes in data year and split it that way? Similarly, reading the file in chunks (rough sketch below).
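To make the chunking idea concrete, something like the following could stream the bulk file and write it back out in smaller pieces. The file name and piece size are placeholders, and I haven't checked how the bulk file is actually delimited:

```python
# Sketch only: stream the big text file line by line and write it out in
# fixed-size pieces, so nothing has to hold the whole file in memory.
from pathlib import Path

BULK_FILE = Path("EBA.txt")   # placeholder name for the bulk download
LINES_PER_PIECE = 500_000     # placeholder size; tune against real memory use

def split_bulk_file(src=BULK_FILE, lines_per_piece=LINES_PER_PIECE):
    piece, handle = 0, None
    with src.open("r", encoding="utf-8") as f:
        for i, line in enumerate(f):
            if i % lines_per_piece == 0:
                if handle:
                    handle.close()
                piece += 1
                handle = open(f"{src.stem}_part{piece:03d}.txt", "w",
                              encoding="utf-8")
            handle.write(line)
    if handle:
        handle.close()
    return piece  # number of pieces written
```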
In some future version where we use the EIA API, this problem likely goes away, and we can at least process JSON rather than plain text.
Please, please, please, let there be an API for that!
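For what it's worth, the EIA open data API does serve JSON. A rough sketch of what a query might look like, where the route and parameters are my guesses from the public API docs and nothing here is wired into ELCI:

```python
# Sketch only: pull hourly interchange data from EIA's v2 API instead of
# parsing the bulk text file. Route, parameters, and dates are assumptions.
import requests

API_KEY = "YOUR_EIA_API_KEY"  # placeholder; register at eia.gov/opendata
URL = "https://api.eia.gov/v2/electricity/rto/interchange-data/data/"

params = {
    "api_key": API_KEY,
    "frequency": "hourly",
    "data[0]": "value",
    "start": "2020-01-01T00",
    "end": "2020-01-02T00",
    "length": 5000,
}
resp = requests.get(URL, params=params, timeout=60)
resp.raise_for_status()
rows = resp.json()["response"]["data"]  # already structured, no text munging
```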
Now that I'm on to testing ELCI_3, I'm hitting more seg faults (and one bus fault), and it's giving me flashbacks to my early coding career when I used to do too much with globally passed variables. There are a lot of hints of that going on here: modules imported within the scope of a method, globals initialized in one place and used elsewhere, globals referenced in methods, globals being sliced and modified. All of that is a good recipe for unmanaged memory. The best advice I can give (and I'm not sure how much can be implemented given the scope) is to pass data explicitly as arguments and return values instead of leaning on module-level globals; a sketch of what I mean is below.
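Purely illustrative, with names I made up rather than anything taken from the ELCI code:

```python
# The commented-out block is the pattern that worries me; the function under
# it passes state explicitly so nothing lingers at module scope.

# exchange_frames = {}                  # module-level global
# def build_exchange_df():
#     import pandas as pd               # import buried inside the method
#     global exchange_frames
#     exchange_frames["ba"] = ...       # mutated as a side effect

import pandas as pd

def build_exchange_df(parsed_rows):
    """Take already-parsed rows and return a DataFrame; no globals touched."""
    df = pd.DataFrame(parsed_rows)
    return df  # the caller owns this and can delete it as soon as it's done
```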
Added new checks for bulk data vintage to trigger a new download with the latest data.
The latest runs of ELCI_1 trigger the `ba_io_trading_model` in `eia_io_trading.py`. There is a bulk data file, and during the call to `ba_exchange_to_df`, the memory demand spikes to >11 GB. I've hit Python segmentation faults during this, which was solved by restarting my computer and re-running. Seems worthy of a cautionary tale for users. I see a few instances of memory management where the massive lists of strings are deleted after processing.
In response, I started to parse out subroutines from the really long method. I'm not sure what else can be done given the sheer size of the bulk text file (>3 GB) and the fact that it's stored primarily as Python string objects. I might look into optimizing the data types when the text file is processed. An alternative may be to break the monster file into smaller files, process them individually, then put the results back together (sketch below).
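A sketch of that split-process-reassemble idea, with some dtype trimming thrown in. The column handling is generic and unprofiled, and it assumes the pieces are line-delimited JSON:

```python
# Sketch only: process each smaller piece, shrink its dtypes, then combine.
import gc
import pandas as pd

def process_piece(path):
    df = pd.read_json(path, lines=True)   # assumes line-delimited JSON pieces
    # downcast floats and turn repeated strings into categoricals
    for col in df.select_dtypes("float64"):
        df[col] = pd.to_numeric(df[col], downcast="float")
    for col in df.select_dtypes("object"):
        df[col] = df[col].astype("category")
    return df

def process_all(piece_paths):
    frames = []
    for p in piece_paths:
        frames.append(process_piece(p))
        gc.collect()   # nudge Python to free intermediates between pieces
    return pd.concat(frames, ignore_index=True)
```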