Closed — DavidUdell closed this 5 months ago
Before I have anyone else using this, I'll need this done. But until then, I can just eat the cost of renting a good chunk of memory.
There are other efficiency improvements possible that I'll take care of along with this. For example, THRESHOLD comparisons should take place right after each ablation, breaking out of that loop as soon as THRESHOLD isn't met. Worst-case time complexity will remain the same, but practical runtime should improve by some significant multiple.
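A minimal sketch of the early-exit idea, with purely illustrative names (`run_ablation`, the threshold value, and the result shapes are all stand-ins, not the repo's actual API): the THRESHOLD check happens immediately after each ablation, so sub-threshold results are skipped before any further processing.

```python
THRESHOLD = 0.1  # illustrative effect-size cutoff, not the real value


def run_ablation(index: int) -> float:
    """Stand-in for a real ablation; returns a mock effect magnitude."""
    return 1.0 / (index + 1)


def sweep(num_ablations: int) -> list[tuple[int, float]]:
    """Run ablations, bailing out of each iteration as soon as the
    THRESHOLD comparison fails, instead of filtering after the fact."""
    kept = []
    for i in range(num_ablations):
        effect = run_ablation(i)
        if abs(effect) < THRESHOLD:
            # Effect too small: skip the rest of this iteration right away
            # rather than carrying the result through further processing.
            continue
        kept.append((i, effect))
    return kept
```

Asymptotically nothing changes (every ablation still runs once), but all the downstream per-ablation work after the check is skipped for sub-threshold results.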
Because I currently collect all data upfront, before processing it into a graph, memory complexity in `cognition_graph.py` is $O(n^2)$. It could be $O(n)$, without changing runtime, if I refactored that script to boil each individual ablation down into a .dot file entry before looping back through the remaining ablations. There's no special reason to do all the ablations up front, apart from design simplicity.