Closed eichblatt closed 2 years ago
This is going to involve a lot of work, but I think that without it, the memory footprint will be too large to fit on the 3A+.
I am thinking of caching the tape identifiers in this way:
```python
import json

# Cache only the tape identifiers, keyed by date, rather than the full tape objects.
sd = {k: [x.identifier for x in v] for k, v in m.archive.tape_dates.items()}
with open('/home/deadhead/deadstream/timemachine/metadata/georgeblood_tapes.json', 'w') as fp:
    json.dump(sd, fp)
```
Then I can load the id_cache instead of the full thing.
I will need a function which can use the identifier to look up the details and actually create a "GDTape" object.
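As a rough sketch of that idea (class and method names here are mine, not the project's API): keep only the date-to-identifier map in memory, and call a loader callback to build the full tape object the first time an identifier is actually needed.

```python
import json

class IdCache:
    """Minimal sketch of deferred tape loading, assuming the cached JSON
    maps date strings to lists of archive.org identifiers."""

    def __init__(self, cache_path, loader):
        with open(cache_path) as fp:
            self.tape_ids = json.load(fp)   # {date: [identifier, ...]}
        self.loader = loader                # identifier -> full tape object (e.g. GDTape)
        self._tapes = {}                    # lazily filled detail cache

    def tapes_on_date(self, date):
        # Only now do we pay the cost of building full tape objects.
        return [self.tape(i) for i in self.tape_ids.get(date, [])]

    def tape(self, identifier):
        # Look up the details once, then reuse the constructed object.
        if identifier not in self._tapes:
            self._tapes[identifier] = self.loader(identifier)
        return self._tapes[identifier]
```

The loader passed in would be whatever function fetches an identifier's metadata and constructs the GDTape; the cache itself stays agnostic about how that is done.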
I didn't do the deferred loading, but I did speed it up significantly.
Maybe the way to implement this is to have the archive take a year_range argument. The knobs can be configured to work over the period 1898-1960. When the knobs select a range, a new archive object can be created (destroying the old one) for the selected range.
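The filtering step for that could be as simple as the sketch below (the function name is mine, and it assumes the cached map uses "YYYY-MM-DD" date keys): keep only the dates whose year falls inside the selected range, then build the new archive from that subset.

```python
def identifiers_in_range(tape_ids, year_range):
    """Filter a cached {date: [identifier, ...]} map down to the dates
    whose year lies within year_range = (low, high), inclusive."""
    lo, hi = year_range
    return {date: ids for date, ids in tape_ids.items()
            if lo <= int(date[:4]) <= hi}
```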
Loading the entire 78 rpm collection takes about a minute. Deferred loading would be efficient, since most of the collection is never accessed.