Closed dorimedini closed 5 years ago
@galshachaf @noamloya design question:
If we think in-memory models are going to murder our RAM, maybe we should abstract away the trained model
object? Wrap it in a context manager, something like:
```python
class ModelReader:
    def __init__(self, model_name):
        self.model_name = model_name
        self.model = None

    def __enter__(self):
        # Either train the model or load it from disk using self.model_name.
        # Either way, the model ends up in RAM.
        self.model = self._train_or_load(self.model_name)
        return self.model

    def __exit__(self, exc_type, exc_value, traceback):
        # Dropping the reference frees the RAM, provided nothing
        # else still references the model.
        self.model = None
```

(Note `__enter__` takes no extra arguments; the model name has to go through `__init__`.)
and then whenever we need to access a model we do:
```python
with ModelReader('mnist_fc3_vanilla') as model:
    print("Look at my {}".format(model))
```
Sounds good?
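On the "this frees up the RAM, right?" question: in CPython it does, but only once no other strong reference to the model remains. A minimal self-contained check using `weakref` (the `Model` class and the loading logic here are stand-ins, not the project's real API):

```python
import gc
import weakref


class Model:
    """Stand-in for a trained model object (hypothetical)."""
    pass


class ModelReader:
    def __init__(self, model_name):
        self.model_name = model_name
        self.model = None

    def __enter__(self):
        self.model = Model()  # stand-in for train-or-load-from-disk
        return self.model

    def __exit__(self, exc_type, exc_value, traceback):
        self.model = None  # drop our reference so the memory can be reclaimed


ref = None
with ModelReader('mnist_fc3_vanilla') as model:
    ref = weakref.ref(model)
    assert ref() is not None  # alive inside the block

# The `with ... as model` name still holds a reference after the block,
# so the RAM is only freed once that binding goes away too.
del model
gc.collect()
assert ref() is None  # collected: RAM reclaimed
```

So the pattern works, with the caveat that callers must not stash the yielded model anywhere that outlives the `with` block.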
Currently the `Baseline` class, during phase2, loads all model checkpoints to RAM before analysis. We should edit `ResourceManager`
to allow loading specific model checkpoints and then use it in `Baseline`
(possibly add methods to `ExperimentWithCheckpoints`
for this task).
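A rough sketch of what per-checkpoint loading could look like. Everything here is hypothetical: the method names `checkpoint_names` and `load_checkpoint`, the dict-based checkpoint format, and the directory layout are illustrations, not the real `ResourceManager`/`Baseline` APIs:

```python
class ResourceManager:
    """Sketch: load one checkpoint at a time instead of all up front."""

    def __init__(self, checkpoint_dir):
        self.checkpoint_dir = checkpoint_dir

    def checkpoint_names(self):
        # The real class would list checkpoint files under checkpoint_dir.
        return ['epoch_0', 'epoch_1', 'epoch_2']

    def load_checkpoint(self, name):
        # Placeholder for deserializing a single checkpoint from disk.
        return {'name': name, 'weights': ...}


# Phase-2-style analysis with only one checkpoint in RAM at a time:
manager = ResourceManager('checkpoints/mnist_fc3_vanilla')
results = []
for name in manager.checkpoint_names():
    model = manager.load_checkpoint(name)
    results.append(model['name'])  # stand-in for the real analysis step
    del model  # release this checkpoint before loading the next one
```

The point of the loop shape is that peak RAM stays at one checkpoint instead of all of them, which is what the `Baseline` change is after.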