Currently the 80% memory-usage limit is hard-coded:
```python
# limit memory usage to 80%  # XXX make it faster by leaving this out
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.8, device)
```
Better:
- First: does the current approach actually work? Run an experiment (a quick check is sketched after this list).
- Update `training_config`/`test_config` to have an attribute for the memory percentage, probably under `gpu`.
- Pass this attribute to `set_per_process_memory_fraction` in `train_model.py` and `generate.py` (see the config sketch below).
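One way to check whether the cap is actually enforced, assuming a single visible CUDA device; the 90% probe allocation and the error handling are illustrative, not project code:

```python
import torch

# Experiment sketch: set the 80% cap, then try to allocate ~90% of the card.
# If the cap works, the oversized allocation should raise a CUDA OOM error.
device = torch.device("cuda:0")
torch.cuda.set_per_process_memory_fraction(0.8, device)

total_bytes = torch.cuda.get_device_properties(device).total_memory
try:
    probe = torch.empty(int(total_bytes * 0.9), dtype=torch.uint8, device=device)
    print("cap not enforced: allocation above the limit succeeded")
except RuntimeError:
    # CUDA OOM is raised as a RuntimeError (OutOfMemoryError subclasses it).
    print("cap enforced: allocation above the limit raised OOM")
```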
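And a minimal sketch of the proposed config plumbing, assuming the configs are dataclasses; `GpuConfig`, `memory_fraction`, and `apply_gpu_memory_cap` are placeholder names, not the actual attributes of `training_config`/`test_config`:

```python
from dataclasses import dataclass, field

import torch


@dataclass
class GpuConfig:
    # Fraction of GPU memory this process may use (0.0-1.0); replaces the hard-coded 0.8.
    memory_fraction: float = 0.8


@dataclass
class TrainingConfig:
    gpu: GpuConfig = field(default_factory=GpuConfig)


def apply_gpu_memory_cap(config: TrainingConfig, device: torch.device) -> None:
    # Read the fraction from the config instead of hard-coding it.
    if torch.cuda.is_available():
        torch.cuda.set_per_process_memory_fraction(config.gpu.memory_fraction, device)
```

`train_model.py` and `generate.py` would then call something like `apply_gpu_memory_cap(config, device)` once, before any large allocations, instead of repeating the hard-coded call.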