ybracke / transnormer

A lexical normalizer for historical spelling variants using a transformer architecture.
GNU General Public License v3.0

Turn limitation of memory usage into a configurable property #88

Open ybracke opened 7 months ago

ybracke commented 7 months ago

In `train_model.py` and `generate.py`, the 80% limit on GPU memory usage is currently hard-coded:

```python
# limit memory usage to 80%  # XXX make it faster by leaving this out
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.8, device)
```

Better:
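One way to make this configurable would be a command-line flag (or a field in the training config) that defaults to the current 0.8 and can be set to 1.0 to disable the limit. The flag name `--gpu-mem-fraction` below is hypothetical, just a sketch of the idea:

```python
import argparse


def fraction_in_unit_interval(value: str) -> float:
    """Argparse type: a float f with 0 < f <= 1."""
    f = float(value)
    if not 0.0 < f <= 1.0:
        raise argparse.ArgumentTypeError(
            f"memory fraction must be in (0, 1], got {f}"
        )
    return f


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    # Hypothetical flag; 0.8 preserves the current hard-coded behavior
    parser.add_argument(
        "--gpu-mem-fraction",
        type=fraction_in_unit_interval,
        default=0.8,
        help="fraction of GPU memory this process may use; 1.0 disables the limit",
    )
    return parser


# At startup, the existing snippet would then become (sketch):
#
#     args = build_parser().parse_args()
#     if torch.cuda.is_available() and args.gpu_mem_fraction < 1.0:
#         torch.cuda.set_per_process_memory_fraction(args.gpu_mem_fraction, device)
```

Skipping the `set_per_process_memory_fraction` call entirely when the fraction is 1.0 also addresses the `XXX make it faster by leaving this out` note in the current code.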