mberr / torch-max-mem

Decorators for maximizing memory utilization with PyTorch & CUDA
https://torch-max-mem.readthedocs.io/en/latest/
MIT License

Multi-level optimization #10

Closed mberr closed 1 year ago

mberr commented 1 year ago

Extend the decorators to accept multiple parameters to be optimized.

They will be optimized in sequence, i.e. the first parameter will be reduced while OOM errors are encountered. Only when this parameter reaches 1 will the next parameter be reduced, and so on.
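This sequential reduction strategy can be sketched as a small decorator. Note this is a hypothetical illustration, not the actual `torch_max_mem` API: the decorator name, the returned `(result, values)` tuple, and the halving schedule are all assumptions made for the sketch, and `MemoryError` stands in for a CUDA OOM error.

```python
import functools


def maximize_memory_utilization(*param_names):
    """Hypothetical sketch of multi-level optimization (not the real API).

    Retries the wrapped function on MemoryError, halving the first listed
    keyword parameter until it reaches 1 before starting on the next one.
    """

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            values = [kwargs[name] for name in param_names]
            level = 0  # index of the parameter currently being reduced
            while True:
                try:
                    result = func(*args, **{**kwargs, **dict(zip(param_names, values))})
                    return result, tuple(values)
                except MemoryError:
                    # skip parameters that are already at their minimum of 1
                    while level < len(values) and values[level] <= 1:
                        level += 1
                    if level >= len(values):
                        raise  # nothing left to reduce
                    values[level] = max(values[level] // 2, 1)

        return wrapper

    return decorator


# Toy usage: the function "OOMs" whenever the product exceeds a budget,
# standing in for a CUDA out-of-memory error.
@maximize_memory_utilization("batch_size", "slice_size")
def compute(batch_size, slice_size):
    if batch_size * slice_size > 3:
        raise MemoryError
    return batch_size * slice_size
```

Here `compute(batch_size=16, slice_size=4)` first halves `batch_size` down to 1; only then does the decorator start halving `slice_size`, succeeding at `(1, 2)`.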

For a use case, see https://github.com/pykeen/pykeen/pull/1261.

The PR also includes some repo cleanup necessary to get the pipeline working again.