NVIDIA / TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit floating point (FP8) precision on Hopper and Ada GPUs, providing better performance with lower memory utilization in both training and inference.
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html
Apache License 2.0

Replace functools cache with lru_cache #967

Closed · timmoon10 closed 3 months ago

timmoon10 commented 3 months ago

Description

functools.cache was added in Python 3.9, so it is unavailable on older Python versions and modules that use it fail there (see https://github.com/NVIDIA/TransformerEngine/issues/958). The fix is to replace it with functools.lru_cache(maxsize=None), which is equivalent: in CPython, functools.cache is defined as exactly that.
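For context, a minimal sketch of the swap, using a hypothetical cached helper (the function name and body below are illustrative, not the code touched by this PR). Since functools.cache is just an alias for lru_cache(maxsize=None), the replacement preserves behavior while restoring compatibility with Python 3.8:

```python
import functools

# Before (fails on Python < 3.9, where functools.cache does not exist):
#
#   @functools.cache
#   def get_device_compute_capability(device_id: int) -> int:
#       ...

# After (works on Python 3.8 as well; identical caching behavior,
# since maxsize=None means the cache is unbounded, like functools.cache):
@functools.lru_cache(maxsize=None)
def get_device_compute_capability(device_id: int) -> int:
    """Memoize a per-device property (illustrative stub, not TE code)."""
    return device_id  # placeholder for an expensive device query
```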

I've also made some minor stylistic changes in the affected code.


timmoon10 commented 3 months ago

/te-ci