NVIDIA / TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html
Apache License 2.0

Add auto-formatter #919

Closed ksivaman closed 3 weeks ago

ksivaman commented 3 weeks ago

Description

Lays out the initial infrastructure for auto-formatting and linting the codebase. The key challenge is finding configs for the linters and formatters that do not contradict each other while remaining useful and improving productivity. A follow-up PR will enable pre-commit.ci and include all the resulting formatting changes.


Changes

  1. Introduces integration with pre-commit.ci, which auto-formats the codebase on every PR.
  2. Uses clang-format for core library, C++, and CUDA files/extensions.
  3. Uses black for Python files.
  4. Changes the linting configs for cpplint and pylint to be compatible with the formatters.
  5. Moves some linting functionality (readability, whitespace, line lengths, etc.) from the linters to the formatters so that the two do not collide.
  6. Consolidates the pylint and cpplint config files to the top level.
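
A minimal `.pre-commit-config.yaml` along these lines could wire both formatters into pre-commit.ci; the hook versions and file patterns below are illustrative assumptions, not the exact configuration from this PR.

```yaml
# Hypothetical sketch: run black on Python files and clang-format on
# C++/CUDA files via pre-commit. Versions and patterns are assumptions.
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2
    hooks:
      - id: black
        files: \.py$
  - repo: https://github.com/pre-commit/mirrors-clang-format
    rev: v18.1.5
    hooks:
      - id: clang-format
        types_or: [c++, cuda]
```

With a config like this, `pre-commit run --all-files` applies both formatters locally, and pre-commit.ci can auto-commit the same fixes on each PR.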


ksivaman commented 3 weeks ago

pipeline 15815486