
Set Float32 Precision for CONV/RNN #6

Open zhuhaozhe opened 8 months ago

zhuhaozhe commented 8 months ago

RFC: Extend set fp32 precision API to support Convolution and RNN

Overview

This RFC proposes user-controlled frontend APIs to configure the internal precision of float32 operations in convolution (CONV) and recurrent neural network (RNN) operators within PyTorch. Currently, PyTorch offers torch.set_float32_matmul_precision to configure the internal precision of float32 matrix multiplication. This RFC extends that functionality to convolution and recurrent neural network operations by providing torch.set_float32_conv_precision and torch.set_float32_rnn_precision, which mimic the behavior of torch.set_float32_matmul_precision.

Frontend Changes

Frontend changes introduce four new APIs: torch.set_float32_conv_precision, torch.get_float32_conv_precision, torch.set_float32_rnn_precision, and torch.get_float32_rnn_precision.

These APIs will function similarly to torch.set_float32_matmul_precision and torch.get_float32_matmul_precision. Users can set the precision to highest, high, or medium, with backend behavior mirroring the matmul API: highest keeps internal computation in full float32, high allows backends to use TF32 (or a comparable reduced-precision format) internally, and medium additionally allows BF16 as the internal computation data type.
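A minimal usage sketch of the proposed frontend, assuming the API names land as described in this RFC (of these, only the matmul functions exist in PyTorch today):

import torch

# Existing API: controls the internal precision of float32 matmul.
torch.set_float32_matmul_precision("high")

# Proposed APIs (hypothetical, not yet in PyTorch); they mirror the matmul API.
torch.set_float32_conv_precision("high")    # allow TF32-like internal formats
torch.set_float32_rnn_precision("medium")   # additionally allow BF16 internally

assert torch.get_float32_conv_precision() == "high"
assert torch.get_float32_rnn_precision() == "medium"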

Backend Changes

Global flags float32_conv_precision and float32_rnn_precision will be introduced in PyTorch core, alongside the existing float32_matmul_precision flag. These flags can be read and modified through the frontend APIs torch.get/set_float32_conv/rnn_precision, and backend operators will read them to decide the internal computation data type. For example:
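For illustration, a Python-level sketch of how a backend kernel might consult the proposed flag to pick its internal compute type (the getter is the proposed API from this RFC; the dispatch logic is hypothetical):

import torch

def conv_internal_dtype():
    # torch.get_float32_conv_precision is the proposed API, not yet in PyTorch.
    precision = torch.get_float32_conv_precision()
    if precision == "medium":
        return "bf16"  # backend may compute in BF16 internally
    if precision == "high":
        return "tf32"  # backend may compute in TF32 (or similar) internally
    return "fp32"      # "highest": keep full float32 computation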

Flag Overrides

The existing CuDNN backend-specific flag torch.backends.cudnn.allow_tf32 will interact with the proposed backend-agnostic flags torch.set_float32_conv/rnn_precision. These flags will override each other, following the existing behavior between torch.backends.cuda.matmul.allow_tf32 and float32_matmul_precision: whichever flag is set last determines the effective precision.
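A sketch of the intended override semantics, assuming they mirror the existing interaction between torch.backends.cuda.matmul.allow_tf32 and torch.get_float32_matmul_precision (the conv functions below are proposed, not existing):

import torch

torch.set_float32_conv_precision("high")   # proposed API
# -> torch.backends.cudnn.allow_tf32 would be flipped to True

torch.backends.cudnn.allow_tf32 = False    # existing cuDNN flag
# -> torch.get_float32_conv_precision() would now return "highest"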

Additional CuDNN Flag

We discussed above how the existing CuDNN flag, torch.backends.cudnn.allow_tf32, interacts with torch.set_float32_conv/rnn_precision. However, we believe it is cleaner for CuDNN to use separate flags. We suggest deprecating torch.backends.cudnn.allow_tf32 in favor of torch.backends.cudnn.conv.allow_tf32 and torch.backends.cudnn.rnn.allow_tf32, so that the CuDNN backend-specific flags and the backend-agnostic flags have a one-to-one correspondence, just as torch.backends.cuda.matmul.allow_tf32 corresponds to torch.float32_matmul_precision:

torch.backends.cudnn.conv.allow_tf32 <-> torch.float32_conv_precision
torch.backends.cudnn.rnn.allow_tf32 <-> torch.float32_rnn_precision
# the matmul flag below already exists in PyTorch today
torch.backends.cuda.matmul.allow_tf32 <-> torch.float32_matmul_precision
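With the split flags, convolution and RNN could be configured independently; a short sketch (both flags below are proposed, not existing):

import torch

# Proposed per-operator cuDNN flags (hypothetical, not yet in PyTorch):
torch.backends.cudnn.conv.allow_tf32 = True   # TF32 for convolutions only
torch.backends.cudnn.rnn.allow_tf32 = False   # keep RNNs in full float32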

Motivation

Lower-precision computation in different backends can significantly improve performance for deep learning workloads with minimal impact on accuracy; examples include TF32 in CUDA/cuDNN and the implicit reduced-precision arithmetic feature in oneDNN. A user-controlled frontend API lets users configure the internal computation data type of convolution and recurrent neural network operators without knowing the details of each backend, so they can leverage the performance benefits of lower precision while keeping the precision loss acceptable. Compared to Autocast, the proposed flags change only the internal computation data type: inputs, outputs, and parameters remain float32, and no model code changes are required.

Pitch

Introduce float32_conv/rnn_precision and enable users to control the internal data type for convolutional and recurrent neural networks by configuring the value of float32_conv/rnn_precision.

leslie-fang-intel commented 8 months ago

When the precision is high, the CUDA/CUDNN backend will be allowed to use TF32 as the internal computation data type. When the precision is medium, the MKLDNN backend will be allowed to use BF16 as the internal computation data type.

Referring to https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch-set-float32-matmul-precision, should it apply to all the backends?

leslie-fang-intel commented 8 months ago

For the two design options in Frontend API and Inductor linear packable, do we have a preferred option now? If so, we can discuss our preference for the implementation.

zhuhaozhe commented 8 months ago

When the precision is high, the CUDA/CUDNN backend will be allowed to use TF32 as the internal computation data type. When the precision is medium, the MKLDNN backend will be allowed to use BF16 as the internal computation data type.

Referring to https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch-set-float32-matmul-precision, should it apply to all the backends?

Yes, I changed it to apply to all backends instead of only MKLDNN or CUDA.

zhuhaozhe commented 8 months ago

Thanks, changed.

jgong5 commented 8 months ago

Please add notes on how CUDA can support the new frontend APIs, since these are general APIs that apply to all backends.

zhuhaozhe commented 8 months ago

Please add notes on how CUDA can support the new frontend APIs, since these are general APIs that apply to all backends.

Thanks for the advice, added.