NVIDIA / TransformerEngine

A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/index.html
Apache License 2.0

[Paddle] Add deterministic option in DotProductAttention #956

Open Wong4j opened 1 week ago

Wong4j commented 1 week ago

Description

Customers need an option to enable deterministic attention. Their usual practice is to set FLAGS_cudnn_deterministic=1, so with this change, when a user sets FLAGS_cudnn_deterministic=1, workspace optimization is enabled (which makes the fused attention backward pass deterministic).
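The flag-reading behavior described above can be sketched as follows. This is a minimal illustration, not TransformerEngine's actual implementation: the class here is hypothetical, and only the determinism-flag logic from the discussion is shown.

```python
import os


class DotProductAttentionSketch:
    """Hypothetical sketch of the determinism option discussed in this PR."""

    def __init__(self):
        # Paddle users conventionally request determinism by setting
        # FLAGS_cudnn_deterministic=1; treat any nonzero value as "on".
        self.deterministic = bool(int(os.getenv("FLAGS_cudnn_deterministic", "0")))

        # When determinism is requested, force the workspace-optimization
        # path (the PR ties this to NVTE_FUSED_ATTN_FORCE_WORKSPACE_OPT,
        # the env var name used on the PyTorch side).
        if self.deterministic:
            os.environ.setdefault("NVTE_FUSED_ATTN_FORCE_WORKSPACE_OPT", "1")
```

Under this sketch, setting FLAGS_cudnn_deterministic=1 before constructing the layer both marks it deterministic and forces the workspace optimization.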

Fixes # (issue)


cyanguwa commented 1 week ago

Hi @Wong4j , could you use the same environment variable name as in PyTorch please? NVTE_FUSED_ATTN_FORCE_WORKSPACE_OPT. Just so we are more consistent across TE.

https://github.com/NVIDIA/TransformerEngine/blob/6ee92c4bcfdc78bd0c3c29bbbaf3c02d1fcacd51/transformer_engine/pytorch/attention.py#L4053-L4067

Thanks.

Wong4j commented 1 week ago

@cyanguwa The environment variable names are already consistent; I copied this part from PyTorch. The only difference is that in Paddle, self.deterministic is enabled via self.deterministic = bool(int(os.getenv("FLAGS_cudnn_deterministic", "0"))), which is the way customers are used to doing it, whereas in PyTorch it is self.deterministic = (not bool(int(os.getenv("NVTE_ALLOW_NONDETERMINISTIC_ALGO", "1"))) or torch.are_deterministic_algorithms_enabled()).
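The difference between the two conventions quoted above can be isolated into two small helpers. This is an illustrative sketch only; the function names are hypothetical, and the PyTorch-side torch.are_deterministic_algorithms_enabled() check is passed in as a plain boolean to keep the example torch-free.

```python
import os


def paddle_deterministic() -> bool:
    # Paddle convention: opt IN to determinism; default is nondeterministic.
    return bool(int(os.getenv("FLAGS_cudnn_deterministic", "0")))


def pytorch_deterministic(torch_flag: bool = False) -> bool:
    # PyTorch convention: nondeterministic algorithms are ALLOWED by default
    # (NVTE_ALLOW_NONDETERMINISTIC_ALGO=1); setting it to 0 opts out.
    # torch_flag stands in for torch.are_deterministic_algorithms_enabled().
    return (not bool(int(os.getenv("NVTE_ALLOW_NONDETERMINISTIC_ALGO", "1")))
            or torch_flag)
```

Both paths end up with the same self.deterministic semantics; only the direction of the opt-in differs (Paddle opts into determinism, PyTorch opts out of nondeterminism).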

zlsh80826 commented 1 week ago

/te-ci paddle

jeng1220 commented 6 days ago

@Wong4j will implement NVTE_ALLOW_NONDETERMINISTIC_ALGO in this PR.

zlsh80826 commented 5 days ago

/te-ci paddle

zlsh80826 commented 21 hours ago

/te-ci paddle

jeng1220 commented 20 hours ago

LGTM

jeng1220 commented 7 hours ago

@cyanguwa Could you please merge the code if everything looks fine?