Open Wong4j opened 1 week ago
Hi @Wong4j, could you use the same environment variable name as in PyTorch please: NVTE_FUSED_ATTN_FORCE_WORKSPACE_OPT? Just so we are more consistent across TE.
Thanks.
@cyanguwa The environment variable names are already consistent; I copied this part from PyTorch.
The only difference is that in Paddle, self.deterministic is turned on via self.deterministic = bool(int(os.getenv("FLAGS_cudnn_deterministic", "0"))), which is the way customers are used to doing it.
In PyTorch, it's self.deterministic = (not bool(int(os.getenv("NVTE_ALLOW_NONDETERMINISTIC_ALGO", "1"))) or torch.are_deterministic_algorithms_enabled())
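The two conventions quoted above can be sketched side by side as a minimal, self-contained snippet. The function names are hypothetical and the PyTorch framework check is passed in as a plain boolean instead of calling torch.are_deterministic_algorithms_enabled(), so only the env-var logic from the comment is shown:

```python
def paddle_deterministic(env):
    # Paddle convention: FLAGS_cudnn_deterministic=1 turns determinism on
    # (off by default).
    return bool(int(env.get("FLAGS_cudnn_deterministic", "0")))

def pytorch_deterministic(env, framework_deterministic=False):
    # PyTorch convention: NVTE_ALLOW_NONDETERMINISTIC_ALGO=0 turns determinism
    # on; nondeterministic algorithms are allowed by default. The framework's
    # own deterministic mode can also force it on.
    return (not bool(int(env.get("NVTE_ALLOW_NONDETERMINISTIC_ALGO", "1")))
            or framework_deterministic)
```

Note the inverted defaults: Paddle opts *in* to determinism, while PyTorch opts *out* of nondeterminism.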
/te-ci paddle
@Wong4j will implement NVTE_ALLOW_NONDETERMINISTIC_ALGO in this PR.
/te-ci paddle
/te-ci paddle
LGTM
@cyanguwa, could you please merge the code if everything looks fine?
Description
Customers need an option to enable deterministic attention. Their usual setting is FLAGS_cudnn_deterministic=1. So, when the user sets FLAGS_cudnn_deterministic=1, workspace optimization is enabled.
Fixes # (issue)
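As a rough sketch of the behavior this description implies (the helper name is hypothetical; the actual PR wires this into the Paddle attention module rather than a standalone function):

```python
import os

def force_workspace_optimization():
    # Hypothetical helper: when the user sets FLAGS_cudnn_deterministic=1,
    # fused-attention workspace optimization is enabled so that cuDNN can
    # select deterministic kernels.
    deterministic = bool(int(os.getenv("FLAGS_cudnn_deterministic", "0")))
    return deterministic
```

This keeps the user-facing knob identical to the one Paddle users already set for cuDNN determinism.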
Type of change
Changes
Please list the changes introduced in this PR:
Checklist: