pytorch / torchtitan

A native PyTorch Library for large model training

numerical difference for SDPA between non-dtensor vs dtensor, when math attention and fp16 are used #317

Open · tianyu-l opened this issue 4 months ago

tianyu-l commented 4 months ago

A higher loss (9.5602 vs. 9.3164) was observed for the DTensor case after 10 steps on the llama2 debug model. This happens even without applying rotary embedding, which rules out the complex number multiplication issue mentioned in #267.

Note: to apply math attention with DTensor, one needs to set `_allow_implicit_replication` to true (because a non-DTensor attention mask will be generated when `is_causal=True` for SDPA).
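
For context, a minimal sketch of this setup (not torchtitan's actual code) might look like the following. It assumes the `sdpa_kernel` / `SDPBackend.MATH` context manager from `torch.nn.attention` to force the math backend, and the experimental `implicit_replication` context manager from `torch.distributed._tensor.experimental` as a stand-in for flipping the internal `_allow_implicit_replication` flag; exact module paths vary across PyTorch versions.

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel
# Experimental API; its location may differ across PyTorch versions.
from torch.distributed._tensor.experimental import implicit_replication


def math_sdpa_fp16(q, k, v):
    """Run SDPA with the math backend in fp16.

    When q/k/v are DTensors and is_causal=True, SDPA materializes a plain
    (non-DTensor) causal mask, so implicit replication must be allowed for
    the mixed DTensor/Tensor op to dispatch.
    """
    with sdpa_kernel(SDPBackend.MATH), implicit_replication():
        return F.scaled_dot_product_attention(
            q.half(), k.half(), v.half(), is_causal=True
        )
```

Comparing the forward output of such a function for plain fp16 tensors against DTensor inputs (gathered back with `full_tensor()`) is one way to narrow down whether the divergence already appears in the forward pass.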

This issue doesn't seem to be urgent, as math attention is only a fallback option for flash attention and memory-efficient attention.

kwen2501 commented 3 months ago

Is the numeric difference seen in backward only or in forward too?