cli99 / llm-analysis

Latency and Memory Analysis of Transformer Models for Training and Inference
Apache License 2.0
343 stars · 40 forks

add sharded data parallel all gather time estimation #15

Closed cli99 closed 11 months ago

cli99 commented 11 months ago

When using FSDP `shard_grad_op` and `full_shard`, model weights are unsharded via all-gather before being used for computation. This PR adds a time estimate for that all-gather. If the all-gather time exceeds the layer compute time, the all-gather cannot be fully overlapped with compute. The estimate relies on intra-node and inter-node memory bandwidth efficiency, which depends on the size of the data to transfer: 0.8 is a good estimate for data sizes > 128 MB; for smaller data sizes, set the efficiency to a lower value.
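As a rough illustration of the estimation described above, here is a minimal sketch. The function and parameter names (`allgather_bandwidth_efficiency`, `estimate_allgather_time`) and the small-transfer efficiency value are assumptions for this example, not the PR's actual API; the 0.8 efficiency for transfers above 128 MB and the overlap comparison follow the description in the PR.

```python
def allgather_bandwidth_efficiency(num_bytes: int) -> float:
    """Bandwidth efficiency as a function of transfer size.

    0.8 is a good estimate for transfers larger than 128 MB (per the PR
    description); the value used for smaller transfers here is assumed.
    """
    if num_bytes >= 128 * 1024**2:
        return 0.8
    return 0.5  # assumed lower efficiency for small transfers


def estimate_allgather_time(weight_bytes: int, num_shards: int,
                            link_bandwidth_Bps: float) -> float:
    """Estimated time (seconds) to all-gather sharded weights.

    A ring all-gather moves (n-1)/n of the full tensor over each link,
    at the size-dependent effective bandwidth.
    """
    payload = weight_bytes * (num_shards - 1) / num_shards
    eff = allgather_bandwidth_efficiency(weight_bytes)
    return payload / (link_bandwidth_Bps * eff)


# Example: 256 MB of layer weights sharded 8 ways over a 100 GB/s link.
t_allgather = estimate_allgather_time(256 * 1024**2, 8, 100e9)

# The PR's overlap criterion: the all-gather hides behind compute only
# if it finishes within the layer's compute time.
layer_compute_time = 5e-3  # seconds, illustrative
fully_overlapped = t_allgather <= layer_compute_time
```

The comparison at the end mirrors the PR's check: when `t_allgather` exceeds the layer compute time, the excess shows up as exposed communication time in the latency estimate.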

codecov-commenter commented 11 months ago

Codecov Report

Attention: 6 lines in your changes are missing coverage. Please review.

Comparison is base (21d9fa3) 80.33% compared to head (60fc071) 80.64%. Report is 2 commits behind head on main.

| Files | Patch % | Lines |
|---|---|---|
| llm_analysis/analysis.py | 88.23% | 6 Missing :warning: |


Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main      #15      +/-   ##
==========================================
+ Coverage   80.33%   80.64%   +0.30%
==========================================
  Files           8        8
  Lines        1083     1121      +38
==========================================
+ Hits          870      904      +34
- Misses        213      217       +4
```

:umbrella: View full report in Codecov by Sentry.