Closed: vkuzo closed this pull request 4 months ago.
@vkuzo has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
This pull request has been merged in pytorch-labs/float8_experimental@ec8b46cda737cb72c0769eba42341edf50111e22.
Stack from ghstack (oldest at bottom):
Summary:
For the matmul benchmarks, this unbreaks them: the scales need to be fp32 tensors, not integers (see the first sketch below).
For the linear benchmarks, this aligns the default settings with the current best supported path (compile on, dynamic scaling); see the second sketch below.
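The constraint behind the matmul fix, as a minimal standalone sketch (not the PR's benchmark code); it assumes a recent PyTorch `torch._scaled_mm` signature and a CUDA device with float8 support (e.g. H100):

```python
import torch

# Minimal sketch of the scale-dtype requirement for a float8 matmul.
M, K, N = 64, 128, 256
a = torch.randn(M, K, device="cuda").to(torch.float8_e4m3fn)
# mat2 must be column-major for the float8 cuBLAS kernel
b = torch.randn(N, K, device="cuda").to(torch.float8_e4m3fn).t()

# scales must be fp32 tensors; integer scales break the call
scale_a = torch.tensor(1.0, device="cuda", dtype=torch.float32)
scale_b = torch.tensor(1.0, device="cuda", dtype=torch.float32)

out = torch._scaled_mm(a, b, scale_a, scale_b, out_dtype=torch.bfloat16)
print(out.shape, out.dtype)  # torch.Size([64, 256]) torch.bfloat16
```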
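And a rough sketch of the path the new linear-benchmark defaults target (compile on, dynamic scaling). `swap_linear_with_float8_linear` is float8_experimental's module-swap entry point, but its exact signature has shifted across versions, so treat the call below as an assumption rather than the benchmark's actual code:

```python
import torch
import torch.nn as nn

# Assumed import path; some versions require extra arguments
# (e.g. a module class or scaling-type config) in the swap call.
from float8_experimental.float8_linear_utils import swap_linear_with_float8_linear

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))
model = model.cuda().to(torch.bfloat16)

# swap nn.Linear -> float8 linear; dynamic scaling assumed to be the default
swap_linear_with_float8_linear(model)

# compile on, matching the new benchmark default
model = torch.compile(model)

x = torch.randn(16, 1024, device="cuda", dtype=torch.bfloat16, requires_grad=True)
model(x).sum().backward()
```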
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
Differential Revision: D59877198