Closed — vkuzo closed this pull request 3 months ago
@vkuzo has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
recreated in https://github.com/pytorch-labs/float8_experimental/pull/276 to get around ghstack weirdness
Stack from ghstack (oldest at bottom):
- #273
- #272
Summary:
The mixin was originally used to share code with the Float8 versions of RowParallelLinear and ColParallelLinear. Since those were moved to DTensor, the mixin is no longer needed. Removing it simplifies the code in preparation for upcoming delayed scaling improvements.
In addition, this makes the from_float conversion use the meta device to speed it up.
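A minimal sketch of the meta-device pattern referenced above (illustrative only, not the actual float8_experimental API): constructing the replacement module under `torch.device("meta")` skips real memory allocation and weight initialization, and the original parameters are then swapped in, which makes a from_float-style conversion cheap.

```python
import torch
import torch.nn as nn

def convert_linear(mod: nn.Linear) -> nn.Linear:
    # Build the new module on the meta device: no real memory is
    # allocated and no initialization kernels actually run.
    with torch.device("meta"):
        new_mod = nn.Linear(
            mod.in_features, mod.out_features, bias=mod.bias is not None
        )
    # Reuse the original parameters instead of re-initializing,
    # so the converted module holds real (non-meta) tensors.
    new_mod.weight = mod.weight
    new_mod.bias = mod.bias
    return new_mod

orig = nn.Linear(8, 4)
converted = convert_linear(orig)
```

The speedup comes from never materializing throwaway weights that would be overwritten immediately after construction.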
Test Plan:
Reviewers:
Subscribers:
Tasks:
Tags:
Differential Revision: D58396872