oscarandersson8218 closed this pull request 2 months ago.
Note: Links to docs will display an error until the docs builds have been completed.
There are 2 currently active SEVs. If your PR is affected, please view them below:
linux.4xlarge.nvidia.gpu instances
As of commit da256dc23d7024b81f32a5e6f30ce30404c8990f with merge base dd7fa6a9d2efcae4f876d1ed08147a0e82ff024d: :green_heart: Looks good so far! There are no failures yet. :green_heart:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
I will take a look tomorrow.
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
It's not clear to me whether this is really making use of the dim-order utils. Is this in preparation for that?
Yes, this is the intermediate step. See the conversation on Slack: https://pytorch.slack.com/archives/C01FV3A914N/p1719838104994989?thread_ts=1718355881.095489&cid=C01FV3A914N
@digantdesai merged this pull request in pytorch/executorch@d3c92de23f33fe58752e957c0a787a5b44c21191.
Removes the temporary memory-format fix introduced in https://github.com/pytorch/executorch/pull/2371. The dim order of each node is now annotated in a pass. Also includes some refactoring of arm_backend.py.
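For context, here is a minimal sketch of what a pass that annotates each node's dim order might look like. This is not the PR's actual implementation: the function name `annotate_dim_order` and the `"dim_order"` meta key are illustrative assumptions. The real parts are `torch.fx` graph traversal and the export convention that `node.meta["val"]` holds a FakeTensor describing the node's output.

```python
# Sketch only: assumes export-style fx graphs where node.meta["val"]
# carries a (Fake)Tensor; `annotate_dim_order` and the "dim_order"
# meta key are hypothetical names, not ExecuTorch APIs.
import torch
from torch.fx import GraphModule


def annotate_dim_order(gm: GraphModule) -> GraphModule:
    for node in gm.graph.nodes:
        val = node.meta.get("val")
        if isinstance(val, torch.Tensor):
            # Dim order = dims sorted from largest to smallest stride,
            # i.e. outermost to innermost in memory. For a contiguous
            # NCHW tensor this is (0, 1, 2, 3); for channels-last NHWC
            # memory format it is (0, 2, 3, 1). Ties (e.g. size-1 dims)
            # keep their original order because sorted() is stable.
            strides = val.stride()
            node.meta["dim_order"] = tuple(
                sorted(range(val.dim()), key=lambda d: strides[d], reverse=True)
            )
    return gm
```

Downstream code (e.g. a backend lowering step) could then read `node.meta["dim_order"]` instead of re-deriving the memory format at each use site, which is the kind of cleanup the temporary fix from #2371 was standing in for.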