Closed — dulinriley closed this 1 week ago
This pull request was exported from Phabricator. Differential Revision: D56894215
This pull request has been merged in pytorch/executorch@b93b7ae4ad00ac15ab5ade347fa0d2ce5756e32e.
Summary: Some users of `constant_prop_pass` want to fold across calls to `full`, because representing a tensor as a program constant is a requirement for some backends. This came up when writing tests that use `torch.ones` as a weight tensor, which is represented as `aten.full` in Edge Dialect.

When the user specifies a custom skip set, do not add the default `aten.full` entry, in case the user doesn't want it.

Differential Revision: D56894215