pytorch / executorch

On-device AI across mobile, embedded and edge for PyTorch
https://pytorch.org/executorch/

Explicitly pass buffer sizes during memory planning when control flow submodules are present #6840

Open sxu opened 1 week ago

sxu commented 1 week ago

Summary: It's less error-prone to pass the buffer sizes as a parameter and return value than to update them implicitly via `nonlocal` or a reference stored on the submodule. This also fixes a bug where a new buffer introduced within a submodule was ignored by the top-level `apply_algo` call.

Differential Revision: D65915559
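A minimal sketch of the refactor the summary describes: threading the buffer sizes through each planning call as an explicit argument and return value, instead of mutating shared state via `nonlocal` or an attribute on the submodule. The names `apply_algo`, `plan_submodule`, and `bufsizes` are hypothetical stand-ins, not the actual ExecuTorch API.

```python
# Hypothetical illustration only; the real exir/memory_planning.py code differs.

def plan_submodule(graph, bufsizes):
    """Plan memory for one (sub)module and return the updated sizes,
    rather than mutating state captured from an enclosing scope."""
    for buf_id, size in graph:  # graph: iterable of (buffer_id, size) pairs
        if buf_id >= len(bufsizes):
            # A new buffer introduced inside a submodule now grows the
            # list explicitly, so the top-level caller cannot miss it.
            bufsizes.extend([0] * (buf_id + 1 - len(bufsizes)))
        bufsizes[buf_id] = max(bufsizes[buf_id], size)
    return bufsizes

def apply_algo(top_graph, submodules):
    """Top-level planning: sizes flow in and out of every call explicitly."""
    bufsizes = plan_submodule(top_graph, [])
    for sub in submodules:
        bufsizes = plan_submodule(sub, bufsizes)
    return bufsizes
```

Because each call returns the sizes it computed, a buffer first seen in a control-flow submodule is reflected in the final result instead of being silently dropped.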

pytorch-bot[bot] commented 1 week ago

:link: Helpful Links

:test_tube: See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6840

Note: Links to docs will display an error until the docs builds have been completed.

:heavy_exclamation_mark: 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

:x: 1 New Failure

As of commit e5e7008761fd5866966640a75498e4cbaddd01c2 with merge base 7b03a8b2249699d7f547e3101d30964af5f007ba (image):

NEW FAILURE - The following job has failed:

* [Lint / lintrunner / linux-job](https://hud.pytorch.org/pr/pytorch/executorch/6840#32957248384) ([gh](https://github.com/pytorch/executorch/actions/runs/11827998485/job/32957248384)) `>>> Lint for exir/memory_planning.py:`

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot commented 1 week ago

This pull request was exported from Phabricator. Differential Revision: D65915559