Open sxu opened 1 week ago
Note: Links to docs will display an error until the docs builds have been completed.
There is 1 currently active SEV. If your PR is affected, please view it below:
As of commit e5e7008761fd5866966640a75498e4cbaddd01c2 with merge base 7b03a8b2249699d7f547e3101d30964af5f007ba:
* [Lint / lintrunner / linux-job](https://hud.pytorch.org/pr/pytorch/executorch/6840#32957248384) ([gh](https://github.com/pytorch/executorch/actions/runs/11827998485/job/32957248384)) `>>> Lint for exir/memory_planning.py:`
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D65915559
Summary: It's less error-prone to have the buffer sizes passed as a parameter and return value than implicitly updated via `nonlocal` or a reference stored on the submodule. Also fixes a bug where a new buffer introduced within a submodule gets ignored by the top-level `apply_algo` call.

Differential Revision: D65915559
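A minimal sketch of the design point in the summary, not the actual exir/memory_planning.py code: the function and variable names (`apply_algo_implicit`, `apply_algo_explicit`, `plan_module`, `bufsizes`) are hypothetical. It contrasts buffer sizes mutated through a `nonlocal` closure with sizes threaded explicitly as a parameter and return value, where a submodule's new buffers cannot be silently dropped.

```python
# Hypothetical illustration; names and structure are assumptions, not the
# actual ExecuTorch memory planning implementation.
from typing import List


# Error-prone style: buffer sizes live in an enclosing scope and helpers
# mutate them via `nonlocal`. A helper (or a submodule pass) that forgets
# to record a newly introduced buffer silently drops it.
def apply_algo_implicit(modules: List[List[int]]) -> List[int]:
    bufsizes: List[int] = []

    def plan_module(module: List[int]) -> None:
        nonlocal bufsizes
        for size in module:
            bufsizes.append(size)  # easy to miss when a submodule adds buffers

    for m in modules:
        plan_module(m)
    return bufsizes


# Style the summary argues for: sizes flow in as a parameter and out as a
# return value, so buffers introduced inside a submodule are always visible
# to the top-level call.
def apply_algo_explicit(modules: List[List[int]]) -> List[int]:
    def plan_module(module: List[int], bufsizes: List[int]) -> List[int]:
        return bufsizes + list(module)

    bufsizes: List[int] = []
    for m in modules:
        bufsizes = plan_module(m, bufsizes)
    return bufsizes


if __name__ == "__main__":
    # Sizes "allocated" by a top-level module and by a submodule.
    mods = [[64, 128], [256]]
    assert apply_algo_explicit(mods) == [64, 128, 256]
```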